Designing Elixir Systems With OTP – Part Seven

I have finally finished working through the example in the book. (94529b1)

This is a great introduction to OTP, covering some details that I have not seen elsewhere. In particular, the ability to import an OTP project without having it start automatically will be useful in future work.
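As a sketch of one way to do this (the `runtime: false` option is standard Mix; the project names are from the book's example, and the book may use a different mechanism), you can mark the dependency so its OTP application is not started with yours:

```elixir
# mix.exs of the consuming project (sketch, names from the book's example)
defp deps do
  [
    # runtime: false stops Mix from starting the dependency's
    # OTP application automatically; we start it ourselves when needed.
    {:mastery_persistence, path: "../mastery_persistence", runtime: false}
  ]
end
```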

The last few chapters do seem rushed, and the supplied example code does not quite match what you end up with if you type it in as you go. However, it is a great example of building a domain model in Elixir and then adding the external links later. This would count as Hexagonal Architecture.

This is more of a training course than a book. You would have trouble simply reading it through without working on the exercises, which take a significant amount of time.

Designing Elixir Systems With OTP – Part Six

We are now onto writing the outside in tests.

I have got as far as the first test, which is failing with authentication errors.

[error] Postgrex.Protocol (#PID<0.251.0>) failed to connect: ** (Postgrex.Error) FATAL 28P01 (invalid_password) password authentication failed for user "chris*****"

`iex -S mix` came to the rescue here:

h Ecto.Adapters.SQL.Sandbox.checkout

Checks a connection out for the given repo.

The process calling checkout/2 will own the connection until it calls checkin/2
or until it crashes, at which point the connection will be automatically reclaimed by
the pool.

## Options

  • :sandbox - when true the connection is wrapped in a transaction.
    Defaults to true.
  • :isolation - set the query to the given isolation level.
  • :ownership_timeout - limits how long the connection can be owned.
    Defaults to the value in your repo config in config/config.exs (or
    preferably in config/test.exs), or 60000 ms if not set. The timeout exists
    for sanity checking purposes, to ensure there is no connection leakage, and
    can be bumped whenever necessary.

This was the important clue that I had not got my test database configured correctly.

This is the new version of config/test.exs:

use Mix.Config

config :mastery_persistence, MasteryPersistence.Repo,
  database: "mastery_test",
  hostname: "localhost",
  port: 54320,
  pool: Ecto.Adapters.SQL.Sandbox,
  username: "postgres",
  password: "postgres"

config :mastery, :persistence_fn, &Mastery.Persistence.record_response/2

I had “forgotten” the username here.

I now have the persistence tests implemented: (dc6c5bf)
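For reference, the sandbox pattern these tests rely on looks roughly like this (a sketch based on the Ecto docs; the file locations are the conventional ones, not necessarily exactly what the book uses):

```elixir
# test/test_helper.exs — put the repo into manual sandbox mode
ExUnit.start()
Ecto.Adapters.SQL.Sandbox.mode(MasteryPersistence.Repo, :manual)

# in each test module, check a connection out per test so every
# test runs in its own transaction and is rolled back afterwards
defmodule MasteryPersistenceTest do
  use ExUnit.Case

  setup do
    :ok = Ecto.Adapters.SQL.Sandbox.checkout(MasteryPersistence.Repo)
  end
end
```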

Designing Elixir Systems With OTP – Part Five

I am now onto chapters 7 – 9, which deal with OTP. This is the really interesting part of the book. I am unsure whether my sample code is accurate, as there are very few tests and the supplied sample code has diverged from what you enter if you are following along. I can’t tell if I have made a mistake or if the supplied code is wrong (my suspicion is a little of both). This has taught me a lot about debugging Elixir applications, which will be useful. What looked like a cryptic error message was actually precise and detailed: the GenServer had received a map where it was expecting an atom. Adding guard clauses is a great way to flush out these problems.
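As a sketch of that technique (the module name and message shape here are illustrative, not the book's actual code), a guard on the `handle_call/3` clause makes a malformed message fail loudly at the server boundary:

```elixir
defmodule QuizServer do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, %{})

  @impl true
  def init(state), do: {:ok, state}

  # The guard rejects anything that is not an atom, so a caller
  # passing a map fails here with a clear FunctionClauseError
  # rather than deep inside the business logic.
  @impl true
  def handle_call({:select_question, category}, _from, state)
      when is_atom(category) do
    {:reply, {:ok, category}, state}
  end
end

{:ok, pid} = QuizServer.start_link([])
{:ok, :maths} = GenServer.call(pid, {:select_question, :maths})
```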

The book is written as a long exercise where you add code snippets to your application as you go. Given that I am using an epub format I can’t simply cut and paste the code, so I am typing it in. This is interesting, since epub adds a trailing hyphen when wrapping lines, which takes a while to get used to. This has a certain error rate, so I frequently use `iex -S mix` to check for typos. This works well for sections where the required pieces are defined first. These later chapters were written outside-in (a technique that I like, but normally backed by tests), so this does not help.

One of the downsides of these long exercises is that it can be hard to find a mistake in a snippet that has been updated several times.

The database additions have slowed me down, as I don’t currently have a working local Postgres installation. The database samples completely miss out the required credentials, but this information is available elsewhere.

Here is the docker-compose.yml that I got to work (note that the port is marked host:container):

version: "3"

services:
  db:
    image: "postgres:11"
    environment:
      - POSTGRES_PASSWORD=postgres
    container_name: "my_postgres"
    ports:
      - "54320:5432"
    volumes:
      - my_dbdata:/var/lib/postgresql/data

volumes:
  my_dbdata: {}

In practice we would use a better username and password. I can have a portable database setup by using: `docker-compose up -d db`

Here is the config that I used to get the database setup:

use Mix.Config

config :mastery_persistence, MasteryPersistence.Repo,
  database: "mastery_dev",
  hostname: "localhost",
  port: 54320,
  username: "postgres",
  password: "postgres"

The coverage of OTP fails to mention the default 5-second timeout built into OTP requests. It does cover using timeouts to handle late responses, and has a good explanation of the use of via tuples.
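As a sketch of via naming (the registry name and key here are my own, not the book's), a process registered through `Registry` can be addressed by name rather than by pid:

```elixir
# Registry ships with Elixir; start one that enforces unique names.
{:ok, _} = Registry.start_link(keys: :unique, name: MyRegistry)

# A :via tuple tells OTP to resolve the name through Registry.
via = {:via, Registry, {MyRegistry, "quiz:maths"}}

{:ok, _pid} = Agent.start_link(fn -> 0 end, name: via)

# Callers use the name instead of the pid. Calls like this also
# accept an explicit timeout; otherwise the 5-second default applies.
Agent.update(via, &(&1 + 1))
1 = Agent.get(via, & &1, 5_000)
```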

Poncho projects are an interesting concept: you create a full mix application in a subdirectory (or a parallel directory). This allows it to be moved into a distinct repository later should that be needed, but with less overhead than an umbrella project.
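Wiring two poncho projects together is then just a path dependency (a sketch; the project names are from the book's example, and the sibling-directory layout is my assumption):

```elixir
# mastery/mix.exs, where mastery_persistence lives in a sibling directory
defp deps do
  [
    {:mastery_persistence, path: "../mastery_persistence"}
  ]
end
```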

Now on to integrating mastery_persistence into mastery. This part looks a bit rushed: the authors ask us to change files that have not yet been created (config/config.exs).

I have worked through the examples and pushed the finished details to:

I am currently at: 6963329

All that remains is the testing chapter and going back and making sure everything works.

Designing Elixir Systems With OTP – Part Four

I am again working through this book.

I started again with the released edition. The major change is that it is now focused on worker bees rather than wildebeests.

So far I have made it to the testing chapter at the end of part one.

It appears that the samples have not been updated to match the latest code.

The testing techniques are really good. Given that variables are immutable then test fixtures really are reusable. The context part of ExUnit is very clear.
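A sketch of the pattern (the field names here are illustrative, not the book's): `setup` returns data that ExUnit merges into each test's context, and because the data is immutable the same fixture can safely be handed to every test:

```elixir
ExUnit.start()

defmodule TemplateTest do
  use ExUnit.Case

  # setup's return value is merged into each test's context map.
  setup do
    [template: %{name: :addition, instructions: "Add the numbers"}]
  end

  test "the fixture arrives via the context", %{template: template} do
    assert template.name == :addition
  end

  test "it cannot have been mutated by another test", %{template: template} do
    assert template.instructions == "Add the numbers"
  end
end
```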

I understand delaying the tests to keep the book to a reasonable length, but even a single-character typo can break functionality and be hard to find later. I had misspelt the key on a lookup and thought that the tests were broken.

Here is the current state at f4d9f35

What is Normal? Part two

This is a follow up to part one

I have made some more progress on this project, and we are now down to 60 errors per day from the previous thousands. Given that we are integrating with over a hundred systems, this is more acceptable. Once a breakthrough was made to clear the highest-frequency problem, the rest were fixed easily.

A lot of the existing log messages lacked context. They typically included what method was being used and the stacktrace. What was missing was enough information to recreate the problem.

My previous claim about not using a debugger no longer stands: I was forced to use one to identify paths through some code that had not been developed via TDD (it had some acceptance tests, but the details were obscured). We are still removing the old Betamax tests – these consist of recordings of a production run that are replayed. This is a good start, but is painful to adapt.

Types of Standup

Most agile teams following some form of Scrum hold a daily standup.

Typically this is held in the morning, at a time when the whole team can be in. I have also heard of end-of-day standups; these work better for teams that work remotely across timezones.

There are several forms of this meeting.

The basic form is the three questions:

– What did you do yesterday?

– What are you going to do today?

– Any blockers?

Another form, which I prefer, is to walk the project board from right to left, discussing all cards that are not yet done. This ensures that the board is up to date, and can also show when someone is working off the plan.

I have also worked on a team that held two standups a day. A full one in the morning and a second quick version after lunch to cover changes.

Using Wardley Maps to Document a Software Architecture

I recently saw a tweet with a diagram using Wardley map value chains to document some software.

This looks to be an interesting way of viewing a set of products.

Given my use of graphviz I thought that I would give it a go.

Here is the repo:

Wardley Map visualisation of a set of software products.

This includes the dot file for the above diagram.

The idea of the value chain is to show the dependencies, from the visible down to the invisible. The horizontal axis has the custom items on the left and the commodity ones on the right.

I will be experimenting with this for a while.

Using Tampermonkey to Customise a Website

This is a demonstration of the power of the Tampermonkey extension to Chrome.

My employer, Codurance, has a page on their website listing the current staff:

Now it would be useful to know how many people we have listed.

If you were to visit that page, select inspect, and type the following into the console of the browser:

$('div.g-max-width-800').length

This will return the count.

This is a start.

The next step is to make this more visible.

Install the Tampermonkey extension into Chrome.

This will add the following icon to your browser:

Tampermonkey icon

Visit the page you want to customise:

Press the tampermonkey icon

Select Create a new script

This will give you a script that looks like this:

// ==UserScript==
// @name New Userscript
// @namespace
// @version 0.1
// @description try to take over the world!
// @author You
// @match
// @grant none
// ==/UserScript==
(function() {
    'use strict';
    // Your code here...
})();
Now replace // Your code here… with:

$('div.u-heading-v2-3--bottom h2').text('The Team (' + $('div.g-max-width-800').length + ')');

Save this and view the page.

When you revisit the page, this will change the label to include a count of how many people we have.

This is a great way to make small changes to a website that you don’t control.

If you want to allow other people to use an updatable script, then you can publish the script at a public URL and use the settings tab to record where it comes from.

This technique is great for customising third party websites that don’t quite do what you want them to. You can even add javascript dependencies into the page if you want to use them.

This results in the heading showing the count (numbers will vary):

Most frequently used commands

I was just thinking about the most frequently used commands that I use on my main development machine. It did not take me long to work out how to automate this process. My history has the last 10000 commands:

history | awk '{print $2}' | sort | uniq -c | sort -rn | head -n 20 | awk '{print $2}'

Here is the output

This is a giveaway that I am a Mac user, and that I work with Groovy, Node and Elixir. I also do some work in Docker, and use a mixture of vim and code.

I’d be interested to see what other people have.