Elixir Dependencies

I was going to write a dependency checker for Elixir using :digraph to model the dependencies.

However while trying to work out how to read the deps from a mix.exs file (without parsing it myself) I found an existing command.

mix deps.tree

This solved the problem that I had, namely identifying why a given dependency is used.
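For reference, this is roughly how the :digraph approach would have looked. A sketch only; the app name and dependency edges below are invented, not read from a real mix.exs:

```elixir
# Model dependencies as a directed graph using Erlang's :digraph module.
graph = :digraph.new()

# Hypothetical app -> dependency edges.
deps = [{:fawkes, :phoenix}, {:phoenix, :plug}, {:plug, :mime}]

for {from, to} <- deps do
  :digraph.add_vertex(graph, from)
  :digraph.add_vertex(graph, to)
  :digraph.add_edge(graph, from, to)
end

# "Why is :mime here?" -- ask for a path from the app to the dependency.
:digraph.get_path(graph, :fawkes, :mime)
# e.g. [:fawkes, :phoenix, :plug, :mime]
```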

The task has a useful switch which, combined with the Graphviz dot utility, renders the tree as an image:

mix deps.tree --format dot && dot -Tpng deps_tree.dot -o deps_tree.png

I updated the Phoenix generator and built a simple Phoenix app, naturally called Fawkes.

This is the dependency tree:

Full dependency graph of a Phoenix application.

JSON Schemas and Validation

I am currently working on an API. The frontend that feeds the API uses GraphQL to send requests to the server.
The frontend code does have a certain amount of validation, but since it is impossible to fully secure a web client, some of the validation needs to be repeated on the server. Anything that the client can do over HTTPS can be automated as an API: the client code delivered to the user contains all the information needed to do this!

Given that I will be receiving a JSON document, it would be useful to have one central place that describes what is expected to be sent. This is where JSON Schemas come in.

The obvious starting point is https://json-schema.org/

This ends up with a document that looks something like:

{
    "$schema": "http://json-schema.org/schema",
    "title": "My Schema",
    "description": "Some description",
    "required": [
        "list_of_things",
        "name",
        "my_object"
    ],
    "type": "object",
    "properties": {
        "list_of_things": {
            "type": "array",
            "items": {
                "$ref": "#/$defs/thing"
            }
        },
        "name": {
            "type": "string"
        },
        "my_object": {
            "$ref": "#/$defs/my_object"
        },
        "age": {
            "type": "integer"
        }
    },
    "$defs": {
        "thing": {
            "type": "object",
            "required": [
                "lines",
                "postcode"
            ],
            "properties": {
                "lines": {
                    "type": "array",
                    "items": {
                        "type": "string"
                    }
                },
                "postcode": {
                    "type": "string"
                }
            }
        },
        "my_object": {
            "type": "object",
            "required": [],
            "properties": {
                "length": {"type": "integer"}
            }
        }
    }
}

Once you have that, you can use https://hex.pm/packages/ex_json_schema to create a validator for the JSON object.

schema =
  File.read!("myschema.json")
  |> Jason.decode!()
  |> ExJsonSchema.Schema.resolve()

ExJsonSchema.Validator.validate(schema, %{"foo" => "bar"})
# Returns this:
{:error,
 [
   {"Required properties list_of_things, name, my_object were not present.",
    "#"}
 ]}

The practical use is that you get a description of what is wrong with the document and an idea of where in the document the error is.

In terms of validation this is a good start: you can find missing fields and fields of the wrong type.
I have yet to make it handle smarter validations (such as "only one item in the list can have the main boolean set").
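Rules like that sit outside what JSON Schema can express, so they can be layered on after schema validation as ordinary functions. A sketch, assuming a hypothetical `"main"` key on each list item:

```elixir
# After schema validation passes, apply business rules the schema cannot express.
defmodule ExtraValidation do
  # Succeeds only when at most one item in the list has "main" => true.
  def at_most_one_main(items) when is_list(items) do
    case Enum.count(items, &(&1["main"] == true)) do
      n when n <= 1 -> :ok
      n -> {:error, "expected at most one item with main set, got #{n}"}
    end
  end
end

ExtraValidation.at_most_one_main([%{"main" => true}, %{"main" => false}])
# => :ok
```

Chaining the schema check and these functions in a `with` expression gives one validation entry point for the endpoint.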

Upgrading an Old Mac Mini

I am trying to upgrade an old Mac Mini that I have had sitting around for a while.
Apple upgrades are cumulative and slow; you need to finish one before the next is allowed.

I just found a strange quirk: the upgraded Mac required me to slow down the monitor's refresh rate, otherwise it would not turn the screen on. It has currently reached version 10, but I need to get it to 12.5.

It’s a two-hour reboot cycle, when it can find a screen.

Mix Test Output Part Two

This is how I fixed the problem of summarising test output across an umbrella project:

  mix test | tee test-logs.log
  echo "=== Unit Test Summary ==="
  grep "==>\| failure\|are no tests" test-logs.log
  rm test-logs.log

This way the last thing in the log on the build server is a summary of the tests above.
It may include some false positives if you are logging too much, but it does include what is needed.

Plantuml-Libs now has a single file version of all of the templates

https://github.com/tmorin/plantuml-libs

It now has single-file includes for all of the templates.

https://raw.githubusercontent.com/tmorin/plantuml-libs/master/distribution/eventstorming/single.puml
@startuml

!include https://raw.githubusercontent.com/tmorin/plantuml-libs/master/distribution/eventstorming/single.puml

' display elements
FacadeCommand('FacadeCommand')
Command('Command')
Result('Result')
Event('Event')
DomainEvent('DomainEvent')
IntegrationEvent('IntegrationEvent')
Query('Query')
ReadModel('ReadModel')
UserInterface('UserInterface')
Aggregate('Aggregate')
Service('Service')
Policy('Policy')
Saga('Saga')
Process('Process')
Timer('Timer')
Person('Person')
System('System')
Comment('Comment')

@enduml

This now works for all of the templates; have a look.

This is somewhat ironic, as earlier today Simon Brown posted about how complex using PlantUML for architecture diagrams can become.

New Mix Archive gen_docker_db

I have been experimenting with creating custom mix tasks and publishing them as archives.
The project is here: https://github.com/chriseyre2000/gen_docker_db/

The first release is here: https://github.com/chriseyre2000/gen_docker_db/releases/tag/0.1.0

If you download the archive and run mix archive.install gen_docker_db-0.1.0.ez, it adds a new mix task: gen_docker_db.

This task echoes the command that you need to create a Docker container for Postgres.

It is entirely insecure and uses a default password, but it allows you to quickly stand up a local Docker container for Phoenix development.
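For context, the echoed command is along these lines. This is a sketch of the general shape, not the task's exact output; the container name, password, and port mapping here are assumptions:

```shell
# Stand up a throwaway Postgres container for local Phoenix development.
# WARNING: default password, for local use only.
docker run --name phoenix_db \
  -e POSTGRES_PASSWORD=postgres \
  -p 5432:5432 \
  -d postgres
```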

Evolving a CQRS ES Model

Here is a video I found on the topic:

https://www.youtube.com/watch?v=04_esxp8C_o

The video covers some of the basics including how to version, reproject, handle “Errors” and mask invalid data.

Here is an article on Evolving Data Warehouses:

https://www.sqlshack.com/implementing-slowly-changing-dimensions-scds-in-data-warehouses/

I have a theory that the two topics are at least similar enough to consider using across the two domains.

Kubernetes and Elixir Part 2

It appears that the series I linked to stopped after the first article.

Here is my plan for the weekend:

– build a generator to create the docker file for an elixir app.

– build a generator to build the skeleton for a Kubernetes/Helm setup

– deploy this to minikube

– establish how to network the nodes together within k8s

– establish and document how to connect to this from outside k8s using both iex and livebook

– work out how to deploy to a pod running inside k8s
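For the first item, the generator might emit a standard multi-stage release build. A sketch under assumptions: the image tags, packages, and the app name `my_app` are all placeholders, not the generator's actual output:

```dockerfile
# Build stage: fetch deps and compile a release.
FROM elixir:1.14-alpine AS build
WORKDIR /app
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
COPY . .
RUN mix release

# Runtime stage: a minimal image containing only the release.
FROM alpine:3.17
RUN apk add --no-cache libstdc++ ncurses-libs openssl
COPY --from=build /app/_build/prod/rel/my_app /opt/my_app
CMD ["/opt/my_app/bin/my_app", "start"]
```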

If possible, I will experiment with the build server system from the link in the previous article. It would be good to use this to build something that could be used to test GitHub Actions locally, or to recreate a Heroku environment.

A recent Thinking Elixir podcast talked about per PR environments. This could be achieved with Kubernetes. A previous client had attempted something like this.

Kubernetes and Elixir

This is a series of articles that cover using Elixir in Kubernetes:

https://david-delassus.medium.com/elixir-and-kubernetes-a-love-story-721cc6a5c7d5

This uses the following github repo:

github.com/linkdd/elixir-k8s-love-story

The difference between this series and most others is that it covers building a Kubernetes setup assuming that you have a BEAM cluster, rather than just a set of Go applications that require elections.

The programming model for the BEAM is very different. The model is closer to an operating system than a single machine.

I have not yet worked through it, but will study it.

Here is another article on clustering on Kubernetes https://mbuffa.github.io/tips/20201022-elixir-clustering-on-kubernetes/
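As an aside, the usual building block for BEAM clustering on Kubernetes is the libcluster library. A minimal configuration sketch; the strategy choice, service name, and application name below are assumptions for illustration:

```elixir
# config/runtime.exs (sketch): discover peer BEAM nodes via the DNS records
# of a Kubernetes headless Service, using libcluster.
import Config

config :libcluster,
  topologies: [
    k8s: [
      strategy: Cluster.Strategy.Kubernetes.DNS,
      config: [
        service: "myapp-headless",  # headless Service name (assumption)
        application_name: "myapp"   # basename of each node's name
      ]
    ]
  ]
```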