Craft GraphQL APIs in Elixir with Absinthe Part Four

Still working through the examples.

Project is at https://github.com/chriseyre2000/absinthe_demo

Currently at df7c52f

Again there is a minor difference in the error message format, and the error is returned with an HTTP 200 rather than a 400.

It’s interesting to see custom modification.


Now at the end of chapter 3.

Chapter 4 introduces some useful extra structure, allowing the schema to be broken down into manageable pieces.
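As a rough sketch of what that looks like (a minimal example with placeholder module and field names, not the book's code), an Absinthe schema can be split across modules with import_types and import_fields:

defmodule MyAppWeb.Schema.MenuTypes do
  use Absinthe.Schema.Notation

  object :menu_item do
    field :id, :id
    field :name, :string
  end

  # A named object holding query fields that the root query can pull in.
  object :menu_queries do
    field :menu_items, list_of(:menu_item) do
      resolve fn _parent, _args, _resolution -> {:ok, []} end
    end
  end
end

defmodule MyAppWeb.Schema do
  use Absinthe.Schema

  # Pull the types defined in the other module into this schema.
  import_types MyAppWeb.Schema.MenuTypes

  query do
    # Splice the :menu_queries fields into the root query object.
    import_fields :menu_queries
  end
end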

The next step will cover unions.

VS Code Custom Settings

These are my current VS Code settings:

{
    "files.autoSave": "afterDelay",
    "diffEditor.ignoreTrimWhitespace": false,
    "git.openDiffOnClick": false
}

It’s not much but it is a start.

Autosave is a must.
Opening the file from a diff is much more sensible than assuming you want the diff view.

Protobuf in Elixir

Protox is a great library for working with Protobuf in Elixir.
While developing a span catcher for OpenTelemetry I found that I needed to decode a Protobuf-format message.

Here is the repo if you are interested:

https://github.com/chriseyre2000/span_eater

To construct the protobuf file I looked at the readme of opentelemetry_exporter, which pointed me to the protobuf definitions:

https://github.com/open-telemetry/opentelemetry-proto/tree/v0.11.0

I chose to simplify these into a single file (it’s not that large).
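As a sketch of how that single file gets used (the file path and message name below are placeholders, and I am assuming Protox's standard use/decode API rather than showing the actual span_eater code), the definitions are compiled into Elixir structs at build time:

defmodule SpanEater.Proto do
  # Compile the combined .proto file into Elixir structs at compile time.
  # The path is a placeholder for wherever the simplified definitions live.
  use Protox, files: ["./priv/opentelemetry.proto"]
end

defmodule SpanEater.Decoder do
  # Protox names generated modules after the proto package, so the OTLP
  # export message (assumed here) ends up under Opentelemetry.Proto.*.
  # Returns {:ok, struct} or {:error, reason}.
  def decode_export(binary) when is_binary(binary) do
    Opentelemetry.Proto.Collector.Trace.V1.ExportTraceServiceRequest.decode(binary)
  end
end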

Protobuf is a wire-format serializer. This means that if you send a Protobuf message, any language with a Protobuf implementation can read it (provided that both sides have the definition).

Given that this proto file has been implemented by a number of OpenTelemetry consumers it can be assumed to be stable.

Working with OpenTelemetry in Elixir

I have created two GitHub projects to demonstrate using OpenTelemetry (admittedly badly).

The first project, https://github.com/chriseyre2000/something_to_measure, is a simple GenServer that exposes a function for generating OpenTelemetry spans.

The second project, https://github.com/chriseyre2000/span_eater, is a simple GenServer that consumes spans on the default port that OpenTelemetry sends them to.

In production you would typically have a sidecar application (such as the OpenTelemetry Collector) to capture and rebroadcast the messages.

In OpenTelemetry terms a span is a time interval during which a given piece of work ran. Spans can be nested, so an observability tool can capture them and construct a visualisation of what was happening. Spans are more useful than raw log data because they carry a controlled, structured meaning that would otherwise need to be inferred.
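For example, using the standard opentelemetry_api package (this is a generic sketch rather than the code in something_to_measure), nested spans can be produced like this:

defmodule SomethingToMeasure.Work do
  require OpenTelemetry.Tracer, as: Tracer

  def run do
    # The outer span covers the whole unit of work...
    Tracer.with_span "do_work" do
      prepare()

      # ...and the inner span is nested inside it, so a tracing UI can show
      # how long the sub-step took relative to the whole operation.
      Tracer.with_span "expensive_step" do
        Process.sleep(100)
      end
    end
  end

  defp prepare, do: :ok
end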

span_eater currently just logs that it has received the message. I am planning to make it more sophisticated, and then build a Livebook to host it in. Currently it is mainly useful for getting rid of the log messages that otherwise get generated:

[info]  client error exporting spans {:failed_connect,
 [{:to_address, {'localhost', 4318}}, {:inet, [:inet], :econnrefused}]}

These messages typically flood the logs of a locally run application that is instrumented to publish OpenTelemetry data.

I have just worked out how to decode the Protobuf data sent over the wire. We can now listen to OpenTelemetry messages sent by our local machine.
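As an illustration of the listening side (a sketch only, not span_eater's actual code: the Plug dependency, the module names and the logging are assumptions, though 4318 and /v1/traces are the OTLP/HTTP defaults), a minimal receiver could look like this:

defmodule SpanListener do
  use Plug.Router

  plug :match
  plug :dispatch

  # OTLP/HTTP posts protobuf-encoded trace exports to /v1/traces.
  post "/v1/traces" do
    {:ok, body, conn} = Plug.Conn.read_body(conn)

    # Assumes a Protox-generated module for the OTLP export request message.
    case Opentelemetry.Proto.Collector.Trace.V1.ExportTraceServiceRequest.decode(body) do
      {:ok, request} -> IO.inspect(request, label: "received spans")
      {:error, reason} -> IO.inspect(reason, label: "failed to decode")
    end

    send_resp(conn, 200, "")
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end

# Started under the application supervisor, e.g.
# {Plug.Cowboy, scheme: :http, plug: SpanListener, options: [port: 4318]}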

Thoughts on Configuration and Supervisors

On a previous project I worked with Heroku.
Heroku has a great setup for applications: you deploy to a git repo and have a parallel set of configuration via a UI/API.
If either changes, Heroku redeploys the application.

On a walk this morning I realised that you could recreate some of this behaviour inside your Elixir (or other BEAM based) application.

Supervision Tree

If you have a supervisor with a :one_for_all restart strategy, then when the config watcher notices a change it can simply terminate itself, and the supervisor will restart the worker service that depends upon the configuration along with it.

The config watcher is both the cache for the configuration data and the process that periodically checks for changes.
Keeping the watcher specific to the service that uses it reduces the blast radius of changes.
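A minimal sketch of the idea (the module names, the polling interval and fetch_config/0 are placeholders, not code from a real project):

defmodule ConfigSupervisor do
  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    children = [ConfigWatcher, Worker]

    # :one_for_all means that when ConfigWatcher stops, Worker is restarted
    # as well, so it picks up the new configuration when it boots.
    Supervisor.init(children, strategy: :one_for_all)
  end
end

defmodule ConfigWatcher do
  use GenServer

  @poll_interval :timer.minutes(1)

  def start_link(_), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  # The worker reads the cached configuration from here.
  def get, do: GenServer.call(__MODULE__, :get)

  @impl true
  def init(_) do
    Process.send_after(self(), :poll, @poll_interval)
    {:ok, fetch_config()}
  end

  @impl true
  def handle_call(:get, _from, config), do: {:reply, config, config}

  @impl true
  def handle_info(:poll, config) do
    case fetch_config() do
      ^config ->
        Process.send_after(self(), :poll, @poll_interval)
        {:noreply, config}

      _changed ->
        # Stopping with a non-normal reason takes the whole :one_for_all
        # group down, which is exactly the redeploy-on-config-change effect.
        {:stop, :config_changed, config}
    end
  end

  # Placeholder: fetch configuration from wherever it lives (env, API, file...).
  defp fetch_config, do: Application.get_all_env(:my_app)
end

defmodule Worker do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  # Read the configuration once at startup; a restart picks up the new value.
  @impl true
  def init(_), do: {:ok, ConfigWatcher.get()}
end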

This seems like a different pattern of use to the typical supervisor.

First Steps into K8s

I now need to get a better understanding of Kubernetes.
For the last few years I have been working with Docker, sometimes deployed to ECS, and before that the very simple Heroku setup.

I have just quickly skimmed through Kubernetes: Up and Running.

So far: minikube is a small, single-node Kubernetes setup that can run on a developer's machine.

kubectl is the command line tool used to start/stop/scale things.

K8s has its own DNS server and a host of self-hosted services. I am also aware that Docker is involved.
There are also the important concepts of Pods, DaemonSets and tags.

The next item to read up on is Helm. This seems to be a package manager that improves deploying things to K8s.