Programming Phoenix LiveView Part 1

I have started working through Programming Phoenix LiveView.

These are my notes.

I am currently working on the setup. Since I already had Phoenix installed, it is useful to run the following:

mix archive.install hex phx_new

docker run -d -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:11

These bring the generator up to date and get a minimal version of Postgres running locally.
Don’t use these settings in production, but they make building quick examples easier.
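
To point the generated app at that container, the Repo settings in config/dev.exs need to match. A minimal sketch, assuming a placeholder app called my_app rather than any particular project:

# config/dev.exs
config :my_app, MyApp.Repo,
  username: "postgres",
  password: "postgres",
  hostname: "localhost",
  database: "my_app_dev",
  pool_size: 10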

Exploring the LiveView socket.

iex> Phoenix.LiveView.Socket.__struct__ |> Map.keys
[:__struct__, :assigns, :endpoint, :fingerprints, :host_uri, :id, :parent_pid,
 :private, :redirected, :root_pid, :router, :transport_pid, :view]
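
Most of those keys are internal plumbing; assigns is the one you interact with directly. As a minimal sketch (the module, event and assign names are my own, not from the book), a LiveView seeds socket.assigns in mount/3 and updates it in handle_event/3:

defmodule MyAppWeb.CounterLive do
  use Phoenix.LiveView

  # mount/3 receives the socket and seeds socket.assigns
  def mount(_params, _session, socket) do
    {:ok, assign(socket, :count, 0)}
  end

  # client events update the assigns, which triggers a re-render
  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="inc">Count: <%= @count %></button>
    """
  end
end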

Why Observability

My current employer has just completed an Observability Week trying to raise the quality of how we understand our system.

Properly implemented Observability allows you to see the impact of your changes. It’s great when you can see a cliff in a graph, hopefully going in the right direction.

Observability allows you to answer the question: how do I know that the change has worked, and by how much? In some cases you want one graph to change and another to remain the same. It’s not always possible to show a direct impact, but for bug fixes or work driven by log analysis this is important.

Generating Diagrams in Livebook

I have paused working on my kino_wardley component for a while.

I have been thinking about how to integrate diagram generation into a Livebook.
I came up with this: https://github.com/chriseyre2000/livebooks/blob/main/DynamicDiagrams.livemd

Here is the code:

Mix.install([{:kino, "~> 0.7.0"}])

data = """
digraph architecture {
  rankdir=LR;

  subgraph client_side_apps {
      front_end -> {auth_api, my_app_api};
      extension -> {auth_api, my_app_api};
      
      {rank=same; front_end, extension, auth_api};
  }
  
  subgraph api_gateways {
      my_app_api -> {photos_ms, chats_ms, friends_ms};
  }
  
  subgraph microservices {
      photos_ms -> {database};
      chats_ms -> {database, cache};
      friends_ms -> {database, facebook_api};
  }
}
"""

{body, _} = System.shell("echo '#{data}' | /usr/local/bin/dot -Tpng", stderr_to_stdout: true)
Kino.Image.new(body, :png)

It simply installs Kino and uses the Graphviz command-line tool dot to build a PNG, which is then displayed in a Kino.Image.

This gives the basis for adding diagrams generated from dot or PlantUML straight into a Livebook.
These would be very useful as a LiveCell.

Sample rendered image
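
To make this reusable, the shell call could be wrapped in a small helper. This is only a sketch: the module name is made up and the path to dot is assumed to be the same as above.

defmodule DotRenderer do
  @dot "/usr/local/bin/dot"

  # Pipe a Graphviz source string through dot and wrap the output in a Kino image.
  def render(source, format \\ :png) do
    {output, 0} =
      System.shell("echo '#{source}' | #{@dot} -T#{format}", stderr_to_stdout: true)

    Kino.Image.new(output, format)
  end
end

DotRenderer.render(data)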

Tests > Types

Just because you are using a type system doesn’t mean you can avoid writing a test.

The test may be easy to write, but it adds value.

This is especially true if you are working in a type-erasure system such as TypeScript. Types in TypeScript do not exist at runtime.

Elixir, K8s and Graceful Shutdown

I am now working on a large application that uses k8s (for better or worse).

We have an interesting issue: during deployments there is a time window in which some of our services still receive requests while they are being shut down. This is a consequence of the liveness probe (or one of the probes) only being called periodically, so there is a lag before traffic stops. If the router does not stop sending requests to the server (we are using HTTPS), the only way to ensure that these calls are handled neatly is to have a retry policy for HTTP requests.

It does not matter how well you wire up shutdown notifications; there is still the possibility of a failed call.
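
One hedged sketch of such a policy, written as a plain wrapper rather than any specific HTTP client’s built-in retry option (the module name, attempt count and delays are assumptions):

defmodule Retry do
  # Retry a function that returns {:ok, _} or {:error, _},
  # backing off between attempts. Useful around HTTP calls that
  # may land on a pod which is already shutting down.
  def with_retry(fun, attempts \\ 3, delay_ms \\ 200) do
    case fun.() do
      {:ok, _} = ok ->
        ok

      {:error, _reason} when attempts > 1 ->
        Process.sleep(delay_ms)
        with_retry(fun, attempts - 1, delay_ms * 2)

      {:error, _} = error ->
        error
    end
  end
end

# Usage (hypothetical client call):
# Retry.with_retry(fn -> MyApp.Client.get("/status") end)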

In Favour of Boring Launches

When you are launching a product, or a new version of one, the actual release can be exciting. However, from a development perspective it should be much simpler: turn on a feature flag and check a few logs.

It can be exciting to see the numbers, but the technical process of going live should be as dull as you can make it. You should have tested all of the possible failure modes up front, so that the system knows how to handle them (or tells you via the logs that it needs help).

There will be some small issues with integrations that can’t be tested in advance, especially when a partner’s production system does not work in exactly the same way as their test system.

My employer went live with two partners over the last two days!

The Pros and Cons of Cloud Software as a Service

Over the last few years I have been working with software almost exclusively in the cloud. There has not been a single local server running. In order to do this we use an array of cloud services: databases, content management systems, content delivery networks.

By using these it is possible to operate a system at scale without costing the earth or needing a huge development team. You do still need a development team, as things will always change.

With enough items in your supply chain (you do know your own supply chain, don’t you?) you will find that in any given year at least one of the providers will decide that they don’t want to (or show that they can’t) run a stable, reliable service. Sometimes this is an attempt to move you to another service. Sometimes it is no longer viable for them to run it.

You have a choice at this point. Do you move to a different provider or do you build it yourself? If you do move to a different provider, how similar is it? I had one case of moving a MongoDB database to an AWS MongoDB-compatible service. In this instance the AWS product had the same API for writing data but vastly different performance characteristics, and in one specific case it could not store some data that existed in the old system. This is fun when migrating a database and getting “sorry, I can’t store that due to an embedded unprintable character”.

When you choose a vendor, keep the analysis of the second and third choices; they may be needed later. Always plan an exit strategy. Those fancy platform-specific features that you like so much can be a real pain when the next platform does not have them.

Building it yourself may be an option, but you are now taking on complexities that you didn’t previously have.

kino_wardley 0.8.0 released

Wardley map with three component states

This version adds the three simple states: build, buy and outsource. These can be repurposed to identify important things.

Here are the commands to create the above:

KinoWardley.Output.new("""
component Tea [0.5, 0.5] label [10, 0]
outsource Tea

component Milk [0.4, 0.4] label [10, 0]
buy Milk

component Sugar [0.3, 0.3] label [10, 0]
build Sugar
""")

Domain Storytelling in PlantUML


This image (a Domain Storytelling diagram) can be generated from:

@startuml

!include https://raw.githubusercontent.com/tmorin/plantuml-libs/master/distribution/domainstorytelling/single.puml

Title("What is Domain Storytelling?", "From the book __Domain Storytelling__")

include('domainstorytelling/Actor/User')
include('fontawesome-6/Regular/CommentDots')
include('fontawesome-6/Regular/Image')

User('domain_expert', "domain expert")
User('developer', "developer")
CommentDots('domain_story_a', 'domain story')
Image('domain_story_b', 'domain story')

domain_expert -r-> domain_story_a : Activity("01", "tells")
domain_story_a -r-> developer : Activity("", "to")

developer -d-> domain_story_b : Activity("02", "draws")

domain_expert -d-> domain_story_b : Activity("03", "reads and corrects")

@enduml

Putting this in source control keeps the story in alignment with the code.