Programming Phoenix LiveView Part 3

Now on to chapter 2.

This is where a fast-moving library makes books like this difficult to keep up to date.
phx_gen_auth is now part of Phoenix itself as of 1.6.

However, it is a bit harder to work that out from the docs.
Adding the dependency leads to a mix resolution conflict.

:phx_gen_auth is listed as 0.4 in the book; the latest version available is 0.7.0.
However, that only works with Phoenix 1.5.2, whereas the latest Phoenix release is 1.6.15.

phx_gen_auth has now vanished from hex.pm, and the GitHub project is archived.
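Since the generator now ships with the framework, there is no dependency to add at all. A sketch of the built-in workflow (the context and schema names here follow the book's pento example, so treat them as assumptions):

```shell
# Run the auth generator that ships with Phoenix 1.6+
mix phx.gen.auth Accounts User users

# It adds new dependencies (bcrypt_elixir) to mix.exs, so fetch and migrate
mix deps.get
mix ecto.migrate
```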

Here is my project as it stands at the end of chapter 2:

https://github.com/chriseyre2000/pento

Programming Phoenix LiveView Part 2

Still working on getting started.

I am using phx.new version 1.6.15, which is slightly different from the book.
The mix phx.new command no longer has the --live flag, so some of the expected templates are not there.
This means that page_live needed to be created manually and the route added.
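Since the --live flag is gone, the route has to be added by hand. A minimal sketch of what goes into lib/pento_web/router.ex (the path and action are my own choices, not generated):

```elixir
scope "/", PentoWeb do
  pipe_through :browser

  # Route the root path to the manually created LiveView
  live "/", PageLive, :index
end
```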

This is my current version of that module. It’s probably not what is expected, but it is good enough.

defmodule PentoWeb.PageLive do
  use PentoWeb, :live_view

  def mount(_params, _session, socket) do
    {:ok, assign(socket, query: "123", results: %{})}
  end

  def render(assigns) do
    ~H"""
    <%= @query %>
    """
  end
end

I have also found that you get some exotic errors if you make a typo in an ~L template.

If you type phn-click instead of phx-click you get errors about missing functions.
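For contrast, here is a minimal sketch of the binding wired up correctly (the module and event names are illustrative). The template attribute and the handle_event/3 clause must use the same event name, and a typo silently breaks the link between the two:

```elixir
defmodule PentoWeb.CounterLive do
  use PentoWeb, :live_view

  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="increment">Count: <%= @count %></button>
    """
  end

  # Must match the phx-click event name in the template exactly
  def handle_event("increment", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end
end
```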

Having Observability Is Not Enough

Adding observability to a system is only the beginning. Being able to see what is going on is just the first step.

Now that you have the data, you can start cleaning up errors. Look at the logs and try to clear the most frequent ones. At any given time a handful of errors will dominate the totals, so add detail to break them down into categories.

Once the background errors are out of the way it is much easier to see the impact of any incident or change.

Look for any gaps in the information you have. Is something not instrumented?

Observability gives you the information that you need to make data-driven decisions about your systems. It is almost a conversation with the system, in which you can ask where things are going wrong.

Programming Phoenix LiveView Part 1

I have started working through Programming Phoenix LiveView.

These are my notes.

Currently working on the setup. Given that I already had Phoenix installed, it is useful to add the following:

mix archive.install hex phx_new

docker run -d -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:11

These bring the generator up to date and get a minimal version of Postgres running locally.
Don’t use these settings in production, but they will make building quick examples easier.
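For reference, those container settings line up with what a freshly generated Phoenix app expects in config/dev.exs. A sketch of the relevant section (the app and database names are project-specific assumptions):

```elixir
config :pento, Pento.Repo,
  username: "postgres",
  password: "postgres",
  hostname: "localhost",
  database: "pento_dev",
  show_sensitive_data_on_connection_error: true,
  pool_size: 10
```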

Exploring the LiveView socket.

iex> Phoenix.LiveView.Socket.__struct__ |> Map.keys
[:__struct__, :assigns, :endpoint, :fingerprints, :host_uri, :id, :parent_pid,
 :private, :redirected, :root_pid, :router, :transport_pid, :view]

Why Observability

My current employer has just completed an Observability Week, trying to raise the quality of how we understand our systems.

Properly implemented, observability allows you to see the impact of your changes. It’s great when you can see a cliff in a graph, hopefully going in the right direction.

Observability allows you to answer the question: how do I know that the change has worked, and by how much? In some cases you want one graph to change and another to remain the same. It’s not always possible to show a direct impact, but for bug fixes or work driven by log analysis this is important.

Generating Diagrams in Livebook

I have paused working on my kino_wardley component for a while.

Just having a think about how to integrate diagram generation into a livebook.
I came up with this: https://github.com/chriseyre2000/livebooks/blob/main/DynamicDiagrams.livemd

Here is the code:

Mix.install([{:kino, "~> 0.7.0"}])

data = """
digraph architecture {
  rankdir=LR;

  subgraph client_side_apps {
      front_end -> {auth_api, my_app_api};
      extension -> {auth_api, my_app_api};
      
      {rank=same; front_end, extension, auth_api};
  }
  
  subgraph api_gateways {
      my_app_api -> {photos_ms, chats_ms, friends_ms};
  }
  
  subgraph microservices {
      photos_ms -> {database};
      chats_ms -> {database, cache};
      friends_ms -> {database, facebook_api};
  }
}
"""

{body, _} = System.shell("echo '#{data}' | /usr/local/bin/dot -Tpng", stderr_to_stdout: true)
Kino.Image.new(body, :png)

It simply installs Kino and uses the Graphviz command-line tool dot to build a PNG, which is then displayed in a Kino.Image.

This gives the basis of adding diagrams generated from dot or plantuml straight into a livebook.
These would be very useful as a LiveCell.
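One caveat with the echo approach: shelling out with string interpolation breaks as soon as the graph source contains a single quote. A variant that sidesteps quoting by writing the source to a temporary file instead (still assuming dot is installed, and reusing the data binding from above):

```elixir
# Write the DOT source to a temp file and pass the path to dot,
# avoiding shell quoting issues entirely
path = Path.join(System.tmp_dir!(), "diagram.dot")
File.write!(path, data)

{png, 0} = System.cmd("dot", ["-Tpng", path], stderr_to_stdout: true)
Kino.Image.new(png, :png)
```

Matching on the exit status 0 also means a Graphviz syntax error fails loudly instead of handing garbage bytes to Kino.Image.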

Sample rendered image

Tests > Types

Just because you are using a type system doesn’t mean you can avoid writing a test.

The test may be easy to write but it adds value.

This is especially true if you are working in a type-erasure system such as TypeScript. Types in TypeScript do not exist at runtime.
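The same point holds in Elixir, where @spec annotations are only checked statically by Dialyzer and never enforced at runtime. A small illustrative sketch (the module and the numbers are mine):

```elixir
defmodule Discount do
  # The spec documents the contract, but nothing checks it when the
  # code actually runs - only a test exercises the behaviour
  @spec discount(number(), number()) :: number()
  def discount(price, percent), do: price - price * percent / 100
end

# Trivial assertions that still catch what the type never could:
# the behaviour at the edges
0.0 = Discount.discount(100, 100)
45.0 = Discount.discount(50, 10)
```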

Elixir, K8s and Graceful Shutdown

I am now working on a large application that uses k8s (for better or worse).

We have an interesting issue in that during deployments there is a time window in which some of our services are called while they are being shut down. This is a consequence of the lag introduced by the liveness probe (or one of the other probes) only being called periodically. If the router does not stop sending requests to the server in time (we are using HTTPS), the only way to ensure that these are handled neatly is to have a retry policy for HTTP requests.

It does not matter how well you wire up notifications; there is still the possibility of a failed call.
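A retry policy for this can be very small. Here is an illustrative sketch (not our production code) of a retry-with-backoff helper that retries any function returning ok/error tuples:

```elixir
defmodule Retry do
  # Retries the zero-arity fun up to `attempts` times, doubling the
  # delay between tries. Names and policy are illustrative.
  def with_retry(fun, attempts \\ 3, delay_ms \\ 100) do
    case fun.() do
      {:ok, result} ->
        {:ok, result}

      {:error, _reason} = error when attempts <= 1 ->
        error

      {:error, _reason} ->
        Process.sleep(delay_ms)
        with_retry(fun, attempts - 1, delay_ms * 2)
    end
  end
end
```

In practice you would also cap the delay, add jitter, and only retry idempotent requests, but the shape is the same: a transient failure during a pod shutdown becomes a short pause rather than an error surfaced to the caller.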

In Favour of Boring Launches

When you are launching a product or a new version of a product, the actual release can be exciting. However, from a development perspective it should be much simpler: turn on a feature flag and check a few logs.

It can be exciting to see the numbers, but the technical process of going live should be as dull as you can make it. You should have tested all of the possible failure modes up front, so the system knows how to handle them (or tells you via the logs that it needs help).

There will be some small issues with integrations that can’t be tested in advance, especially when the partner’s production system does not work in exactly the same way as the test system.

My employer went live with two partners over the last two days!