Warp as a terminal

I have been using https://www.warp.dev/ as a terminal for a while.

The AI hints are sometimes useful, but it really shines as a multi-tab terminal.

Recently I accidentally discovered it has a built-in IDE. This will be worth a look once they actually have an auto-save feature. Once you have been using auto-save for a while you lose the almost continuous Ctrl-S twitch.

Experimenting with Gleam

I am now experimenting with Gleam.

Gleam is a small language that compiles to both the BEAM and JavaScript.

My current blocker is that my Mac is too old for the Homebrew version to install, so I am seeing whether I can build it from source.

This language is unusual in being defined by what it does not support (see the sketch after the list):

  • Loops
  • Any type
  • Exceptions
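
I don’t have a working Gleam toolchain yet (see the summary below), so here is a minimal Elixir sketch of the style those restrictions enforce: recursion instead of loops, and failures returned as values instead of raised as exceptions. Gleam’s Result type plays the role the tagged tuples play here.

```elixir
defmodule NoLoops do
  # No loop construct: iterate a list by recursing on head and tail.
  def sum([]), do: 0
  def sum([head | tail]), do: head + sum(tail)

  # No exceptions: a parse failure comes back as a value the caller
  # must handle, which is the job Gleam's Result type does.
  def parse_int(string) do
    case Integer.parse(string) do
      {value, ""} -> {:ok, value}
      _ -> {:error, :not_an_integer}
    end
  end
end
```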

Summary:

  • brew install – failed, as it does not support macOS 13.
  • First attempt at a make build – failed with rustc error E0658 (use of unstable library feature `unsigned_is_multiple_of`).
  • Ran rustup update before building again – this time the compile succeeded.
  • Next step: make install. Rust may claim to be fast, but that does not apply to the compiler.
  • Yep, this has installed!

Trunk Based Development and functional feature flags

Technically I don’t use TBD, as that would involve committing directly to the main branch. However, I do work with very small PRs: if it makes sense to deploy a small slice, then it gets a PR and is merged.

Typically the way to make this safe is to use feature flags, so that production can behave differently to staging. If you are downstream of other changes, you can use real data as the switch. Effectively you are live with a feature that won’t be available until someone else releases their part.

For example, if there is a new type of product that will be sold by another team, you can use the existence of the purchased product as the feature flag. This means that there are no coordination dances with the upstream team (other than “don’t release until we have some minimal version ready”).
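
A hedged sketch of this, with invented module and helper names: the purchased product itself acts as the switch, so there is no explicit flag to configure or clean up later.

```elixir
# Hypothetical sketch: the purchased product acts as the feature flag.
# `Purchases.any_of_type?/2` stands in for a real data-access helper.
defmodule Checkout do
  def flow_for(customer) do
    if Purchases.any_of_type?(customer, :new_product_type) do
      :new_flow      # upstream team has released; real data now exists
    else
      :existing_flow # nothing purchased yet; behave as before
    end
  end
end
```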

Working this way is only possible in environments where you can deploy to production many times per day. Heavily regulated environments will limit this to staging.

One of the downsides of working this way is that you may have two or three parallel flows active at one time. You need to aggressively cull the explicit feature flags or the combinatorial number of test paths will explode.

It’s also important to name feature flags in the positive, and to name them for the behaviour being enabled, not for the customer or the project. I have seen crazy complexity when you want to give part of customer A’s feature to customer B.
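
For illustration (all of these names are invented), the difference looks like this:

```elixir
# Hypothetical flag names, for illustration only.
:bundle_discounts_enabled  # good: positive, named for the behaviour
:no_legacy_pricing         # bad: negative, so checks read as double negatives
:customer_a_project_q3     # bad: tied to a customer/project, hard to share
```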

Modern Software Development

XP and TDD seem to have fallen out of fashion lately. They can work well, but they do have some strong requirements.

The actual aim is to have a system that you can extend quickly, safely, and reliably. That requires each change to add some value to the system without breaking things. Feature flags can be used to maintain existing functionality while it is being replaced. You want to add value for the users with each change.

Careful ordering of changes can help. For example, show the user the data that will be automatically checked in a later stage.

The codebase needs to be sufficiently covered by unit tests that you trust. If this is not the case, start with Working Effectively with Legacy Code.
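
The core move from that book is the characterisation test: pin down what the code does today, even if today’s behaviour looks wrong, before you change anything. A minimal ExUnit sketch, with `LegacyPricing` standing in for real legacy code:

```elixir
defmodule LegacyPricingTest do
  use ExUnit.Case

  # `LegacyPricing` is hypothetical; the expected value would be captured
  # by running the existing code, not taken from a specification.
  test "quote for a standard order matches current behaviour" do
    assert LegacyPricing.quote_for(%{items: 3, region: :uk}) == 1742
  end
end
```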

You need a reasonably fast build cycle. It won’t do if you can’t find out what is happening within a few seconds of writing that line of code.

You need a fast, repeatable deployment system so that you can get a feature deployed to a staging environment within 15 minutes.

You need decent logging in both staging and production. Boring info messages are a great way to detect things that did not happen.

Think about the analytics before you deploy to production. Ask your Product people what questions they want answered in the day after the launch. It’s amazing how much info can be captured by analytics messages in the form (name, id, datetime).

For example:

Start process for xxx

Email sent for xxx

Stock updated for xxx

Product shipped for xxx

Each of the above is one line of code, but together they allow a range of analyses to be built later.
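
In Elixir, for example, each message really is a single Logger call at the point where the event happens (the module and struct here are hypothetical):

```elixir
defmodule Fulfilment do
  require Logger

  def ship(order) do
    # ... do the actual shipping work ...
    # Logger adds the timestamp, completing the (name, id, datetime) triple.
    Logger.info("Product shipped for #{order.id}")
  end
end
```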

Use the logs and analytics to listen to the system for feedback.

Allow the developers to see the production logs. Without this they are cut off from the most valuable feedback.

Understand that a Minimum Viable Product is not finished as soon as it has launched. This should be the starting point for a range of improvements. Your users are going to do things with and to the system that you did not expect. Some of these will provide the best feedback you ever see.

My current engineering manager reports seeing “sorry I don’t know my quote number” entered in a mandatory quote-number field. That could lead to you offering alternatives.

Take The Data to The User

I am working on integrating with a large system. This is one of those applications that my users spend half their time using.

There are two main approaches to integrating with a system like that.
One is to extract the data and present it inside your own system. This means the users have to work in both their primary system and yours.

The second approach is to push your data into that large system. There are several ways of doing this.
One is to embed a custom UI inside the system (if it allows for that). Another is to use some form of commenting system as a UI: if you can write an HTML table, then you can embed whatever data you need. This will be challenging, but very rewarding for the users, as they won’t have to switch away from their primary system.
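
A hedged sketch of that second approach, assuming the big system exposes some comment-posting endpoint (`BigSystem.post_comment/2` is invented for illustration):

```elixir
defmodule CommentUi do
  # Render our data as an HTML table and post it as a comment,
  # so users never have to leave their primary system.
  def publish(record_id, rows) do
    body =
      "<table><tr><th>Order</th><th>Status</th></tr>" <>
        Enum.map_join(rows, fn {order, status} ->
          "<tr><td>#{order}</td><td>#{status}</td></tr>"
        end) <>
        "</table>"

    BigSystem.post_comment(record_id, body)
  end
end
```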

Evolution of Microservices

I am currently working on a set of microservices. The initial design made sense.

A small microservice stood outside a monolith. It was triggered by webhooks from a third-party service, and a service inside the monolith provided it with the required data.

This provided isolation so that the third party had no direct communication route with the main application.

A few months later, more features have been added. We now use a form inside the monolith to capture the equivalent of the data that the third party sent.

Moving the data around to perform the calculations in the small microservice is becoming increasingly complex. Once we capture all the data with the new form, the small microservice will have no reason to exist.

This is a concept that is hard to track if you have a large number of microservices. How do you know that a given microservice still needs to exist? This is frequently not well documented.

This concept repeats at other levels. Why does a given function exist? Without continuous pruning you end up maintaining dead code.

The only answer is for each service to have a sponsor who keeps it alive. If the sponsor moves on, find another or remove the service.

Problems with the UK digital ID Card

This is a thought experiment about how the digital ID card would work in real world situations.

How do you avoid someone impersonating you? Identity theft is fairly common. This would require a system for flagging an account as having suspicious activity.

This is where it gets complicated: how do you authenticate the flagging of suspicious activity? There would be opportunities for denial of service by bad actors.

It took me three months of chasing to get a failed fraud attempt off my credit score. How would this work for a digital ID?

How do you prevent “fake” ID apps showing up? Do all requests go back to a central system? Good luck keeping that up, secure, and accessible from everywhere!

I am not sure that the details have been thought through, even if Mr Blair (Senior) has been pushing this for 20 years.

Funx library

I am reading Advanced Functional Programming with Elixir.

The book shows how to use various techniques from Haskell in Elixir. Rather than have you build the library as you go, the author has released the full version as https://hex.pm/packages/funx

  • Eq – Chapter 2. This is about equality.
  • Ord – Chapter 3. This is about sorting.
  • Monoid – Chapter 4. This is about collecting things (see the sketch below).
  • Predicates – Chapter 5. This is about boolean logic.
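
I have not dug into Funx’s actual API yet, so here is a hand-rolled sketch of the Monoid idea only (an empty value plus an associative combine), not how Funx spells it:

```elixir
defmodule MonoidSketch do
  # A monoid is an empty value plus an associative combine operation.
  def empty(:sum), do: 0
  def empty(:list), do: []

  def combine(:sum, a, b), do: a + b
  def combine(:list, a, b), do: a ++ b

  # "Collecting things": fold any list of values down to one.
  def collect(kind, values),
    do: Enum.reduce(values, empty(kind), &combine(kind, &2, &1))
end

# MonoidSketch.collect(:sum, [1, 2, 3])      #=> 6
# MonoidSketch.collect(:list, [[1], [2, 3]]) #=> [1, 2, 3]
```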

AI and Clickops

All of the AI tools seem to have embraced clickops as a configuration method.

Clickops is the practice of configuring a system through manual entry in its UI. This contrasts with the infrastructure-as-code approach, which is now the norm in traditional development.

Clickops makes distinct production and staging environments difficult to maintain. How can you know what you had in place at a given time and restore it?

This is especially difficult for secrets management. You eventually have to configure all the required secrets manually, which prevents automatic expiry and renewal (or risks outages).

It might be worth trying to build desired-state configuration tools on top of whatever APIs you do have.
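
A rough sketch of what that could look like, assuming the tool exposes read and write endpoints (`AiTool.get_setting/1` and `AiTool.put_setting/2` are invented stand-ins, as are the setting names):

```elixir
defmodule DesiredState do
  # Declare the configuration we want in code, so it is versioned and
  # restorable, then converge the live system towards it.
  @desired %{
    "model" => "large-v2",
    "retention_days" => "30"
  }

  def apply_all do
    Enum.each(@desired, fn {key, want} ->
      case AiTool.get_setting(key) do
        ^want -> :ok                        # already correct, leave alone
        _ -> AiTool.put_setting(key, want)  # drift detected, converge
      end
    end)
  end
end
```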