Web clients are fundamentally insecure

You can’t trust that a web client has not been compromised.
The only safe bet is to assume that any API you expose to your web client is being used directly, as an API in its own right.

The client-side JavaScript code for a site makes great documentation for attacking your server.

A simple `wget -r URL` will give you the HTML, CSS and JavaScript of most of the site.

The internal URLs are stored in the JavaScript, along with any GraphQL queries that you are using.

Developer tools in the browser plus a simple GraphQL client tool can give you access to far more than you expect.
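As a concrete illustration, here is a minimal Elixir sketch of replaying a query lifted from the client-side JavaScript. The endpoint, query and fields are hypothetical, and Req is just one convenient HTTP client; the point is that nothing distinguishes this request from one made by your own web client.

```elixir
# Replay a GraphQL query scraped from client-side JavaScript.
# Hypothetical endpoint and fields; Req is a common Elixir HTTP client.
Mix.install([{:req, "~> 0.5"}])

query = """
query {
  orders { id total customerEmail }
}
"""

# The server cannot tell this apart from a request made by the web client.
response = Req.post!("https://example.com/graphql", json: %{query: query})
IO.inspect(response.body)
```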

HMRC Online Tax Return: a review

This is an annual ritual in the UK if you have been asked to fill one in.
The deadline is the end of January, but it actually covers the year to the previous April.

You are asked to enter data from various forms you have been sent over the year.
It fails to carry forward simple information from the previous year.
My name has not changed, nor has my marital status.

My employer’s name is longer than the maximum length that can be input!
The form is multipart, asking you in the early steps to determine how much of it you actually need to complete.
It lacks a “hang on, I need to ask someone for one field” option. This means you have to pause the whole process if you find you are missing a P11D.

Given that there are multiple steps, I would like to be able to fill in the capital gains section even if I can’t complete the employment section.

The joke is that they already have all this information. The only important part is making you state that you believe this to be correct.

I don’t want to do all this in the new app that they created for it.

Beyond Equality: Compare

There are lots of ways to check for equality. Funx even defines a protocol for it.

This is useful in a number of situations and combines well with other protocols.

There are many situations in which you don’t merely want to say that two things are different; you need to know what is different. For this I typically use a compare function that returns either `:ok` or a list of validation errors.

Having a simple contract like this makes compares easily composable into rulesets. Rulesets can know when it is appropriate to apply a given compare. Using this approach allows a system to provide actionable feedback. It’s more than “computer says no”; it’s “these are the problems that I have found”.
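As a minimal sketch (all names hypothetical): a compare is just a function returning `:ok` or a list of errors, and a ruleset flat-maps over a list of compares.

```elixir
defmodule Compares do
  # Each compare returns :ok or a list of validation errors.
  def compare_name(%{name: same}, %{name: same}), do: :ok
  def compare_name(a, b), do: [{:name_mismatch, a.name, b.name}]

  def compare_price(%{price: same}, %{price: same}), do: :ok
  def compare_price(a, b), do: [{:price_mismatch, a.price, b.price}]
end

defmodule Ruleset do
  # A ruleset is just a list of compares; their errors accumulate.
  def check(compares, a, b) do
    errors =
      Enum.flat_map(compares, fn compare ->
        case compare.(a, b) do
          :ok -> []
          errors -> errors
        end
      end)

    if errors == [], do: :ok, else: errors
  end
end

old = %{name: "Widget", price: 100}
new = %{name: "Widget", price: 120}

Ruleset.check([&Compares.compare_name/2, &Compares.compare_price/2], old, new)
#=> [{:price_mismatch, 100, 120}]
```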

This is similar to the changesets that Ecto provides for databases, but at the domain model level.

Extracting composable compares and rulesets makes the domain logic explicit and testable. This will make extensions easier.

Warp as a terminal

I have been using https://www.warp.dev/ as a terminal for a while.

The AI hints are sometimes useful, but it really shines as a multi-tab terminal.

Recently I accidentally found it has a built-in IDE. It will be worth looking at once they actually have an auto-save feature. Once you have been using auto-save for a while you lose the almost continuous Ctrl-S twitch.

Experimenting with Gleam

I am now experimenting with Gleam.

Gleam is a small language that can compile to both the BEAM and JavaScript.

My current blocker is that my Mac is too old for the Homebrew version to install.
I am currently waiting to see if I can build it from source.

This language is unusual in being defined by what it does not support:

  • Loops
  • Any type
  • Exceptions
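Leaving out loops is less odd than it sounds on the BEAM, where iteration is done with recursion or higher-order functions anyway. A minimal sketch of the shape in Elixir (Gleam’s syntax differs, but the idea is the same):

```elixir
# Loop-free iteration: a tail call takes the place of a while loop.
defmodule Countdown do
  def run(0), do: :done

  def run(n) when n > 0 do
    IO.puts(n)
    run(n - 1)
  end
end

Countdown.run(3)
```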

Summary:

`brew install` failed, as it does not support macOS 13.

First attempt at a `make` build failed with `error[E0658]: use of unstable library feature 'unsigned_is_multiple_of'`.

Now trying `rustup update` before installing.

After `rustup update`, the build compiled successfully.

Next step is `make install`.

Rust may claim to be fast, but that does not apply to the compiler.

Yep, this has installed!

Trunk Based Development and functional feature flags

Technically I don’t use TBD; that would involve committing directly to the main branch. However, I do work with very small PRs. If it makes sense to deploy a small slice then it gets a PR and is merged.

Typically the way to make this safe is to use feature flags, so that production can behave differently to staging. If you are downstream of other changes you can use real data as the switch. Effectively you are live with a feature that won’t be available until someone else releases their part.

For example, if there is a new type of product that will be sold by another team, you can use the existence of the purchased product as the feature flag. This means that there are no coordination dances with the upstream team (other than “don’t release until we have some minimal version ready”).
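A minimal sketch of what that looks like, with hypothetical names: the new behaviour only activates once the upstream team’s product type shows up in real data.

```elixir
defmodule NewProductFlow do
  # Real data acts as the feature flag: no config, no coordination.
  def maybe_apply(customer) do
    if has_new_product?(customer) do
      apply_new_flow(customer)
    else
      :noop  # effectively "off" until the upstream team releases
    end
  end

  defp has_new_product?(customer) do
    Enum.any?(customer.purchases, &(&1.type == :new_product_type))
  end

  defp apply_new_flow(_customer), do: :applied
end
```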

This only works in environments where you can deploy to production many times per day. Heavily regulated environments will limit it to staging.

One of the downsides of working this way is that you may have two or three parallel flows active at one time. You need to aggressively cull the explicit feature flags or the combinatorial number of test paths will explode.

It’s also important to name feature flags in the positive, and for them to describe the behaviour being enabled, not the customer or the project. I have seen crazy complexity when you want to give part of customer A’s feature to customer B.

Modern Software Development

XP and TDD seem to have fallen out of fashion lately. They can work well, but they do have some strong requirements.

The actual aim is to have a system that you can extend quickly, safely and reliably. Each change should add some value for the users without breaking things. Feature flags can be used to maintain existing functionality while it is being replaced.

Careful ordering of changes can help: show the user the data that will be automatically checked in a later stage.

The codebase needs to be sufficiently covered by unit tests that you trust. If this is not the case start with Working Effectively with Legacy Code.

You need a reasonably fast build cycle. It won’t do if you can’t find out what is happening within a few seconds of writing that line of code.

You need a fast, repeatable deployment system, so that you can get a feature deployed to a staging environment within 15 minutes.

You need decent logging of both staging and production. Boring info messages are a great way to detect things that did not happen.

Think about the analytics before you deploy to production. Ask your product people what questions they want answered in the day after the launch. It’s amazing how much info can be captured by analytics messages in the form (name, id, datetime).

For example:

  • Start process for xxx
  • Email sent for xxx
  • Stock updated for xxx
  • Product shipped for xxx

Each of the above is one line of code, but together they allow a range of analyses to be built later.
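In Elixir this could be as simple as the sketch below (hypothetical pipeline and field names); `Logger` adds the timestamp, which completes the (name, id, datetime) triple.

```elixir
defmodule Fulfilment do
  require Logger

  # Each Logger.info call is one analytics event; Logger supplies the datetime.
  def ship(order) do
    Logger.info("Start process for #{order.id}")
    send_email(order)
    Logger.info("Email sent for #{order.id}")
    update_stock(order)
    Logger.info("Stock updated for #{order.id}")
    dispatch(order)
    Logger.info("Product shipped for #{order.id}")
  end

  defp send_email(_order), do: :ok
  defp update_stock(_order), do: :ok
  defp dispatch(_order), do: :ok
end
```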

Use the logs and analytics to listen to the system for feedback.

Allow the developers to see the production logs. Without this they are cut off from the most valuable feedback.

Understand that a Minimum Viable Product is not finished as soon as it launches. It should be the starting point for a range of improvements. Your users are going to do things with and to the system that you did not expect. Some of these will provide the best feedback you ever see.

My current engineering manager reports seeing “sorry I don’t know my quote number” entered as the quote number in a mandatory field. That could lead you to offer alternatives.

Take The Data to The User

I am working on integrating with a large system. This is one of those applications that my users spend half their time using.

There are two main approaches to integrating with systems like that.
One is to extract the data and present it inside the system you have built. This means the users have to work in both systems: their primary one and yours.

The second approach is to push your data into that large system. There are several ways of doing this.
One is to embed a custom UI inside the system (if it allows for that). Another is to use some form of commenting system as a UI. If you can write an HTML table then you can embed whatever data you need in the UI. This will be challenging, but very rewarding for the users, as they won’t have to switch away from their primary system.
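As a rough sketch of the commenting route (the fields are hypothetical stand-ins for whatever the large system exposes, and real code would need to escape the values):

```elixir
defmodule CommentTable do
  # Render rows of data as an HTML table for a comment body.
  def render(rows) do
    body =
      Enum.map_join(rows, "\n", fn row ->
        "<tr><td>#{row.sku}</td><td>#{row.status}</td></tr>"
      end)

    "<table><tr><th>SKU</th><th>Status</th></tr>\n#{body}</table>"
  end
end

rows = [%{sku: "A-100", status: "in stock"}, %{sku: "B-200", status: "backordered"}]
CommentTable.render(rows)
```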

Evolution of Microservices

I am currently working on a set of microservices. The initial design made sense.

A small microservice stood outside a monolith. It was triggered by webhooks from a third-party service. A service inside the monolith provided it with the required data.

This provided isolation so that the third party had no direct communication route with the main application.

A few months later, more features have been added. We now use a form inside the monolith to capture data equivalent to what the third party sent.

Moving the data around to perform the calculations in the small microservice is becoming increasingly complex. Once we capture all the data with the new form, the small microservice will have no reason to exist.

This is a concept that is hard to track if you have a large number of microservices. How do you know that a given microservice still needs to exist? This is frequently not documented well.

This concept repeats at other levels. Why does a given function exist? Without continuous pruning you end up maintaining dead code.

The only answer is for each service to have a sponsor to keep it alive. If the sponsor is removed, find another or remove the item.