Random outpourings of a software developer
These are rather clunky.

Then, when in Italy, you need to add an adapter.

This gets fun when the power sockets are mounted low or there is a shelf below them.
Guess what my hotel room had?
XP and TDD seem to have fallen out of fashion lately. They can work well, but they do have some strong requirements.
The actual aim is to have a system that you can extend quickly, safely, and reliably. This requires each change to add some value to the system without breaking things. Feature flags can be used to keep existing functionality available while it is being replaced. You want to add value for the users with each change.
Careful ordering of changes can help. For example, show the user the data that will be automatically checked in a later stage.
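The feature-flag idea above can be sketched in a few lines. This is a minimal, hypothetical example (the flag name and checkout functions are invented for illustration), not any particular flag library:

```python
# Hypothetical sketch of a feature flag guarding a replacement code path.
# The flag name and both checkout functions are invented for illustration.
FLAGS = {"new_checkout": False}

def legacy_checkout(order):
    return f"legacy:{order}"

def new_checkout(order):
    return f"new:{order}"

def checkout(order):
    # Existing behaviour stays live until the flag is flipped,
    # so each deploy adds value without breaking things.
    if FLAGS.get("new_checkout"):
        return new_checkout(order)
    return legacy_checkout(order)
```

The old path keeps working in production while the new one is built behind the flag; flipping the flag (and later deleting the old branch) is then a small, safe change.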
The codebase needs to be sufficiently covered by unit tests that you trust. If this is not the case, start with Working Effectively with Legacy Code.
You need a reasonably fast build cycle. It won’t do if you can’t find out what is happening within a few seconds of writing that line of code.
You need a fast, repeatable deployment system so that you can get a feature deployed to a staging environment within 15 minutes.
You need decent logging of both staging and production. Boring info messages are a great way to detect things that did not happen.
Think about the analytics before you deploy to production. Ask your product people what questions they want answered in the days after the launch. It’s amazing how much information can be captured by analytics messages of the form (name, id, datetime).
For example:
Start process for xxx
Email sent for xxx
Stock updated for xxx
Product shipped for xxx
The above are each 1 line of code but together allow a range of analysis to be built later.
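The (name, id, datetime) shape above can be captured with one small helper; each call site then really is a single line. A minimal sketch (the event names are the examples from above; the helper name is invented):

```python
from datetime import datetime, timezone

def analytics_line(event, entity_id, when=None):
    """Format a one-line (name, id, datetime) analytics message."""
    when = when or datetime.now(timezone.utc)
    return f"{event} for {entity_id} at {when.isoformat()}"

# Each call site is a single line, e.g.:
#   log.info(analytics_line("Email sent", order_id))
#   log.info(analytics_line("Stock updated", order_id))
```

Because every message shares the same shape, later analysis (counts per event, time between "Start process" and "Product shipped" for one id, ids that started but never shipped) is a simple grep-and-group job over the logs.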
Use the logs and analytics to listen to the system for feedback.
Allow the developers to see the production logs. Without this they are cut off from the most valuable feedback.
Understand that a Minimum Viable Product is not finished as soon as it is launched. It should be the starting point for a range of improvements. Your users are going to do things with and to the system that you did not expect. Some of these will provide the best feedback you ever see.
My current engineering manager reports seeing “sorry I don’t know my quote number” entered into a mandatory quote number field. That could lead to you offering alternatives.
I am working on integrating with a large system. This is one of those applications that my users spend half their time using.
There are two main approaches to integrating with systems like that.
One is to extract the data and present it inside the system you have built. This means that the users have to work in both their existing system and yours.
The second approach is to push your data into that large system. There are several ways of doing this.
One is to embed a custom UI inside the system (if it allows for that). Another is to use some form of commenting system as a UI. If you can write an HTML table then you can embed whatever data you need in the UI. This will be challenging but very rewarding for the users, as they won’t have to switch away from their primary system.
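The HTML-table trick is mostly string building, plus escaping so user data can’t break the host system’s comment renderer. A small sketch (function and field names are invented for illustration):

```python
from html import escape

def html_table(headers, rows):
    """Render data as a small HTML table suitable for posting into a
    comment API. Values are escaped so user data cannot inject markup."""
    head = "".join(f"<th>{escape(str(h))}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(c))}</td>" for c in row) + "</tr>"
        for row in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"
```

The returned string is what you would post via whatever comment API the host system exposes.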
I am currently working on a set of microservices. The initial design made sense.
A small microservice stood outside a monolith. It was triggered by webhooks from a third-party service. A service inside the monolith provided it with the required data.
This provided isolation so that the third party had no direct communication route with the main application.
A few months later, more features have been added. We now use a form inside the monolith to capture the equivalent of the data that the third party sent.
Moving the data around to perform the calculations in the small microservice is becoming increasingly complex. Once we capture all the data with the new form, the small microservice will have no reason to exist.
This is a concept that is hard to track if you have a large number of microservices. How do you know that a given microservice still needs to exist? This is frequently not documented well.
The concept repeats at other levels. Why does a given function exist? Without continuous pruning you end up maintaining dead code.
The only answer is for each service to have a sponsor to keep it alive. If the sponsor leaves, find another or remove the service.
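One lightweight way to make the sponsor rule enforceable is to keep it as data in a service catalogue. This is a hypothetical sketch (the catalogue shape, service names, and teams are all invented), but it shows how "no sponsor" becomes a query rather than tribal knowledge:

```python
# Hypothetical service catalogue: each entry records why the service
# exists and who is sponsoring it. All names are invented examples.
catalogue = {
    "webhook-receiver": {"purpose": "isolate 3rd-party webhooks", "sponsor": "integrations team"},
    "report-builder": {"purpose": "nightly reports", "sponsor": None},  # sponsor left: candidate for removal
}

def removal_candidates(catalogue):
    """Services with no current sponsor should be re-homed or retired."""
    return [name for name, entry in catalogue.items() if entry["sponsor"] is None]
```

Run as part of a periodic review, this turns "does this still need to exist?" into a list you can action.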
This is a thought experiment about how the digital ID card would work in real world situations.
How do you avoid someone impersonating you? Identity theft is fairly common. This would require a system for flagging an account as having suspicious activity.
This is where it gets complicated: how do you authenticate the flagging of suspicious activity? There would be opportunities for denial of service by bad actors.
It took me three months of chasing to get a failed fraud attempt off my credit score. How would this work for a digital ID?
How do you prevent “fake” ID apps showing up? Are all requests going back to a central system? Good luck keeping that up, secure, and accessible from everywhere!
I am not sure that the details have been thought through (even if Mr Blair (Senior) has been pushing this for 20 years).
I am reading Advanced Functional Programming with Elixir.
The book shows how to use various techniques from Haskell in Elixir. Rather than build the library as you go, the author has released the full version as https://hex.pm/packages/funx
Eq – Chapter 2 – This is about identity
Ord – Chapter 3 – This is about sorting
Monoid – Chapter 4 – This is about collecting things
Predicates – Chapter 5 – This is about boolean logic
All of the AI tools seem to have embraced clickops as a configuration method.
Clickops is the practice of manually entering and configuring a system through its UI. This contrasts with the infrastructure-as-code approach, which is now the norm for traditional development.
Clickops makes distinct production and staging environments difficult to maintain. How can you know what you had in place at a given time, and restore it?
This is especially difficult for secret management. You eventually have to manually configure all the required secrets. This prevents automatic expiry and renewal (or risks outages).
It might be worth trying to build desired state configuration tools with whatever APIs you do have.
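The core of such a desired-state tool is a reconciliation diff: compare what you want against what the vendor’s API reports, then apply the difference. A minimal, API-agnostic sketch (the function name and config shapes are invented; a real tool would wire the three result sets to create/update/delete calls):

```python
def plan_changes(desired, actual):
    """Diff a desired configuration against what is actually deployed,
    returning the create/update/delete sets needed to reconcile them."""
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_update = {k: v for k, v in desired.items() if k in actual and actual[k] != v}
    to_delete = [k for k in actual if k not in desired]
    return to_create, to_update, to_delete
```

Even without full automation, printing this plan against each environment tells you exactly how staging and production have drifted.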
Recently I have been working on a new service. It has a single REST endpoint that triggers some work.
The tests for this use Hammox to mock out the external services that it talks to. This ensures that the tests always match the declared typespecs for the services.
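Hammox is Elixir-specific, but the idea of mocks that are checked against a declared contract exists elsewhere too. As a rough Python analogue (the service class here is invented for illustration), `unittest.mock.create_autospec` builds a mock that enforces the real call signature, though unlike Hammox it does not check the types:

```python
from unittest import mock

class StockService:
    """Hypothetical external service; only its signature matters here."""
    def reserve(self, sku: str, quantity: int) -> bool: ...

# create_autospec builds a mock whose methods enforce the real call
# signature: a rough analogue of Hammox checking calls against typespecs.
stock = mock.create_autospec(StockService, instance=True)
stock.reserve.return_value = True

assert stock.reserve("SKU-1", 2) is True  # matches the signature, allowed
# stock.reserve("SKU-1") would raise TypeError: quantity is missing.
```

The benefit is the same in both ecosystems: a test cannot quietly pass while calling the dependency in a way the real service would reject.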
This week I added ex_coveralls to the project to see how complete the test coverage was. It revealed several parts of the code that had missing tests. Adding those revealed a few small bugs (one of which had been found in production while I was writing the tests!).
The test coverage was approaching 100% in the main processing area. Combined with a suite of tests that produced all of the possible output states, I was confident about performing some refactoring.
The refactors were to replace some tuples with structs and then to remove some pointless wrapping of error tuples. Coveralls helped to reveal dead code paths that, once removed, achieved the 100% coverage.
The tests also helped when working out what could be changed. By adding a log statement at key points I could see all the possible inputs to a function. This helped in finding what to change.
Now I have simpler code that, with the tests, I have confidence in.
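For readers outside Elixir, the tuple-to-struct refactor looks like this in Python terms (a hedged analogue; the field names are invented, and the original was Elixir tuples and structs rather than dataclasses):

```python
from dataclasses import dataclass

# Before: an anonymous tuple, easy to misorder and hard to read at call sites.
#   result = ("ok", 42, "widget")

# After: a small struct names each field, the analogue of the Elixir
# tuple-to-struct refactor described above. Field names are examples.
@dataclass
class ProcessResult:
    status: str
    quantity: int
    product: str

result = ProcessResult(status="ok", quantity=42, product="widget")
```

With named fields, adding or reordering data no longer silently breaks every call site that pattern-matched on tuple positions.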
This is a great example of using a python library within Elixir:
https://github.com/nelsonmestevao/pdf_extractor/blob/main/lib/pdf_extractor/pdf_plumber.ex
This means we can easily turn a Python CLI application into a managed Elixir web service.
If we don’t have a native Elixir version of a library, we are free to use a Python one.
Keep these in small services, as they may not play well with a supervision tree!
Elixir finds lots of ways to work well with other languages!
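On the Python side, the wrapping is easiest when the CLI behaves like a well-mannered subprocess: take arguments, write one JSON document to stdout. A hypothetical sketch of that shape (the `extract` function is a stand-in; a real version might call a PDF library):

```python
import json
import sys

def extract(path):
    """Hypothetical extraction step; a real version might call a PDF
    library. Here it just returns an invented, stable result shape."""
    return {"file": path, "pages": 1}

def main(argv):
    # Emit a single JSON document on stdout so the host runtime
    # (e.g. an Elixir port) can parse the result instead of scraping
    # free-form text.
    result = extract(argv[1])
    print(json.dumps(result))
    return 0

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(main(sys.argv))
```

Structured stdout plus a meaningful exit code is most of what the Elixir side needs to supervise the tool safely.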
I recently wrote a useful application that does not have its own UI. Your users probably already have enough primary applications to work with, so adding another would make their lives more complex.
Instead, the application writes the data it finds into the application they are already using. I have used this technique with Contentful, Teams, and now Zendesk.
The trick is to combine webhooks from the application with some form of comment API to write what you need where the users are already working.
In this case the app notices that a ticket has been submitted with some metadata and a well-known PDF structure. It reads the PDF, parses it, and checks the details against what we have stored on our systems. It then writes both sets of details back to the ticket, with a list of differences. This means that the user does not need to open the attachments (or our system) to start work.
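The comparison step at the heart of this is a field-by-field diff, formatted into a comment. A small sketch (function and field names are invented; the real field set comes from the PDF and our records):

```python
def diff_fields(stored, parsed):
    """Compare what we have on record against what the PDF says,
    returning only the fields that differ. Field names are examples."""
    keys = set(stored) | set(parsed)
    return {
        k: (stored.get(k), parsed.get(k))
        for k in sorted(keys)
        if stored.get(k) != parsed.get(k)
    }

def comment_body(differences):
    """Format the diff as a plain-text comment for the ticket."""
    if not differences:
        return "No differences found."
    lines = [
        f"{field}: ours={ours!r}, theirs={theirs!r}"
        for field, (ours, theirs) in differences.items()
    ]
    return "Differences found:\n" + "\n".join(lines)
```

Posting `comment_body(...)` back via the ticket system’s comment API is what lets the user start work without opening anything else.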