Here is a quick example of implementing gRPC in node.js: https://grpc.io/docs/languages/node/basics/
I work on a large system that has a number of external dependencies. We have a shared repo that provides mocks for these services. Other teams are moving some of the services from HTTP or GraphQL to gRPC. Without decent mocks we end up with a fragile local development experience.
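As a sketch of what such a mock can look like: the heart of a gRPC mock is a unary handler returning canned responses, using the (call, callback) handler shape from @grpc/grpc-js. The UserService names and payloads below are hypothetical, not from our actual repo.

```typescript
// Hypothetical request/reply shapes for a mocked gRPC service.
type GetUserRequest = { id: string };
type GetUserReply = { id: string; name: string };

// Canned responses so local development does not depend on the real service.
const cannedUsers: Record<string, GetUserReply> = {
  "42": { id: "42", name: "Test User" },
};

// Unary handler in the @grpc/grpc-js style: (call, callback) => void.
function getUser(
  call: { request: GetUserRequest },
  callback: (err: Error | null, reply?: GetUserReply) => void,
): void {
  const user = cannedUsers[call.request.id];
  if (user) {
    callback(null, user);
  } else {
    callback(new Error("NOT_FOUND: no canned user for id " + call.request.id));
  }
}
```

In a real mock this handler would be registered with server.addService() against a service definition generated from the .proto file, as shown in the linked tutorial.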
Annoyance Driven Development
These are the changes you make because a system does something that annoys you.
Expanding Practices From Personal to Team to Company.
I have a personal practice at work of looking at the error logs of our system each day. This leads to actions to make small improvements.
At the start of this year my engineering manager encouraged me to expand this to a team activity. We meet once a week and look at the highest frequency patterns. These are turned into either tickets for us to fix or requests for other teams. We have halved the volume of errors. The logs are now mostly free of noise and we are clearing real errors, some of which have been in production for years.
The next step is to expand this to the entire company.
The key point is that an observability platform is more effective if everyone is looking at it every day. I aim to inspire curiosity: what has caused this error, and do I need to fix it or ignore it?
The basis for this is a simple dashboard in Datadog. List error logs for a number of systems. Look at errors in production grouped by patterns. Add exclusion rules to ignore things that are too expensive to fix now. View over a one-day time window. Save it as a favourite.
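As a sketch, the log search behind such a dashboard might look like the following (the service names are hypothetical); the leading minus is Datadog's exclusion syntax for patterns that are too expensive to fix right now:

```
status:error env:production service:(checkout OR payments OR accounts)
-"known flaky upstream timeout"
```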
Look at this every day. Keep stats in a spreadsheet, since Datadog only retains data for a limited time window.
Analyse it every week.
Fix issues raised.
The first point people raise is to automate this. Resist the urge: looking at it regularly is the point.
Logs are great for this as it is always possible to drill down to find a specific example to look at.
As a side effect the team will become better at writing log messages and using the observability tools. They will learn more about the systems they are supporting.
Working with AI
I have been skeptical of using AI in development work. Recently I have been using Vapi, which provides a voice UI without having to build rule-based flows as you would for Alexa.
You give it a prompt that must include acting professionally and sticking to a knowledge base. That's the core feature.
What really impressed me is that Vapi have applied this in chatbot form to their own documentation.
In a few questions I was able to get key information that I would not have found without exhaustively reading the documentation.
I am still wary of using AI to write code, having worked with genetic algorithms – you can end up with a solution that works for some cases but cannot say why it works.
Moving to DDD
I am currently working for a medium-sized company that is slowly moving towards using DDD. We have notionally been domain based for the last 18 months.
It's an international company with distinct businesses in three countries. The difficult part is working out what can be made global without breaking the existing offerings.
Each country has different preferences and works at a different scale.
Common infrastructure pieces could be extracted. It makes sense to only integrate with payment providers once.
Each country will have regulatory requirements for reporting that need to remain country specific.
I like Vaughn Vernon's definition of DDD as
Developing a ubiquitous language within a bounded context
I like to have a ubiquitous dictionary so we have somewhere to document the Ubiquitous Language.
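As a small sketch of why the bounded context matters (all names here are hypothetical, not our actual model): the same word can mean different things in different contexts, so each context keeps its own type, with a translation at the boundary, rather than sharing one global model.

```typescript
// "Customer" in a shared payments context: integrated once globally.
type PaymentsCustomer = {
  id: string;
  paymentProviderRef: string;
};

// "Customer" in a country-specific regulatory reporting context.
type ReportingCustomer = {
  id: string;
  country: string;
  taxReference: string;
};

// Translation at the context boundary keeps the two models decoupled.
function toReportingCustomer(
  c: PaymentsCustomer,
  country: string,
  taxReference: string,
): ReportingCustomer {
  return { id: c.id, country, taxReference };
}
```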
Homemade Pasta
Apparently I have been a pasta snob since childhood. My parents tell a tale where I objected to tinned pasta when I first encountered it on holiday (Isle of Wight early 1970s). At home we normally had dried pasta.
Since then I have been using either dried pasta or shop bought “fresh” pasta.
A week or so ago I had a team-building trip to Rome where we were taught to make fresh pasta by an instructing chef.
When I got home I purchased a pasta machine. This is a specialised press and slicer. As this is experimental I bought a cheap one from Lidl.

I have some experience with parts of the process as I have been baking bread for the last decade. One of the tricks I have learned is that dough is linearly scalable.
Fresh pasta is made from very simple ingredients:
– Pasta Flour
– Semolina (which apparently is for beginners)
– Water
– Salt
– Egg (optional)
The recipe that came with the machine called for equal amounts of flour and semolina, and slightly less water (plus a pinch of salt). The chef advised that these are only starting points and you will learn to adjust them based upon the ingredients.
I would start with 25g of each per person, plus one more portion to clean the machine. You are meant to put a small test sample through the machine and dispose of it to clean the blades.
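Since dough scales linearly, the starting quantities can be sketched as a quick calculation. The 25g figures are my starting point from above; the water figure is an assumption for "slightly less", which you will adjust by feel.

```typescript
// Starting-point quantities: 25 g flour and 25 g semolina per person,
// slightly less water, plus one extra portion for the cleaning pass.
// The 20 g water per portion is an assumption; adjust based on the dough.
function pastaQuantities(people: number): {
  flourG: number;
  semolinaG: number;
  waterG: number;
} {
  const portions = people + 1; // extra portion to clean the machine
  return {
    flourG: 25 * portions,
    semolinaG: 25 * portions,
    waterG: 20 * portions,
  };
}
```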
Mix all the ingredients together and knead for 10 minutes. This should give a soft ball of dough. Wrap it in clingfilm and leave it to rest for half an hour.
Take pieces of the dough (keeping the remainder covered) and put them through the pasta machine about 7 times on the widest setting, folding in half each time. Then you can reduce the setting to get the thickness of pasta you want. The sheets can then be sliced using the other head of the pasta machine. Mine can make spaghetti or tagliatelle.
Be careful to catch the cut pasta as it can start to stick together. I need to get a pasta drying rack to go with the machine. The pasta needs to dry for a while (the book says 1 hour, the chef said much less).
The fresh pasta cooks in boiling water in 2 mins.
GDPR Rights
One of the fundamental GDPR rights is the right to be forgotten.
I recently received a cold call on my mobile number from a company that I had never given it to.
They had purchased it from Cognism.
Now Cognism are reasonable and have decent records of where they have sold it to (although they needed to be reminded that the initial dataset was incomplete).
Their website has a simple form to remove me from their database.
The problem is now the companies they have shared my details with:
lead-forensics
zymplify
lightrun
revgen
ukfast
These all have GDPR-compliant messages on their sites but don't have a simple form to access them.
Rome
I just had a very quick trip to Rome.
It was a team-building trip, so a one-night stay.
Lots to see, and the food is great. I attended a pasta-making training session.
Rome is said to be on seven hills. The hills are not very big. It’s more that the city is not as flat as say Milan.
Hotel rooms in Rome have kettles – something that the Milan equivalent lacks.
The international airport is a 35 min Train ride from the main Rome station.
The arrivals board is more prominent than the departures board.
Getting onto the train is more complex than necessary due to some repair work. You need to leave the platform and rejoin (or walk through the building work).
The travelator at the station moves at less than walking pace!
Playwright – Notes after a week of use
The Playwright e2e tests are significantly easier to work with than Cypress.
While developing, the playwright test --ui option makes writing a test really easy.
You get to see what is happening at each step and can simply copy the final URL to allow experimentation.
The only drawback is that on CI you can’t always see the server logs. This can be alleviated by adding small healthcheck endpoints.
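As a sketch of the healthcheck idea (the /healthz path, port, and dependency names are hypothetical): a tiny endpoint that reports the state of each dependency gives the e2e tests something to assert on, and a body to log, when CI hides the server's own output.

```typescript
import * as http from "node:http";

// Build the healthcheck payload; a failing dependency shows up in the
// response body, which an e2e test can log on CI.
function healthPayload(deps: Record<string, boolean>): {
  status: "ok" | "degraded";
  deps: Record<string, boolean>;
} {
  const allUp = Object.values(deps).every(Boolean);
  return { status: allUp ? "ok" : "degraded", deps };
}

// Minimal server exposing /healthz; call server.listen(3000) to run it.
const server = http.createServer((req, res) => {
  if (req.url === "/healthz") {
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify(healthPayload({ database: true, mockGateway: true })));
  } else {
    res.writeHead(404);
    res.end();
  }
});
```

A Playwright test can then hit /healthz first and fail fast with a readable payload instead of timing out against a half-started stack.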
E2E Tests
Recently I have been working with some e2e tests. These run inside the CI build, and all external services are mocked and dockerised.
The frontend is TypeScript and the backend is Elixir, so this is the only point where we can test the integration of the two.
The existing system uses Cypress. This has been slow and unreliable. We sometimes get errors that we cannot manually recreate.
I have been working on moving to Playwright. We lose the Gherkin format but gain faster, more reliable tests.
I have just found https://github.com/HamedStack/HamedStack.Playwright.Screenplay which provides some abstractions to help the testing.