Copilot is surprisingly effective on a large codebase with tests

I had been skeptical about using AI for development. My first experiences were with the simpler auto-complete options.

These were erratic – it would offer to complete what you were typing, but would frequently suggest properties and functions that, while they would have been useful, did not actually exist. The AI had no context and could only guess.

More recently I have been using the more sophisticated chat option. Here you can give the AI more context about what you want it to do. This works better if you provide that context up front and use the plan mode to agree on what you are going to get it to help with. Between each of its steps, run the tests and make sure you understand all the changes.

This works well when an application has reasonable test coverage. Good practices like TDD really help. Ensure that the test coverage remains high, and refactor things to keep them tidy. The AI tends to write only a minimal test (a single assert), so expand the coverage yourself.
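As an illustration of that minimal-test habit, here is a sketch using a hypothetical `slugify` helper (not from any real project): the first test is the kind of single-assert check an assistant tends to produce, and the second is the expanded coverage you should add yourself.

```python
# A hypothetical slugify() helper, used only to illustrate test coverage.
def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())

# The kind of minimal test an AI assistant tends to produce: one assert.
def test_slugify_minimal():
    assert slugify("Hello World") == "hello-world"

# Expanded coverage you should add yourself: edge cases and invariants.
def test_slugify_expanded():
    assert slugify("") == ""                                # empty input
    assert slugify("  Extra   Spaces  ") == "extra-spaces"  # whitespace collapses
    assert slugify("Already-Lower") == "already-lower"      # mixed case normalises
```

The point is not this particular helper but the pattern: a single happy-path assert passes easily, while the edge cases are where regressions hide.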

AI is great at finding race conditions, or the one failure in the middle of a long chain. You may need to suggest alternative approaches if it gets stuck. Treat it as an enthusiastic novice.
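For context, the kind of race condition I mean can be as small as an unguarded counter increment. A minimal sketch of the bug and the obvious fix:

```python
import threading

# A classic race an AI reviewer can often spot: `counter += 1` is a
# read-modify-write, so concurrent threads can lose updates.
counter = 0
lock = threading.Lock()

def safe_increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:          # without the lock, some increments can be lost
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000    # holds only because of the lock
```

A good assistant will flag the unlocked version even when the tests happen to pass, which is exactly the kind of review you want between steps.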