Thinking AI is an upgrade from pencil to pen tells me you spent zero effort incorporating it into your workflow, yet still believe you've seen the whole payoff. It feels like watching my dad use Eclipse for 20 years without ever learning anything more complicated than having multiple tabs open.
With that in mind, work on your prompting skills and give it a shot. Here are some things I’ve had immense success using GPT for:
Refactoring code
Turning code “pure” so it can be unit-testable
Transpiling code between languages
Slapping together frontends and backends in frameworks I’m only somewhat familiar with in days instead of weeks
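To illustrate the "making code pure" point above, here's a minimal sketch of the kind of refactor I mean (the function names are made up for the example):

```python
from datetime import datetime

# Before: impure - reads the system clock directly, so a unit test
# can't control what it returns.
def greeting_impure() -> str:
    hour = datetime.now().hour
    return "Good morning" if hour < 12 else "Good afternoon"

# After: pure - the time is an explicit argument, so tests can pass
# any fixed datetime and assert on the result deterministically.
def greeting(now: datetime) -> str:
    return "Good morning" if now.hour < 12 else "Good afternoon"
```

This is exactly the sort of mechanical-but-tedious transformation an LLM handles well, because you can verify the result at a glance.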
I know in advance someone will tunnel-vision on that last point and say "this is why AI bad", so I will kindly remind you that the alternative is doing the same thing by hand… in weeks instead of days. No, you don't learn significantly more doing it by hand (in fact, accounting for speed, I'd argue you learn less).
In general, my two biggest tips for using LLMs are: 1. They're only as smart as you are. Give them simple tasks that are time-consuming to do but easy for you to verify. 2. They forget and hallucinate a lot. Don't give them more than 100 lines of code per chat session if you require high reliability.
Things I’ve had immense success using Copilot for (although I cancelled my Copilot subscription last year, I’m going to switch to this when it comes out: github.com/carlrobertoh/CodeGPT/pull/333)
Adding tonnes of unit tests
Making helper functions instantly
Basically anything autocomplete does, but on steroids
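As a concrete example of the helper-function/unit-test workflow, this is roughly the shape of what an autocomplete assistant will fill in from just a signature and a docstring (the function here is a made-up example, not from any real session):

```python
def chunk(items: list, size: int) -> list:
    """Split items into consecutive chunks of at most `size` elements."""
    if size < 1:
        raise ValueError("size must be >= 1")
    return [items[i:i + size] for i in range(0, len(items), size)]

# And the kind of unit tests it will happily churn out in bulk:
def test_chunk_even_split():
    assert chunk([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]

def test_chunk_remainder():
    assert chunk([1, 2, 3], 2) == [[1, 2], [3]]
```

Tiny, boring, easy to verify — which is exactly why it works so well.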
One thing I'm not getting into in this comment is licensing/morals, because it's not relevant to the OP. If you have questions or want to debate any of this, I'll read and reply in the morning.