Maybe these little snippets/articles will help someone.
A mock OpenAI-compatible completions server for continue.dev that checks whether the dependencies in generated code actually exist.
A custom prompt and some additions to the llama.cpp frontend to draw graphs directly from a prompt.
Some snippets around inputting correct answers into LLM chats with a Firefox extension.
Some tips on how to customize the Firefox AI sidebar with a local llama.cpp server.
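The mock server from the first item can be sketched as a tiny OpenAI-compatible endpoint. This is a minimal sketch, not the article's implementation: the response fields follow the usual chat-completion shape, the canned reply, port, and handler names are assumptions, and the dependency-checking logic is omitted.

```python
import json
import time
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer


def make_completion_response(text, model="mock-model"):
    """Build a minimal OpenAI-style chat completion payload (assumed shape)."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": text},
                "finish_reason": "stop",
            }
        ],
    }


class MockHandler(BaseHTTPRequestHandler):
    """Answers every POST with a canned completion, like an OpenAI endpoint would."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        _request = json.loads(self.rfile.read(length) or b"{}")
        # A real mock would inspect _request["messages"] and run the
        # dependency check on the generated code before responding.
        body = json.dumps(make_completion_response("hello from the mock")).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To serve (blocks forever), point continue.dev at http://127.0.0.1:8000 and run:
# HTTPServer(("127.0.0.1", 8000), MockHandler).serve_forever()
```

Anything that speaks this response shape can be pointed at the mock instead of a real provider, which makes it a convenient place to intercept and validate generated code.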