The document on the tablet here is a “research” into using grammars. It’s LLM slop, on purpose. I’m not asking for a solution; I’m looking at what the LLM can “stochastically” infer from the code that was loaded into the context window by virtue of creating the previously annotated document (still in the context window), and seeing whether that leads to a decomposition of the problem that feels much more amenable to a long-term solution.

It’s easy to mistake my prompt “create a postdoc thesis on creating grammars for the admin dsl language” for me believing the LLM is a postdoc of some sort, but the words “postdoc thesis” are there to bias the stochastic output towards CS-theory jargon and away from admin dashboards, and “thesis” elicits a type of writing I’ve found easier for me to parse.

Is this valid, or am I just making shit up? It’s both, since these systems are so complex that even benchmarking something like this, with some kind of reliable ...