As LLM tooling has evolved, my trust in LLM output has continually increased. Unlike human developers, whom I'm obviously going to treat with respect, this is a machine I can put through its paces and explore its boundaries. Can I trust this machine to build xyz? Well, I had it build 25 xyz over the last month. I studied the code, at times even looking at its raw form; I found the prompting methods that create solid foundations; I prompted in myriad ways to muck with it; I have extensive smoke tests that explore far more avenues than I would think of by myself; and I can build explanatory tooling, dashboards, and code analysis tools. The level of trust I can establish with LLMs is just an order of magnitude higher than what I could establish without them (for human- or machine-written code).

We are at a weird inflection point where some people vibecode big systems without putting that trust-establishing work in, while others are putting it in. The lingo in both categories is the same: both categories _trust_ the output.
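To make "extensive smoke tests" concrete, here is a minimal sketch of the kind of probe harness I mean. Everything in it is illustrative: `smoke` is a hypothetical helper, and `slugify` stands in for some LLM-generated function under scrutiny; the point is throwing inputs at it that I would never type by hand and recording every outcome for inspection.

```python
import random

def smoke(fn, cases, invariant):
    """Run fn over many probe inputs and check an invariant on each result.

    Returns a list of (input, result, ok) triples so failures can be
    inspected afterwards rather than just counted.
    """
    report = []
    for case in cases:
        try:
            result = fn(case)
            ok = invariant(case, result)
        except Exception as exc:  # a crash is itself a finding, not a test error
            result, ok = exc, False
        report.append((case, result, ok))
    return report

# Stand-in for an LLM-generated function I want to establish trust in.
def slugify(text):
    return "-".join("".join(c if c.isalnum() else " " for c in text.lower()).split())

# Probe with empty, whitespace-only, punctuated, non-ASCII, huge, and random inputs.
rng = random.Random(0)
weird_inputs = ["", "  ", "Hello, World!", "日本語テスト", "a" * 500,
                "".join(chr(rng.randrange(32, 1000)) for _ in range(50))]

report = smoke(slugify, weird_inputs,
               invariant=lambda _, out: isinstance(out, str) and " " not in out)
failures = [r for r in report if not r[2]]
print(f"{len(report)} probes, {len(failures)} failures")
```

The value isn't any single probe; it's that the harness makes it cheap to run hundreds of them and to look at the raw failures, which is exactly the trust-establishing work the paragraph describes.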