| 12 Mar 2026 |
| crstl changed their profile picture. | 18:02:38 |
| 13 Mar 2026 |
waltmck | People keep saying this, but individual use of LLMs is really not a significant contributor to climate change---see here for a more detailed breakdown. In particular, if you are only concerned about energy usage, probably compiling Android from source is a lot more unethical than the LLM use in the course of development. | 23:16:35 |
magic_rb | individual use maybe, but thats the same argument as with vegan stuff | 23:17:10 |
waltmck | The biggest problem is that they just don't produce a great quality of code yet, so the reviewing burden becomes greater than the burden of having an LLM write you a PR | 23:17:16 |
magic_rb | individual it doesnt matter, but if everyone does it | 23:17:20 |
waltmck | Right, but if you are concerned with power use why are you compiling Android? | 23:17:31 |
magic_rb | again | 23:17:38 |
magic_rb | there are 4 people compiling android | 23:17:47 |
magic_rb | against thousands running llms and generating millions of lines of slop | 23:18:03 |
magic_rb | one isnt quite the same as the other | 23:18:16 |
magic_rb | if you cant see that we have nothing to discuss | 23:18:25 |
waltmck | Huh? So going out and shooting an animal is more ethical than eating a steak because more people are eating steak than going out and shooting an animal? That is just a very strange argument | 23:18:57 |
waltmck | I get the concern about the aggregate effect, but it seems like you are applying that aggregate concern to condemn the decisions of individuals in a way that is out of proportion | 23:19:27 |
magic_rb | not the room for this | 23:19:36 |
magic_rb | and also, i dont have time for this, got better shit to do | 23:19:42 |
magic_rb | and also i dont want to force atemu or pentane to have to yell at us | 23:20:02 |
waltmck | To be fair you brought up this topic, I am just responding. But yeah I agree I'll drop it | 23:20:56 |
magic_rb | yeah i know | 23:21:02 |
magic_rb | i have a bad tendency of getting very tilted about this :( | 23:21:15 |
matthewcroughan - nix.zone | They will never produce quality code. Because quality code often coincides with new code that actually solves the problem you're trying to solve. Which they will not do. | 23:35:29 |
matthewcroughan - nix.zone | What is code quality anyway? Maybe actually being fit for purpose is part of quality. | 23:36:09 |
matthewcroughan - nix.zone | If you use an LLM to generate code, you are fuzzy finding a template and hallucinating a portion of it. This is bound to be low quality. | 23:36:51 |
waltmck | I don't know how you can be so confident about this. Five years ago I would not have believed that they could hold a coherent conversation, and now they are playing an active part in solving outstanding math conjectures | 23:37:09 |
matthewcroughan - nix.zone | Because I know how they work. And I'm not looking at historical appearances of improved power or competence. | 23:37:40 |
waltmck | Do you know how human brains work? There is actually not a huge fundamental difference | 23:38:21 |
matthewcroughan - nix.zone | They do not know how to play chess, but they can convince you that they know how to play chess. But eventually they make an illegal move (hallucination), because that's just how they work. Throwing more compute at it won't solve that. | 23:38:27 |
matthewcroughan - nix.zone | Comparing it to a human brain is just L O L | 23:38:37 |
matthewcroughan - nix.zone | You don't know how they work, but are eager to compare them to our own brain. | 23:39:09 |