Software engineering may no longer be a lifetime career

(seangoedecke.com)

458 points | by movis 1 day ago

96 comments

  • bborud 1 day ago
    Multiple times per week I have the same conversation. It goes something like this:

      - AI will make developers irrelevant
      - Why?
      - Because LLMs can write code
      - Do you know what I do for a living?
      - Yes, write code?
      - Yes, about 2-5% of the time.  Less now.
      - But you said you are a developer?
      - I did
      - So what do you do 95-98% of the time?
      - I understand things and then apply my ability to formulate solutions
      - But I can do that!
      - So why aren't you?
    
    The developers who still think their job is about writing code will perhaps not have a job in the future. Brutal as it may sound: I'm fine with that. I'm getting old and I value my remaining time on the planet.

    Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.

    • kasey_junk 17 hours ago
      On one of my very first jobs in around 2000 I got paired with a much more experienced software engineer. He’d been a pro since the early 70s. I was stoked to learn from him.

      On like my fourth day he said “now I’m going to teach you the thing that helped me the most in my career…” I waited, ready for the received wisdom. And he said “always number your punch cards so if you drop them they will be easy to put back into order”. I was upset. We were long past the point where punch cards were in use. And then he said “I said what would help _me_ the most, not what would help _you_. Software is always changing”.

      I’ve thought about that a lot lately.

      • snvzz 10 hours ago
        That conversation seems like he was covertly teaching you about linked lists.
        • mecHacker 2 hours ago
          Or perhaps a peek into how fast software engineering is changing: what works for you now may be irrelevant in the future, so be prepared to be adaptive!
    • doug_durham 23 hours ago
      This is a bit of a glib answer. Most of the time is spent coding, which encompasses typing, retyping, and retyping again. It also includes banging your head against the wall while trying to get one of your rewrites to work against an under-documented API.

      OP's formulation makes SWE sound like a purely noble enterprise like mathematics. It's more like an oil rig worker banging on pieces of metal with large hammers to get the drill string put together. They went in with a plan, but the reality didn't agree and they are on a tight schedule.

      • estebank 23 hours ago
        Most of the time is spent figuring what the right thing to do is, not writing the implementation. Sometimes the process of writing the implementation surfaces new considerations about what the right thing is, but still, producing text to feed to a compiler is not the bulk of the work of a software engineer. It is to unearth requirements and turn them into repeatable software.
        • powvans 22 hours ago
          Feels like lately most of the time is spent arguing about or at least worrying about whether or not AI is going to replace all software developers.
          • pydry 29 minutes ago
            Or dealing with the idiotic fallout of somebody who sucks at coding or even has never coded in their life trying to make that happen.
        • 7e 21 hours ago
          If you’re spending time thinking and not experimenting, then it’s because experimentation is expensive. With an LLM you don’t have to try to predict a complex system in advance; experiments are so cheap you can just converge to a solution directly. All this pontificating? It’s really not that useful anymore.
          • Silhouette 15 hours ago
            > With an LLM you don’t have to try to predict a complex system in advance, experiments are so cheap you can just converge to a solution directly.

            We saw a similar philosophy in TDD advocacy many years ago. Search for something like "Sudoku Jeffries" to see how that went. Then search for "Sudoku Norvig" to see what it looks like when you actually understand the problem.

            The idea that you can somehow iterate your way to a solution when you have no idea where you're trying to go, or even which direction your next step should be in, has always seemed absurd to some of us, but in the era of LLMs there's no longer any doubt.

            In the agentic era (can we call a few months an "era"?) I estimate that 90% or more of the writing I've read about how to use agents most effectively came down to making sure there is a clear specification for what they need to implement first, and then imposing extensive guard rails to make sure their output does in fact follow that specification. It's all about doing enough design work up front to remove any ambiguity before coding the next part of the implementation, and almost everyone claiming any sort of real-world success with coding agents seems to have reached a similar conclusion.

          • dasil003 20 hours ago
            This is very naive and reductive thinking. Experiments have a cost, you really have to think carefully about what you are trying to learn. Even when code is cheap, traffic and time are still huge constraints, and you better make sure your hypothesis actually makes sense for your goals, because AI is more than happy to fill in the blanks with a plausible but completely wrong proposal.

            More broadly, it's well understood that experiments are not a replacement for design and UX. Google is famously great at the former and terrible at the latter. Sure the AI maxxers will say the machines are coming for all creative endeavours as well, but I'm going to need more evidence. So far, everything good I've seen come from AI still had a human at the wheel, and I don't see that changing any time soon.

            • vintermann 11 hours ago
              Even writing code the good old way, of course we experiment. I remember the old rule "Plan to throw away the first one. You will anyway." But then there's the "second system effect" where the second system is supposedly always overengineered and trying to take every possibility into account.

              And then there's the times when the quick sloppy poc you planned to throw away gets forced into production and is still impossible to change ten years down the road.

              AI makes all these problems so much less painful.

              I worked at a company which had a huge monolithic ERP system (their product, to be clear) with no good separation between the GUI layer and the business logic. The GUI was also dependent on an ancient version of the Borland C++ compiler. They put in a humongous effort to move to a slightly more modern UI library and a client-server architecture.

              However, someone had decided that messages in xml or json were too inefficient, they already had performance issues. So they went with a binary message protocol of their own design - with no features for protocol update. Everything communicating with the server had to be on exactly the same version, or it would throw an error. So of course they very, very rarely updated the protocol.

              I think the best help from AI will be cleaning up such real-life messes of soul-crushing architectural regret. Will it do it perfectly? Certainly not, but I wouldn't do it perfectly myself either if I were forced to do it - and I'd take a hell of a lot more time to do it.

            • avador 17 hours ago
              I think you and 7e are both right. Being able to iterate some N orders of magnitude quicker is a big deal. This doesn’t eliminate design and UX. Rather, it merges it with high iteration speed to produce a form of “play”.

              “Play” is what produced at least two (likely more) generations of attentive (and therefore competent) programmers. The hype around LLMs is painful, yes, but attentive human minds will ultimately bust through it.

          • GolfPopper 20 hours ago
            And before long you have a solution that is made up of a thousand pieces of spaghetti that neither you nor anyone else understands. And when your solution becomes too brittle to use, cannot be maintained, or fails catastrophically, then what? Just hope that's someone else's problem?
            • gchamonlive 20 hours ago
              Refactoring is cheap too, but you have to read your code and know when to stop and ask the agent to refactor, rewrite, adopt or change libs, fix issues presented by linters and code quality scanners, change abstractions and rethink the architecture.

              It's never been easier to replace chunks of code with sane software patterns, but you have to have a feel for those patterns. And also understand what's under the hood.

              You folks speak like the only function of the agent is to spit code and features. Get a grip and treat your deliverables with care, otherwise you only have yourself to blame, not the AI.

              • microflash 8 hours ago
                Refactoring is not cheap when you take into account the cost of not breaking things.
                • gchamonlive 7 hours ago
                  When we say "X is cheap", it's in comparison to doing things manually, not irresponsibly.
            • a10c 20 hours ago
              That's the point. Your prototype doesn't need to be pretty. It just needs to prove that the value is there for it to be made pretty.
            • megous 18 hours ago
              You actually get what you ask for. And you can ask for anything, vaguely or not.

              You'll end up with spaghetti if you play a bad manager and only ever allocate time for new features and never for cleanups.

              You can go through the code, add REFACTOR comments based on your tastes and thoughts, then get your result and iterate to your heart's content. You just don't need to do the direct code typing.

          • stuaxo 7 hours ago
            Well - you converge to a system, but do that by pruning what you don't want.

            If you care about maintainability and quality (and I include maintaining using LLM based tools) then you need to understand what it does (in doing so you will find lots of things for it to fix - you'll probably find that the architecture it's chosen is not right for what you want too).

          • jimbokun 19 hours ago
            So the infinite monkeys with infinite typewriters approach.
            • darepublic 17 hours ago
              aka "swarms". A cool-sounding name for... throwing yet more mud at the wall, at unprecedented scale.
          • antonvs 20 hours ago
            > If you’re spending time thinking and not experimenting, then it’s because experimentation is expensive.

            No, because no amount of experimentation can solve many of the problems that have been solved by thinking. Even your claim about "experiments are cheap" requires thinking to decide what experiments to do. No one is generating all possible solutions that fit in X megabytes; you have to think to constrain the solution space.

        • eweise 18 hours ago
          AI is pretty good at figuring out what the right thing to do is.
          • incanus77 18 hours ago
            AI is pretty good at pulling from the body of existing solutions of what the right thing to do is.
          • CyberDildonics 17 hours ago
            AI is decent at solving problems that lots of people already solved a long time ago.
            • didgetmaster 14 hours ago
              Too many people believe that AI is going to come up with elegant solutions to problems that no one has ever solved before. Maybe someday, but for now it seems to be good at finding a solution that may be hidden away somewhere in stack overflow. If it just isn't there, then you are out of luck.
              • munksbeer 43 minutes ago
                There is almost nothing new in computer programming. 99.999% of any code most of us on this forum write will be repeating patterns that have been written thousands of times before.

                Tell a coding agent what your new thing needs to do, give it the absolute constraints, max response times, max failover times, and so on, tell it which technologies it has access to or could use, and then tell it to spend a lot of time going over and over the design, coming up with an initial X number of designs (I use 5). Then it must self-criticise each one of them, weigh them up, and narrow down to three before finally presenting those three options to the user.

                Now you read the options, understand them, and realise that the AI has either converged on something very sensible or it has missed something, so you tell it what it missed and iterate. Or it nailed something good, you pick the option you prefer, and tell it to come up with a more fleshed-out high-level design, describing the flow and behaviour deeply (NO CODE REFERENCES!). Then once you're happy, tell it to use that and write a comprehensive coding plan. Tell it specifically what coding patterns you prefer (you should have these in your AGENTS.md file already) and what patterns to avoid (single-threaded? multi-threaded? Avoid GC? How you typically deal with error conditions, etc etc).

                Then have it start iteratively working on the coding plan, and it *MUST* have a strong feedback loop. If there is no feedback loop initially, I tell it to build one. It must be able to write very fluent integration tests (not just unit tests). It must be able to run the app and read the logs.

                Do all this and I bet you get a better result than 80% of developers out there. Coding agents are extremely good when used well.

      • ecocentrik 22 hours ago
        Glib is called for. The amount of information asymmetry that's still on the table as vibe coders and vibe engineers and vibe doctors emerge is staggering. Professional experience is still incredibly valuable. Most software developers might spend more than 6% of their time coding but no senior developers are banging their heads for hours over typos.

        https://www.youtube.com/shorts/xBilK3gT5e0

        • roncesvalles 21 hours ago
          These days nobody bangs their heads over typos.

          LLMs evaporated 90% of the "moments of despair" when you have an error and googling it isn't helping, or googling it made you realize you have to read 30min of documentation.

          Coding is a joy now. LLMs shaved off all the rough edges.

          • ncruces 19 hours ago
            They created other kinds of despair.

            A year ago I would've told my boss “can't be done” about my work today. I'd tell him to get me the right person to talk to (our partner, not an alien) who could give me some insight into what the hell I'm supposed to be doing to consume their API. Or to at least explain why it is that this can't be done.

            Nowadays? I spent a couple of weeks reverse engineering their terrible ideas. Yeah, it worked. But it was a complete waste of my time, and of tokens, energy, chips and RAM. And worst of all, it will lead to a terrible design.

            That will work, but it will eventually collapse under its own weight, as we use our increased power to increase our sloppiness and take it a little further. Because we can manage it. For now.

          • arcboii92 20 hours ago
            LLMs moved the moments of despair to PR reviews for me. It used to be that you could check on a junior dev occasionally throughout the day to make sure they're on the right track. Now you step away for 2 hours and they're raising a PR of bad code smell spaghetti and moving on to repeat their AI slopfest on the next task.

            It's getting hard to keep up with trying to teach new devs what bad code looks like. And I swear sometimes they just copy my PR comments into their AI tool to fix the mistakes without any of the learning.

            • jimbokun 19 hours ago
              At some point there needs to be an uncomfortable conversation about how if all they’re doing is copy pasting everything they get from you into ChatGPT, you can do it yourself for much much cheaper.
              • trustfundbaby 17 hours ago
                How? Management in most tech companies is incentivizing them to do just that, so if you bring it up, they'll happily trot over to your manager to complain, and then the uncomfortable conversation is between you and management about why you're getting in the way of AI uptake by the team.
            • eecc 19 hours ago
              Don’t allow juniors to use AI. It’s like university exams: no programmable calculators allowed. Seniors and review assistants who know what’s going on should use it, though; it does help when used correctly.
            • FrankRay78 19 hours ago
              Write a damn good automated review agent that runs against their PRs before even looking at them… works well for me!
              • hackeman300 18 hours ago
                I've tried this without much luck. In my experience they get too bogged down on surface things and don't have the necessary business requirements/context to understand and find actual bugs.

                How have you set yours up that works well for you?

                • marcus_holmes 14 hours ago
                  So create a context document that explains the business context, and add that to the agent.

                  Take the bad result that you're getting, and pretend it's coming from an enthusiastic junior. What would you tell them to make them do this task better? Add that explanation to the agent (or explain that to the LLM and get it to add that to the agent, I have found this to work as well).

                  When you create a task for the LLM, get it to create a requirements document that lists all the requirements. Feed that into the review agent so it understands what the code agent was trying to do.

                  The LLM will do what you tell it to do. It doesn't magically understand what you want it to do. You have to tell it what to do.

          • skeeter2020 21 hours ago
            You can't possibly believe this, or you and I (and many others) are doing something different. LLMs have created an entire new - huge - set of bang-your-head moments, as they go off half-cocked in a million simultaneous directions, chasing their tails, or just making shit up. And since the vast majority of work is on existing - often ancient - codebases, let's find out if you feel the same way in 18 months.
            • GolfPopper 20 hours ago
              LLMs are great for anyone who isn't responsible for the consequences of what they code.
            • roncesvalles 17 hours ago
              That's only if you do agentic coding.

              I use LLMs in the following ways:

              1. Copy-pasting code into the web chat UI and asking for something (bugfix, add a feature, refactor, explain, review it etc), including entire source code files. A $20/mo Gemini subscription goes a long way (never been rate-limited). I only use the highest model. I often just copy-paste the entire source file between 3 backticks.

              2. Cursor Tab. I do have hotkeys to enable and disable it; it's disabled most of the time otherwise it gets annoying.

              3. Single-file changes directly from Cursor's AI sidebar. I only do this for simple, predictable stuff because even their auto-routing "Premium" setting is not as good as pasting stuff into Gemini 3.1 Pro.

              That means I have only two $20/mo subscriptions: Gemini and Cursor.

              I don't use Claude Code, it's really for people who don't know how to code. I don't use Plan Mode; I make and track the plan myself (if at all). I only tell the LLM granular tasks to execute. I don't use `claude.md` or `agents.md` or anything like that. If I don't like a particular output, I reset everything, modify my prompt and try again.

              I believe this is the only way to fully leverage LLMs without losing any product quality. If you're trading off quality for "speed" (in quotes because over the long term, a low quality codebase is a massive drag on productivity) then there's no point.

              • avador 16 hours ago
                I _think_ what you’ve said is “go shallow, not deep”. That is, don’t let the walk you take through the latent space be a long one. Twenty-five short, peppered steps from de novo are better than one long, protracted stew.

                Is that accurate?

                • roncesvalles 20 minutes ago
                  Well yeah. If you know what you're doing, why would you let the AI take control?
            • suzzer99 15 hours ago
              Yes, if you're using them to write large chunks of code or entire features. If you just use them to clear up some trivial problem in an unfamiliar technology that you used to spend 30 minutes googling with 50 tabs open, or stuff like write a method to filter, map and reduce an array based on specific criteria, they're a godsend.
            • jimbokun 19 hours ago
              Give them work in smaller chunks.
            • lo_zamoyski 18 hours ago
              Maybe I'm weird, but my usage has been very conservative. As in, I treat the LLM like a junior dev that I have to micromanage and handhold.

              I am terrified of allowing these things to complete tasks end-to-end with nothing intervening. Maybe that's why I don't run into many of these issues. I mostly delegate grunt work and manual tedium, not reasoning or design choices to the LLM. I may consult the LLM and ask for criticism, but there is no way I'm going to allow it to quietly make design decisions that I don't know about.

            • marcus_holmes 14 hours ago
              You are in charge of what the LLM does. If it's running off half-cocked in a million simultaneous directions, that's on you. Write better skills. Tell it not to do that. Break into its loop and ask it wtf it thinks it's doing. If it's making shit up, force it to test more.

              The LLM will do what you tell it to do. Manage it.

          • ecocentrik 21 hours ago
            Languages have been reporting compile and runtime errors for decades. Additionally, most senior developers already have their minds wired to spot typos the way copy editors spot bad punctuation. Typos were only really a problem for students.
            • neutronicus 14 hours ago
              I've been writing C++ for almost 20 years at this point and I do still benefit from how good Claude is at gnarly Template error messages.
          • suzzer99 15 hours ago
            100%. Googling when you don't even know enough to ask the right questions, with 50 tabs open and trying to read down to the 3rd or 4th Stack Overflow answer (which is usually the best for some inexplicable reason), was my least favorite part of development.

            I don't miss wasting an hour on a problem in a technology I'm not familiar with, where it's not like a big conceptual thing but something I could clear up in 5 seconds if I just had an expert in the room.

          • kibwen 21 hours ago
            > LLMs evaporated 90% of the "moments of despair"

            And then condensed an equal quantity of despair out of the ether via confident confabulations.

            • taurath 20 hours ago
              Equal? No, no no no. Upper management is making PoCs that promise to sweep aside years of hard-won lessons about tradeoffs and solution balancing, and setting goals based on that. We are heading for a cliff, and everyone is going to learn what happens when you replace already-vulnerable foundation pillars with pig iron.
          • leptons 17 hours ago
            LLMs introduce typos into the code they write all the time - Claude 4.7 included. Maybe you are using some next-gen super-AI nobody else has? Or you're just lucky.
            • marcus_holmes 14 hours ago
              So get the LLM to test and fix those typos. Why are you letting it mis-spell things?
        • pear01 21 hours ago
          This is temporary. What is the SKILL.md equivalent going to be in five years? In ten? You don't already see a pattern emerging around solutions to encode that "professional experience" into the tools themselves?

          These LLMs can already incorporate our entire cultural corpus yet your "professional experience" is the threshold they won't cross?

          • datsci_est_2015 20 hours ago
            The word “incorporate” is doing some very heavy lifting in your assertion. These LLMs already have access to the whole corpus of architectural knowledge and software best practices, and yet they’re unable to reliably implement those best practices. Why not? Why do they often make completely unintuitive decisions, even when repeatedly prompted to ask clarifying questions?
            • pear01 20 hours ago
              To be clear by that and "cultural corpus" I meant their skill with natural languages. It is well known for instance that early LLMs were curiously better at composing sentences in English than doing basic math.

              Regarding such formal reasoning we have already seen marked improvement in the last year or two alone. The question is how this weighs on your prediction re their capabilities in the next two, five, ten, etc years.

              • datsci_est_2015 20 hours ago
                What are the properties of LLMs that have convinced you that there remains emergent complexity (e.g. the “ability” to formally reason) that we have not yet seen?
                • pear01 19 hours ago
                  There may be gains to be had in such emergence but that is not where I see the gains in the next five years. Those gains will be made by connecting LLMs more robustly with formal reasoning, which computers are already very good at. Continued iteration on connecting these right/left brain faculties could then lead to further emergence down the line.

                  The present notions of harnesses, structured output or looping in the LLM to some external state or sandbox be it debugger output or embedding into a runtime already show early promising results along these lines. I see no reason to believe these gains will not continue over the next five years.

                  If you have some theories in the converse in that regard I am all ears.

                  • datsci_est_2015 19 hours ago
                    Extraordinary claims require extraordinary evidence, not the opposite. There’s no current evidence to suggest limitless progress, or even superlinear progress with regards to compute and energy. My guess would be sub linear or even logarithmic progress vs. linear growth in compute and energy, as that’s how most physical systems behave.
                    • pear01 19 hours ago
                      No one said unlimited progress. Let's not resort to straw man claims.

                      If you think the potential of LLMs is overblown feel free to short the market. I don't pretend to know the future. But if I may, I don't think you are framing the debate in the correct terms. Evidence is an important facet of human affairs. So is risk. Best of luck with your predictions.

                      • datsci_est_2015 15 hours ago
                        Markets can remain irrational longer than anyone can stay solvent (especially when wealth is as concentrated as it is currently: one doofus can keep an entire industry afloat).

                        “Unlimited progress” is not a statement on the rate of progress, it’s a statement on the limits of progress. It’s a much weaker claim than you’re framing it as. Your claim very much is that we have not yet reached the limits of LLMs’ potential. My claim, conversely, is that we’re already reaching diminishing returns, which are being masked by a massive influx of compute and energy. My short: LLMs are not the path to AGI.

                      • thinkthatover 17 hours ago
                        I really don't like this framing - it's hard to short a market at the best of times, let alone when governments have a vested interest in tech being too big to fail to compete in the global economic arms race - see Intel's stock in the past few months.

                        I agree with you both - undoubtedly there are still massive gains to be made with the frontier models we have today through tooling and iteration, yet I do not believe there's sufficient evidence to claim we are rolling towards AGI/ASI on an exponential curve without some additional breakthroughs, given the jagged edges and the data used to train models being fundamentally linear.

                        • pear01 12 hours ago
                          Just remember you don't need AGI to see massive societal change. Certainly not mass layoffs. AGI is not the bar. By the time we all agree AGI has come the world will have already changed.

                          You just need AI to be just good enough to win the tradeoff over a human employee. Just take your average office. Then ask yourself if the bar is really that high. AGI strikes me as an extremely nebulous concept. Better to just list everyone at your office and bucket them with a guess of how soon you think AI will replace them. Or weaken their market power. This is what every corporate boss in America is already doing. I'm merely suggesting rather than hope a graph curves in our individual favor we try to act more collectively as a species. Of course, I don't hold my breath.

                          I also don't find myself compelled by the notion that the danger to humanity is "AGI". The true danger is as it always has been - each other.

                          • datsci_est_2015 5 hours ago
                            > Just take your average office. Then ask yourself if the bar is really that high.

                            How many years away do you think we are from a “concierge” AI that can do the menial tasks handled by most personal assistants / program managers? Booking flights and hotels and coordinating employee availability?

            • antonvs 19 hours ago
              > Why do they often make completely unintuitive decisions

              Most likely because you haven't constrained their behavior in your prompt. You're making the assumption that they "understand" that using best practices is what you want. You have to tell them that, and tell them which practices they should use.

              • datsci_est_2015 19 hours ago
                They already fail to consistently follow very simple and concrete instructions like “Please do not ever mock this object, always properly construct it in your tests”, so I’m not sure how they’re going to adhere to more vague and conceptual architectural paradigms. This is a problem with generative AI in general - image generation has similar limitations.
              • antihipocrat 18 hours ago
                Senior developers know what behavior to constrain.

                If incorrect LLM output is a prompt issue then demand for experienced developers will remain, and demand may actually increase as time passes.

          • ecocentrik 21 hours ago
            The capacity of the person prompting it to understand is the threshold they won't cross. They can squeeze the gap as much as possible by dumbing down answers or slowly ramping up information complexity but there is a limit to comprehension.
            • pear01 21 hours ago
              This is an interesting answer to questions about human agency and accountability/personhood, but I don't see how it leads to increased confidence in the role of the human as SWE.

              If LLMs get good enough, one might be tempted to ask so what if most humans can't understand the output? Human civilization has by and large been a constant exercise in us collectively accomplishing more and more while individually comprehending less and less.

              Our ancestors likely understood more about hunting live game or murdering each other than we do. Most of us do not consider that a great loss. Most of us living in the modern world depend on things we don't fully comprehend. I'm just not sure how this would lead to being reassured re the human as SWE.

              • ecocentrik 20 hours ago
                We don't need as many hunters because we've domesticated sources of meat. We still need ranchers, butchers... an entire supply chain to get meat to consumers. We didn't remove humans from the loop, we just created specializations.

                Software specialization might look very different in 10 years but I doubt that technically specialized humans will be completely removed from their professions. We might not be carrying bows and arrows anymore but we will be carrying the equivalent of a rope and a Stetson.

                • pear01 19 hours ago
                  Ranchers, butchers... and factory farms. Most of the meat Americans consume has had very little interaction with a person before it's devoured on the plate.

                  I appreciate your points. I agree with you that not all "technically specialized humans will be completely removed" but let's not pretend the comparison is going from a caveman with a spear to a cowboy with a lasso. If you concede it is likely to be very different at some point calling it SWE is no longer useful.

                  I think SWEs would be better off realizing they have enjoyed a relatively extreme level of privilege, and rather than trying to hold onto it, use what time they still have to advocate for a more egalitarian society, even if that means giving up some of their gains. Otherwise, speaking of farming, the mass layoffs to come, after software has been disrupting blue-collar jobs for decades, will really be a chickens-coming-home-to-roost moment.

                  • ecocentrik 18 hours ago
                    Now you're arguing against your own analogy? Hunter was a ubiquitous position in human society prior to the domestication of animals - around 50% of the workforce in hunter-gatherer societies. Today, 12 millennia after the domestication of wildlife, that number is down to the 9-14% of the global workforce dedicated to the production, distribution, processing, and sale of meat (not including cooked food), according to Opus.

                    Considering that software engineers were only about 1% of the US workforce, I expect similar workforce optimization to occur in software engineering specializations over the next 12,000 years. /s But seriously, it's never going to zero.

                    • pear01 12 hours ago
                      No one said it's going to zero. It doesn't have to go to zero for lives to change. Would you rather be a cowboy or a factory farmer? The latter is one of the least desirable jobs in the entire world. The fact that millions of people still do it isn't the point in your column that you think it is.
                • jimbokun 19 hours ago
                  The software specialists may be replaced entirely by subject matter experts.

                  No need for specialized commercial software, if everyone can just explain to the computer what they want in English.

              • npodbielski 20 hours ago
                Do you really want to live in a world where nobody understands the software that manages nuclear power plants? Or medical devices? Or financial software? Or radio transceiver firmware? Even something as boring as databases, left un-understood, could lead to disastrous effects if it were, say, the government database for managing people's IDs. And even if it worked fine for years, what would happen if a bad actor influenced models to generate code with security issues? If nobody can comprehend the output, how would anybody be able to think about the danger? This is even more grim than this https://www.citriniresearch.com/p/2028gic
                • pear01 20 hours ago
                  We live in a world with nuclear weapons. Somehow we all cope and get up every morning. I think you are missing the point - the world is already grim. It always has been. What about human affairs say in the last century alone makes you think human oversight is some panacea? The impetus for civilization was not some innate desire for financial systems or medicine. It was not having other humans murder you. The Leviathan is already here.

                  The article you shared has little to do with this. Questions of how to divide up the gains technology creates are separate from questions about the technology itself. Tbh I found what you shared so boring I could barely finish it. Earlier in this thread I already made an exhortation to support politicians who commit to erasing inequality. The idea that LLMs can only exist with inequality is nonsensical. The only thing grim about what you shared is the lack of political imagination. It's boring.

                  • jimbokun 19 hours ago
                    At least we have people who understand the technology underlying nuclear weapons!
                • esafak 19 hours ago
                  Maybe the tables will turn and people will ask, do you really want to live in a world where things aren't designed by machines (smarter than us)?
      • 01100011 20 hours ago
        Your answer reminds me that my biggest gripe with this site and programmer forums in general is the lack of awareness of the breadth and scale of software development. I'm curious what you work on, because it doesn't sound anything like what I work on.

        > Most of the time is spent coding which encompasses typing, retyping, and retyping again. It also includes banging your head against the wall while trying to get one of your rewrites to work against an under-documented API.

        I don't think I've experienced this to a large degree. Maybe early in my career. Most of my time now is spent formulating a solution, and time spent coding is mostly spent trying to compose my changes with the existing code in a way that is performant, reliable and meets the specifications.

      • sleight42 21 hours ago
        This is far more true for junior and perhaps mid-career engineers, unless you're working in an extremely well-defined problem space (* see below).

        Over my ~30 years working as a SWE, more and more of my time was spent understanding the problem, the edge cases, how to handle them, and how to do all of it affordably, on time, and within budget.

        That's engineering.

        What you're describing is "writing code". That's lower value than "solving the problem".

        I imagine a response, "But agile development, etc."

        Yep. Part of solving the problem often involves creating prototypes to determine the essential viability of the solution. But that's only part of it. Which prototypes do you write? How much time do you allocate to each before accepting it's a dead end (at least for now) and punting on it?

        That's engineering.

        Me probably coming across as a dick today? Well, I was diagnosed autistic a year ago, and I'm on extended sabbatical/unemployment (3 years now) due to autistic burnout. And masking is part of how I got the burnout.**

        * Why would someone be paying for that when there is likely someone else already doing it? (Unless you're the rare person who hopes to "disrupt" the competition.)

        ** which raises the question of why I write here at all. SMH. Why do I do what I do? No idea sometimes.

        • cduzz 20 hours ago
          I'm going to mix my metaphors a bit here...

          There's the saying: "Any idiot can build a bridge that stands; it takes an engineer to build a bridge that barely stands."

          To put this another way, any idiotic LLM can write code. It takes a person with domain experience to understand what code to write, rewrite, or not write.

          I've seen lots of organizations hollow out their internal competence in favor of outsourcing the skills. LLMs are the ultimate expression of that. There are people who say "you need to have people in your organization who understand how things work because they're the ones who solve problems!" and there are other people who say "focus on your core competencies! These problems you're worried about aren't your core competencies, so get rid of those experts, they're expensive and annoying; we can just sign a contract with an organization that'll know things for us."

          At some point we all will identify exactly how much "seed corn" you need for the next season. We'll figure that out because we're starving, but at least we'll all know.

        • dijksterhuis 19 hours ago
          you've definitely been doing this longer than i have, but our outlook and recent experiences sound very similar. also been diagnosed recently, also on similar extended sabbatical/unemployment, also come across as a dick, also trying to mask less because burn out.

          got an email address in my profile if you'd be interested in talking at some point about something, or even talking about nothing in particular. (i don't normally do this sort of HN networking stuff, i find it super cringe. but there we go).

      • AussieWog93 17 hours ago
        This was my experience as a junior back in 2019.

        The actual problem solving was trivial, but I would spend days trying to work my way around some Qt issue or guess the magic CMake incantation.

      • RajT88 17 hours ago
        > under-documented API

        One wonders why AI hasn't replaced all those non-existent documentation writers yet.

        Therein lies a clue to what the future holds.

      • pear01 21 hours ago
        Let's also not forget that a lot of the market edge of SWEs comes from knowing how to navigate these parts. The fact that you needed to be reasonably fluent in a language was already a barrier to entry, which meant that in better times new grads could earn six figures at their first job just for putting in that effort.

        Maybe you will still be needed. That is one question. How well you will be paid and treated when the barrier to entry is now "I can think" is another. As the parent indicates, most people doing software are not doing things akin to pure math. I don't think most SWEs want that lifestyle anyway.

        It's ok. You shouldn't fight the coming change. Instead use the time we still have to fight for more equal outcomes (vote for politicians that support UBI, Medicare for all). The longer you delude yourself that you are uniquely needed in an increasingly mechanized world the worse all our outcomes will be.

        • arandomhuman 20 hours ago
          The barrier to entry to generating code may be "I can think", but the barrier to entry for solving hard, distributed/multi-faceted engineering problems still remains quite high - agents can't really do this still to a decent level of efficacy reliably.

          The progress models have made in the last 5 years doesn't convince me they'll bridge that gap soon, although I can see how some people are swayed by how capable decent agentic harnesses make things look. I know it's really easy to get hyped about the current state of the technology, but try to have a bit of skepticism.

      • bborud 22 hours ago
        Are you, perchance, assuming that since you spend most of your time struggling with actual code, this is so for everyone else?

        Or are you saying that I'm lying? That I'm secretly hammering away at my keyboard while pretending not to?

        No, writing code hasn't been how I spend most of my time for many decades now.

        • therealdrag0 22 hours ago
          Are you a staff level engineer that has dozens of other engineers banging away at code projects you help define?
          • eska 22 hours ago
            Try writing a design doc before you implement something (which people find they need to do for LLMs to work at all anyway). You’ll find that you spend much less time actually writing code.

            Write proper API documentation laying out the assumptions and intent, along with a design and architecture document. You’ll find that you spend a lot less time reading code, too.

            • dkersten 22 hours ago
              > which people find they need to do for LLMs to work at all anyway

              Everything we have to do for AI to function well, would help humans to function better too.

              If you take the things for AI, but do them for humans instead, that human will easily 2x or more, and someone will actually understand the code that gets written.

              • overfeed 20 hours ago
                > If you take the things for AI, but do them for humans instead, that human will easily 2x or more, and someone will actually understand the code that gets written

                This only works on high-trust teams and organizations. A lot of AI productivity gains come from SWEs putting in the extra effort because the results will be attributed to them. Being a force-multiplier for others isn't always recognized; instead, your performance will likely be judged solely on the metrics directly attributed to you. I learned this lesson the hard way by being idealistic and overestimating the level of trust that had been built after joining a new team. Companies pay lip service to software quality; no one gives a shit if your code has the lowest SEV rates.

                • dkersten 9 hours ago
                  Ah… that’s a reasonable point. Yes, the difference between a high-trust team and what you described is night and day. I suppose for those situations there’s a much bigger incentive to just throw AI at it, which explains why the big corporates love AI.
          • prmph 6 hours ago
            No, no actually capable engineer should be just banging away at code. This is one way to know the level of an engineer.

            Less capable engineers think in terms of implementing runtime execution and solving runtime errors.

            More capable engineers think in terms of designing the most effective architecture and abstractions for maintainability, performance, and robustness.

            I have a project I am working on, It has not compiled in months, but that's ok, since the real work, for me, is in the architectural design.

            Yes, getting it to actually run takes some time and effort, but for me that is almost mechanical now.

          • bborud 22 hours ago
            It has varied over the years but it isn't actually relevant since I am talking about when I write software.

            Writing code just isn't what takes time.

            • QuercusMax 22 hours ago
              Getting the code into a state where it actually does what you want takes time - but a lot of that is research, testing, experimentation, documentation, etc. Those can be faster with AI assistance but you still need to bang on it enough to make sure it works right.
          • kakacik 21 hours ago
            I am not, yet actual coding is a minuscule part of my workflow. The rest is essentially un-automatable by any LLM - politics, meetings, discussions, brainstorming, organizing testing teams, stakeholders and so on.

            This is what big corporations look like, not some SV startups.

      • logicchains 21 hours ago
        >OP's formulation makes SWE sound like a purely noble enterprise like mathematics. It's more like an oil rig worker banging on pieces of metal with large hammers to get the drill string put together.

        Those two formulations represent different developers' approaches to the same task. The former being developers who are much better at planning than the latter.

      • Sh0000reZ 17 hours ago
        [flagged]
    • KronisLV 1 day ago
      > Yes, about 2-5% of the time.

      There are also those for whom that percentage is higher, let’s say 6-50%.

      > I understand things and then apply my ability to formulate solutions

      The AI is coming for that too.

      You might just be lucky to be in circumstances that value your contributions or an industry or domain that isn’t well represented in the training data, or problem spaces too complex for AI. Not everyone is, not even the majority of devs.

      People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.

      • geodel 23 hours ago
        Agree. It is as if two totally separate groups are arguing.

        One is a very tiny slice of specialty/rare industries where code is critical but a small part of overall project costs. I can see that if code/software is 5% of the overall cost, even heavy use of AI for the code part is not moving the needle. So people in this group can feel confident in their indispensability.

        The second group is much larger, peddling CRUD / JS frontends and other copy/paste junk. But per industry classification they are part of the same Coder/Developer/IT Engineer group. And their bleak prospects are not some future scenario; it is playing out right now, with tons of them getting laid off. And a whole lot of people with IT degrees and certifications are not finding any jobs in this field.

        • marcindulak 21 hours ago
          After hearing various similarly sounding opinions about CRUD being easy for LLMs, I started tracking how well LLMs handle a standard CRUD Django app I'm familiar with at https://github.com/marcindulak/learning-api-styles-gen-ai-ex....

          So far it appears that LLMs still require constant hand-holding, even for a small educational CRUD app.

          • magicalhippo 19 hours ago
            We've had reasonable effectiveness for CRUD. It's mainly the UI toolkits we use, but the plumbing it can do quite well. It's not 100% vibecoding but certainly a significant accelerator for parts of the job.
        • kj4211cash 4 hours ago
          I agree with the 2 separate groups theory, but I don't buy that the group that produces "copy/paste junk" is the much larger group. I think in most mega-corps, there is a huge existing code base, there are huge organizational challenges, and there is huge hierarchy with most people not being the junior juniors. 90+% of the work is "not coding." Probably way, way more if we include the middle managers. At startups, there is a lot of "copy/paste junk" but also often a decent amount of push the boundary new stuff. I don't know. I've been in the industry for 8 years now and it's been really rare to see the actual coding being the bottleneck or even the thing that takes the majority of the time.
        • hjort-e 22 hours ago
          What makes you feel that a complex frontend would be easier for AI than a non-CRUD backend system?
          • evilduck 22 hours ago
            Hubris.

            I don't mean this as a snarky jab. It's coming for anything software. I've used AI to accomplish front-end development and to reverse engineer proprietary USB hardware dongles in C, then rewrite the C into Rust to get easy desktop GUIs around it. Backend APIs, systems programming, embedded programming: they all seem equally threatened; it's just a matter of time. Front end is easy to see in the AI web front ends, but everything else is still easy pickings.

            • manmal 20 hours ago
              You are describing the toy projects that had us all amazed end of last year. Large, maintainable software that can serve paying customers is in a completely different galaxy.
            • ThrowawayR2 18 hours ago
              There's rather a big difference between reverse engineering already working code and forward(?) engineering working code from nothing so that confidence seems misplaced.
            • hjort-e 21 hours ago
              I 100% agree it's coming for everything. I'm just curious what the arguments would be for why frontend would be easier.
              • svachalek 20 hours ago
                As a manager of a full stack team, we've found AI falls short a lot more on front end. It has its weak points on both front and back, but the problems with backend are quite easy to feed back into it -- needs more performance, needs to pass this security audit, needs to deal with xyz system. The problems with frontend are more like this is ugly, it's clunky to use, people don't like it. People without years of frontend experience tend to lack the vocabulary required to get AI to fix it, period, and it ends up going around in loops.
            • skydhash 22 hours ago
              > I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend

              That is not hard. It’s just tedious and very slow to do manually. The hard part would be designing a USB dongle and ensuring that the associated software has good UX. The reason you don’t see kernel devs REing devices is not that it’s impossible or that it requires expert knowledge. It’s that it’s like counting grains of sand on a beach.

              • megous 17 hours ago
                Whether something is tedious depends on the person and situation. If you're already an expert, you may find a lot of the work that goes into your 4th USB device (especially if it's based on yet another chip and bespoke SDK) quite tedious, since a lot of it is based on standard requirements/designs that you can't change.

                You may also find re-ing stuff not tedious, due to what may be motivating you.

                In any case, any work will have some things you just know how to do, or what to do, but previously (before LLM agents) there was no easy way to plow through them without pressing a lot of keyboard keys over a long period of time.

          • geodel 22 hours ago
            Whether a complex frontend would be easy for AI or not is beside the point. To me the questions are: 1) how many unique complex frontends are needed out of the total frontends that millions of sites out there require, and 2) will there be an increase in the need for such frontend engineers so that other displaced folks can land a job there?

            I think there will be far too few to have any positive impact on IT engineers' overall job prospects.

            • hjort-e 21 hours ago
              But that's equally true for any type of system. Frontend isn't inherently easier than other systems, so I was just wondering why you singled it out. To me AI just seems better at backends and database design.
              • geodel 21 hours ago
                OK, my examples seemed biased against frontend, which was not the intention.

                The thrust was overall job prospects for people in the software field. It is not that frontend is easy, but it is definitely easy to get into. Considering there are far more frontend developers than, say, C++ systems engineers or database designers, in sheer numbers they will be affected more.

                • hjort-e 21 hours ago
                  Ah okay, that's fair. In my country boot camps aren't a thing, so frontend devs are rare and good frontend devs even more so; I think it depends on where in the world you are. We have an abundance of Java devs here that I fear more for.
      • dmazzoni 1 day ago
        There are periods of time where I might spend 80% of my time "coding", meaning I have minimal meetings and other responsibilities.

        However, even out of that 80% of my time, what fraction is actually spent "writing code"?

        AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:

        - Understanding the problem
        - Waiting for the build system and tests to run
        - Manually testing the app to make sure it behaves as I'd like
        - Reviewing the diff to make sure it's clear
        - Uploading the PR and writing a description
        - Responding to reviewer feedback

        There are times when AI can do the "write the code" portion 10x faster than I could, but if it's production code that actually matters, by the time I actually review the code, I doubt it's more than 2x.

        • coldtea 23 hours ago
          >AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:

          - Understanding the problem
          - Waiting for the build system and tests to run
          - Manually testing the app to make sure it behaves as I'd like
          - Reviewing the diff to make sure it's clear
          - Uploading the PR and writing a description
          - Responding to reviewer feedback

          What part of those you think it doesn't help with?

          • malfist 22 hours ago
            There is no shortcut to understanding. No one can understand things for you
            • Animats 20 hours ago
              They can make it unnecessary for you to understand.

              Consider hash tables. Nobody implements a hash table by hand any more. I've written some, but not in this century. Optimal hash table design is a specialist subject. Do you know about Robin Hood hashing? Randomizing the hash seed to discourage collision attacks? A basic hash table starts to slow down around 70% full. Modern hash tables can get above 90% full before they have to expand.

              Who keeps Knuth's Fundamental Algorithms handy any more? I own both the original edition and the revised edition. They're boxed up in the garage. I once read that book cover to cover. That was a long time ago.

              That's not AI. That's solving the problem and putting it in a black box. That's how technology progresses.
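              A sketch of the Robin Hood idea mentioned above, to show why this is specialist territory rather than something you'd re-derive by hand: on insert, an entry that has probed far from its home slot steals the slot of a "richer" resident, which evens out probe lengths and is what lets tables run at high load factors. (A simplified illustration only: open addressing with linear probing, no deletion or resizing.)

```python
def robin_hood_insert(table, key, value):
    """Insert into an open-addressed table: a list of slots, each either
    None or a (key, value, probe_distance) tuple.

    Robin Hood rule: if the incoming entry has probed further from its
    home slot than the resident it lands on, they swap ("steal from the
    rich"), and insertion continues with the evicted entry.
    Simplified sketch: no resizing or deletion; table must have a free slot.
    """
    n = len(table)
    i = hash(key) % n
    dist = 0  # how far we've probed from the home slot
    while True:
        slot = table[i]
        if slot is None:
            table[i] = (key, value, dist)
            return
        if slot[0] == key:  # key already present: update in place
            table[i] = (key, value, slot[2])
            return
        if slot[2] < dist:  # resident is "richer": swap and carry it onward
            table[i], (key, value, dist) = (key, value, dist), slot
        i = (i + 1) % n
        dist += 1
```

              Real implementations layer resizing, tombstone-free deletion via backward shifting, and seeded hashing on top of this core, which is exactly the specialist detail that ends up in the black box.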

              • malfist 20 hours ago
                That's obviously not what I'm talking about. If you're asking an AI to write an optimal hash table algorithm, something is clearly wrong. I'm talking specifically about understanding the business domain and problem you are trying to solve.
                • coldtea 4 hours ago
                  >I'm talking specifically about understanding the business domain and problem you are trying to solve.

                  And that's what people use AI everyday to help with, so?

              • skydhash 18 hours ago
                > That's not AI. That's solving the problem and putting it in a black box. That's how technology progresses.

                The key word is solving. Meaning someone, after coming up with the solution, has taken the time to prove that it works well in all the usual and most extreme cases. With their reputation on the line.

                That’s why you trust curl, ffmpeg, Knuth’s books,… but you don’t trust a random cat on the internet. We don’t trust AI, and the cost of reviewing its output is not a great tradeoff compared to just thinking through and solving the problem.

      • LPisGood 1 day ago
        > The AI is coming for that too.

        That may be true I’m not gonna say one way or the other, but if AI comes for that then almost all knowledge work is effectively dead, so all that’s left would be sales or physical labor.

        • ge96 23 hours ago
          I wonder though, can AI make the next JS framework. I mean that in sincerity, there was the leap from jQuery to React for ex. If an AI only knows jQuery and no one makes React, will React come out of AI.
          • ASalazarMX 23 hours ago
            News: "AGI refuses to make another JS framework, rages on the follies of misguided developers and their wasteful JS crutches"

            Developer community: Wow, we truly have become obsolete now!

            • ge96 23 hours ago
              Who will be the disrupters when there is nothing to disrupt
            • notpachet 21 hours ago
              In a shocking twist, it turns out that Mootools is the agents' preferred framework
          • scj 22 hours ago
            A thought experiment: When all practical software is only written by AIs, will the AIs use goto? What will the programming language of AIs look like?

            My bet is something _like_ assembly, but not assembly.

            That being said, I think humans will still program for fun. Just like we paint portraiture in a world with cameras.

            • r_lee 20 hours ago
              I think it won't be like assembly, because assembly takes more tokens to express the same thing, versus building blocks that pack denser information into them, kind of like how we use libraries and frameworks.
            • ge96 22 hours ago
              Yeah that's my thing for my hardware projects, I'm not going to reach for an LLM to do it, I want to write the code myself/be present. For something new I would consider using LLM to generate something, like a computer vision implementation or something I don't already know. The end result I would know how it works, just for POC.
            • timacles 16 hours ago
              There will be a new language created optimized for AI development
          • wiseowise 21 hours ago
            It can't. Framework hierarchy is largely based on social structure rather than pure technical merit. Otherwise React would've been displaced a long time ago.
          • smrq 23 hours ago
            People didn't leap from jQuery to React. It's a lot easier to imagine an AI looking at jQuery and [insert any server side MVC framework] and inventing Backbone.
        • BurningFrog 23 hours ago
          The history of the last 250 years is inventing new professions as old ones are automated away.

          I expect that to continue.

          • coldtea 23 hours ago
            The history of the last 250 years was moving from agriculture to industrial work to service work. Now the last frontier is starting to be overtaken by automation too.

            (And in all of those transitions, millions were left behind without work or with far worse prospects. The people who took the new jobs were often a different group, not the people who knew the old jobs and were already in their 30s and 40s.)

            And what would be the new professions that uniquely require humans, when even thinking and creative jobs are eaten by AI? Would there be a boom of demand for dancers and chefs, especially as millions lose their service jobs?

          • nitwit005 21 hours ago
            Given some sort of machine with human capabilities, there would be no reason to assign that profession to a human, excepting perhaps cost.
          • georgemcbay 22 hours ago
            > The history of the last 250 years is inventing new professions as old ones are automated away.

            Even if this still holds true ("past performance is no guarantee of future results") the part about it that people handwave away without thinking about or addressing is how awful the transitional period can be.

            The industrial revolution worked out well for the human labor force in the long term, but there were multiple generations of people who suffered through a horrendous transition (one that was only alleviated by the rise of a strong labor movement that may not be replicable in the age of AI, given how it is likely to shift the leverage of labor vs. capital).

            If you want to lean on history as an indication that massive sudden productivity changes will make things better for humanity in the long run, then fine, but then you have to acknowledge that (based on that same history) the transition could still be absolutely chaotic and awful for the lifespan of anyone who is currently alive.

          • charlie90 21 hours ago
            Like doordashing and pokemon card reselling.
            • wiseowise 21 hours ago
              Don't forget OnlyFans and streaming.
              • fireant 15 hours ago
                Doordash and similar are experimenting with autonomous/remotely operated vehicles, and porn will be decimated once good-enough uncensored video-generation AI becomes available. Those don't sound like viable career choices either.
          • timacles 15 hours ago
            This is the kind of sleepwalking that’s about to walk humanity into the next dark ages.

            My parents say a lot of stuff like this. They tend to gloss over the untold suffering, great depressions and world wars that took us to get here.

            The planet's resources were also not at risk of running out back then. As the world is min-maxed by billionaires, once the lower classes are drained of all capital, they will soon move to fighting each other for resources. The future is looking pretty grim in even the most optimistic of scenarios.

            • johnthescott 41 minutes ago
              > They tend to gloss over the untold suffering, great depressions and world wars that took us to get here..

              spot on. you gotta wear shades to survive, future so bright.

          • dvsfish 18 hours ago
            It's happening, but there's no law of the universe that says it has to be 1:1. Why are you so confident in this regard? 250 years is a very small slice of human history and could easily be the outlier.
      • nitwit005 21 hours ago
        > The AI is coming for that too.

        Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.

        > People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away.

        I'm not sure anyone is actually working on those. People talk about spending all day writing CRUD apps here, but if you suggest there are already low code tools to build those, they will promptly tell you it's too complex for that to work.

        • laughing_man 21 hours ago
          >Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.

          Yes. Yes, that's exactly what we're going to see, and more swiftly than people are generally comfortable with. What are we going to do with all those cubicle dwellers?

          • dvsfish 17 hours ago
            A new paradigm of 3-day work weeks: share the salary of the days off with those less automatable, and work to automate everyone. I wish some sort of discussion like this could happen where the workers of the world get to see some of the gains of a new technology more immediately. If "the state" wants to maintain legitimacy and protect its citizenry (one of the primary promises of a state), and avoid a period of social unrest the likes of which has been unheard of for several generations, I think something like this should at least be part of the discussion.
            • laughing_man 4 hours ago
              I don't think it will happen. For one thing, it hasn't in the US since the introduction of the forty hour work week in 1940.

              But beyond that, I don't think most people want a three day work week. They would rather work five days and get the extra money. I worked at a company that did government contracting. We had a couple quarters without much in the way of orders, so instead of laying people off like you'd normally see in that situation, the company decided to go to a four day week, with a commensurate cut in pay.

              I was thrilled, as a young single guy, to get Fridays off. I rented a room in someone's house and hit my monthly nut in about two weeks. But most of the people I worked with hated it. Some of them quit. A lot of them both needed the money and also had no idea what to do with themselves on that extra day.

      • PunchyHamster 1 day ago
        > The AI is coming for that too.

        Current AI tech giants prove over and over and over again that this is not the case

        • cromka 23 hours ago
          We've literally just started, what "over and over" do you refer to?
          • malfist 22 hours ago
            I've been told for the past four years that AI is coming for my job. And that's just not true. It's no closer to that than it was 4 years ago.
            • pjmlp 3 hours ago
              I surely have seen jobs around me being replaced by AI tooling, it is getting closer in corporate consulting.
            • Danox 21 hours ago
              It is the lament of every generation of humans to think that they are the pinnacle of everything that has come before. We are just at the start of the so-called AI era; many very smart people coming up still haven't really got their hands on all of the material available from a hardware and software standpoint. We are still in the early stages.

              I am very optimistic. I just wish I was younger (junior high or high school age) so I could take advantage with my current resources, damn… The oldest lament in the book.

              • davenci 17 hours ago
                What makes you optimistic? Genuinely curious, as I'm looking for how to take advantage of the AI disruption.
            • trustfundbaby 17 hours ago
              I was of the same opinion till Claude Code was released. It's a lot closer now.
            • laughing_man 21 hours ago
              I'm not sure how anyone would know if it's closer or not. There's been a lot of progress in LLMs over the last four years.
            • KronisLV 21 hours ago
              > Its no closer to that than it was 4 years ago.

              There are people and companies out there releasing entire vibe coded projects and for some upwards of 80% of the code they develop is AI-assisted/generated. Since around the end of 2025 and models like Opus 4.6, the SOTA has gotten good enough to work agentically on all sorts of dev tasks with pretty good degrees of success (harnesses and how you use them still matters, ofc).

              • lbrito 17 hours ago
                What's their balance, revenue - AI expenses? Using the real token costs, not the subsidized costs.
              • wiseowise 21 hours ago
                > There are people and companies out there releasing entire vibe coded projects and for some upwards of 80% of the code they develop is AI-assisted/generated.

                And how much revenue do they generate?

            • Tesl 17 hours ago
              I mean this is just fingers-in-your-ears "LA LA LA I CAN'T HEAR YOU!!" stuff.

              I still have a job so AI hasn't taken that yet. But the suggestion it's "no closer" is ridiculous. At least in my life/career/office this last 12 months seems to have been a real inflection point in how AI is being used for software development.

              • malfist 13 hours ago
                Sure, sure. And my CEO believes the singularity happened in Q1. Doesn't make it true
            • kakacik 21 hours ago
              It feels like it's just around the corner. But when you turn the 20th corner and it's still behind the next one, maybe things are a bit different than they seem / than clueless emotions make us believe.

              Long term it's bleak, but short/medium term, not so much: if I get fired it won't be an LLM replacing me but rather company politics, budget changes, etc. That was the only real (very real) risk for the past 15 years too, consistently. But it helps to not work for a US company.

            • esafak 18 hours ago
              Ask some juniors how their job search is going. In five years, ask the seniors.
          • hansmayer 22 hours ago
            > We've literally just started

            5+ years in the software world is like 30 years in others...So...given lacking use-cases and humongous amounts of capital already wasted on chatbots...It's more like "we" are closer to closing curtains than to "just started"...

          • ASalazarMX 23 hours ago
            Hype cycles: AI has made developers obsolete like a dozen times in the last couple of years, at least according to its developers.
          • luckystarr 23 hours ago
            Discovery of the best solution in a problem space is not generative but only verificative. Meaning: the LLM can tell whether one solution is better than another, but it can't generate the best one from the start. If you trust it, you'll get sub-par solutions.

            This is definitely an agent problem instead of an LLM problem. Anybody got something explorative like this working?
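            One way to read "agent problem" here is a generate-and-verify loop: sample many candidates and let the cheap verifier do the ranking. A minimal sketch of the idea (the `propose` and `score` functions are hypothetical stand-ins, with a random draw standing in for an LLM call):

            ```python
            import random

            def propose(rng):
                # Stand-in for a generative step (e.g. an LLM drafting a candidate).
                return rng.random()

            def score(candidate):
                # Stand-in verifier: cheap to run, only compares candidates.
                # Here "best" means closest to 0.5.
                return -abs(candidate - 0.5)

            def best_of_n(n, seed=0):
                # Explorative loop: generate many candidates, keep the best-scoring one.
                # The generator never needs to hit the optimum on the first try;
                # the verifier's ranking does the selection.
                rng = random.Random(seed)
                return max((propose(rng) for _ in range(n)), key=score)
            ```

            This is essentially best-of-n sampling with verifier-guided selection, which is roughly what "reasoning" harnesses do at a larger scale.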

            • coldtea 23 hours ago
              So? Hundreds of millions of office and dev jobs aren't about developing "optimal solutions" to begin with.
      • no_op 22 hours ago
        Even if AI advances continue, for quite a while there's likely still going to be the 'Steve Jobs' role. That is, even if AI coding agents can, in the future, replace entire teams of SWEs, competently making all implementation decisions with no guidance from a tech-savvy human, the best software will likely still involve a human deciding what should be built and being very picky about how, exactly, it should externally behave.

        I don't know if it makes sense to call that person an SWE, and some people currently employed as SWEs either won't be good at this or aren't interested in doing it. But the existing pool of SWEs is probably the largest concentration of people who'll end up doing this job, because it's the largest concentration of people who've thought a lot about, and developed taste with respect to, how software should work.

        • bmiedlar 21 hours ago
          This matches what I'm seeing. I've been building software for a long time, but building more now with AI than I ever could with a traditional team. But the throughput that's helpful is from knowing what to build and what tradeoffs matter. The AI doesn't have that. It's a force multiplier on experience, not a replacement for it.
        • laughing_man 21 hours ago
          How many Steve Jobses do we need as a percentage of people developing software?
      • tjwebbnorfolk 23 hours ago
        >> I understand things and then apply my ability to formulate solutions

        > The AI is coming for that too.

        If this is true, then you'd have to conclude that AI is coming for everything. I'm still not convinced by that. But I am convinced that the part of software development that involves typing code manually into an IDE all day is likely gone forever.

        • itsafarqueue 22 hours ago
          > If this is true, then you'd have to conclude that AI is coming for everything.

          Now you’re getting it

        • flatline 22 hours ago
          It really doesn't have to come for everything to feel like it's taking everything. If it eliminates 10% of white collar jobs over the next decade, the impact will be felt everywhere.
          • bonesss 21 hours ago
            I struggle to understand the logic (in general, the way people are talking): normally efficiencies come with increases in production, scale, and use-cases.

            So if 10% of lawyers get AI'd away, let’s say, the remaining 90% are 1.1x+ efficient and also up against other lawyers enjoying the same… work might go up. And on the customer side there is sooooo much BS with lawyers, but if both lawyer and customer can communicate faster or better with the LLMs, we should see more and better cases with better dialog and case handling. Again, the total amount of lawyering could go up a lot. And then we have the cases prohibitive without the LLMs, now possible for big money. Better LLM-empowered lawyers should be able to create new and more lawyer work.

            As it stands I see people selling services that are subsidized by VC, template jobs we’d be doing faster with copy paste but it’s not copyright infringement when OpenAI does it, and a rush for valuations to soak up VC because the business model isn’t there. I’m seeing a huge uptick in visual bugs on large commercial platforms and customer facing apps, and don’t feel OpenAI is gonna kill Office anytime soon… or Chromium… or Steam… or emacs…

            Call me an optimist, but I think those LLM pump and dumpers are creating a wave of fear that would be quite different if they weren’t lying and trying to boost an IPO. Chat GPT 2 was too dangerous to release, lul, and the class action suits are just getting started.

            An actual lawyer replacing tech company should sell lawyering for infini-money, not pens that’ll totally 10x your lawyering (bro).

            • sophacles 19 hours ago
              And what do those 10% of lawyers do? Every other industry also got reduced by 10+%; it's not like they have a job elsewhere.

              So.... they just starve in the streets?

              Even if some other, arguably better job comes along, would they retrain for it? (You can say yes, but take a look at the long history of people choosing to join a cult and vote for an orange moron instead of learning a new skill).

              Either you're convinced you won't be too badly affected and will gladly watch huge swaths of people suffer, or you're deluded enough to think that it will really, truly be different this time. In any case, I hope you get the worst results of what you preach.

          • tjwebbnorfolk 21 hours ago
            Sure, but who doesn't think that 10% of white collar jobs are mostly bullshit anyway?
            • jplusequalt 17 hours ago
              The roughly 15-20 million people who would suddenly be without a job?
            • esafak 18 hours ago
              The only thing worse than a bullshit job is no job.
      • Aperocky 23 hours ago
        > The AI is coming for that too.

        That's what we fundamentally disagree about.

        Yes, AI is coming for solution formulation, absolutely, but not all of it, because it is actually a statistical machine with a context limit.

        Until the day LLMs are not statistical machines with a context limit, this will hold. Someone needs to make something that has intent and purpose, and evidently not by adding another 10T to the LLM parameter count.

        • bel8 23 hours ago
          > because it is actually a statistical machine with context limit.

          So are humans.

          Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)

          And I argue that current LLMs have surpassed many of my capabilities already.

          For example GPT/Opus can understand and document some ancient legacy project I never saw before in minutes. I would take a week+ to do the same and my report would probably have more mistakes and oversights than the one generated by the LLM.

          • KalMann 21 hours ago
            > So are humans.

            AI advocates are _way_ too confident about the nature of human cognition. Questions that have been debated by philosophers and cognitive scientists for decades are now "obvious" according to you people, though you never provide any argument to support your statements.

            • NewsaHackO 14 hours ago
              Are you suggesting a non-physical reason for human cognition?
          • Aperocky 22 hours ago
            We are not pre-trained using the summary of all human knowledge over all of history. Yet we make certain decisions with much more ease.

            We are much more limited, but we fundamentally work differently. Hence adding more parameters, like certain companies are doing, isn't necessarily going to help. We need to rethink how LLMs work, or how they work in tandem with something that's completely different.

            I think it's doable; I just don't believe it's LLMs, and I don't think anyone now knows what it is.

            • bel8 22 hours ago
              > We are not pre-trained using the summary of all human knowledge over all of history.

              But we are? That's our education system.

              The only reason school doesn't try to shove more information in our brains is because we hit bandwidth limits.

              • KalMann 21 hours ago
                > But we are? That's our education system.

                That is not what the education system does. That's an obvious distortion of reality. LLMs train over billions of documents to statistically predict the next word, in order to mimic humans' natural language learning ability. And there has been continued evidence of the limitations of this approach for capturing the totality of human cognition.

          • leptons 16 hours ago
            >Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)

            Do you have any idea how many calculations it takes for a human to put a ball through a hoop while running across a court?

            It could be millions or billions in a second. Manifesting consciousness, coordinating body movements, and everything else all at the same time takes calculations.

            You may not be aware that your brain is doing multiplication, or any other kinds of math, constantly, but it is.

            • bel8 10 hours ago
              I agree we do some marvelous things in sports but if we extrapolate from this table tennis robot, it's clear machines can/will do just as well there too:

              https://www.youtube.com/watch?v=VVEzgYxDdrc

              • leptons 8 hours ago
                That table tennis robot is not conscious. That table tennis robot does one thing well. A human is capable of far more. There is far more going on for a human playing table tennis than for a robot. It doesn't matter if the table tennis robot plays table tennis better; it can't also play hockey, soccer, football, basketball, chess, polo, baseball, or many other things one human can do.

                The human condition is nothing but a massive amount of calculations under the hood. You don't feel it, or understand it, but it's there. Everything in nature is math; every physical phenomenon has a cause and effect rooted in mathematics, and it's no surprise that humans are great at subconsciously calculating myriad things on the fly, as life is happening around us.

        • itsafarqueue 22 hours ago
          Yours is a “God of the gaps” argument. You will remain technically correct (the best kind of correct!) long after the statistical machine has subsumed your practical argument, context limit and all.
          • Aperocky 22 hours ago
            I fall into the "pessimistic heavy user" camp, I burn thousands of $ worth of SOTA tokens monthly but it just makes me more acutely aware of the limitation and amount of work I need to do to work around them and what kind of decision that I should reserve to myself instead of trusting the LLMs to do.
        • coldtea 23 hours ago
          >but not all of it, because it is actually a statistical machine with context limit.

          And the human mind is not?

          • KalMann 21 hours ago
            I can give you the exact mathematical formula used to statistically optimize the output of a neural network from input examples. Can you do the same for the brain?
            • coldtea 4 hours ago
              Not atm, but does it matter?

              Is it similar, even if much simpler, to the sort of process that goes on there too? That's the important question.

          • nothinkjustai 23 hours ago
            It’s not.
      • bborud 1 day ago
        > The AI is coming for that too.

        To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.

        This was something I learnt in my very first job in the 1980s. I worked for someone who did industrial automation beyond PLCs and suchlike. He spent 6 months working in the company. On the factory floor, in the logistics department, in procurement, in accounting and even shadowing the board. Then he delivered a proposal for how to restructure the parts of the company, change manufacturing processes, and show how logistics and procurement could be optimized if you saw them as two parts in a bigger dance.

        He redesigned the company so that it could a) be automated, and b) leverage automation to increase the efficiency of several parts of the business. THEN, he started planning how to write the software (this was the 80s after all), and then we started implementing it.

        Now think about what went into this. For instance we changed a lot of what happened on the factory floor. Because my boss had actually worked it. So he knew what pain points existed. Pain points even the factory workers didn't know how to address because they didn't know that they could be addressed.

        I was naive. I thought this was how everyone approached "software projects". People generally don't. But it did teach me that the job isn't writing code. It is reasoning about complex systems that often are not even known to those who are part of them.

        And this is for _boring_ software that requires very little creativity and mostly zero novelty. Now imagine how you do novel things.

        > People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.

        You make it sound like it is a bad thing that certain tasks become easier.

        I spent a lot of time writing CRUD stuff, because the things I really want to work on depend on it. I don't enjoy what is essentially boilerplate. Who does? If you can do the same job in 1/20th the time, then how is this a bad thing?

        It is only a bad thing if writing CRUD webapps is the limit of your ability. We don't argue for banning excavators because they put people with shovels out of work. We find more meaningful things for them to do and become more productive. New classes of work become low-skilled jobs.

        If you have been doing software for a while, you are probably doing some subset of this. But these things are hard to articulate. It is hard to articulate because it is not something we think about. Like walking: easy for us to do, hard to program a robot to do it.

        • coldtea 23 hours ago
          >To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.

          1 person needs to do that. But the other 100, who aren't doing that currently to begin with and are doing the AI-automatable work?

        • SoftTalker 1 day ago
          > To some degree yes, in practice, not so much.

          We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need? I think it's mostly a cope.

          We have robots walking just fine now, by the way.

          • sarchertech 23 hours ago
            If they can do those things they can effectively replace any white collar job. That’s about 45% of the workforce. Societies tend to collapse around 25-30% unemployment.

            Imagine 45% of higher than average paying jobs gone.

            If that happens we’ll either figure out a new economic system, or society will collapse.

            Also saying robots are walking just fine is misleading for any definition of just fine that is anywhere near as good as a human.

            • ryandrake 23 hours ago
              Look at how the billionaires are talking about AI: Their clear, unambiguous goal is basically to replace all white collar "knowledge" jobs. And there's currently nothing regulatory that's stopping them--they just need to wait for the state of the art to improve. Once AI is "good enough" if it ever is, they won't even think twice about 45% unemployment. What are we unemployed workers going to do about it? There's no effective labor organization left. Workers have basically no political power or seat at the table. We're not going to get violent--the police/military are already owned by the billionaire class. We're just going to eventually become economically irrelevant and die off.
              • geodel 23 hours ago
                > We're just going to eventually become economically irrelevant and die off.

                As harsh it may sound, it seems rather likely to me. It is not like s/w engineers have helped struggling workers in other sectors other than sanctimonious "Learn to code" advice. So software folks can't expect any solidarity or help from others.

              • kiba 23 hours ago
                The fundamental issue isn't unemployment due to automation, but the fact that society cannot benefit from unemployment.

                It should be something for us to celebrate, because it means greater freedom for humans to pursue something else rather than spending time doing drudgery.

                • shinryuu 22 hours ago
                  Put it another way, the issue is that resources are not shared more equitably. This is especially egregious considering that LLMs are trained on all human knowledge. We've all been contributing to this enterprise, and what we may end up getting in return is unemployment.
              • monknomo 23 hours ago
                45% of folks sitting on their hands are going to have the free time to talk, and this group of people are skilled at organization. Are you planning on throwing your hands up and passively accepting whatever comes your way?
                • rootusrootus 22 hours ago
                  And at least in the US they have >45% of all the small arms weaponry. There is no bunker strong enough nor private army big enough if 100M people come for you.
                  • ryandrake 21 hours ago
                    They're probably betting that the technology they will need to defend their bunkers, think autonomous kill-bots or whatever, will emerge before people start to riot.

                    Or they're planning to build an Elysium-like colony in the ocean or space, to keep the billionaire class far from danger.

              • rootusrootus 22 hours ago
                I get that it is popular to hate billionaires these days, but realistically, they did not get to be billionaires by being stupid. It runs directly counter to their own interests to induce anything like 45% unemployment. They will get poorer, the world they live in right along with the rest of us will get noticeably shittier, etc.

                More likely they figure out what to do with a bunch of idle talent. Or the coming generation of trillionaires will.

            • BurningFrog 23 hours ago
              It's important (and calming) to understand that since the Industrial Revolution started ~250 years ago, we've automated away most jobs several times over, while employment levels have stayed pretty constant.

              "Automating half the jobs" is the same as "double productivity per worker".

              When the doubling happens in 5 years rather than 50, it might be more disruptive, but I'm convinced we're on the verge of huge improvements in human standard of living!

              • wartywhoa23 22 hours ago
                What in the current state of world affairs outside of IT do you think is indicative of that potential for huge improvements in human standard of living?
                • BurningFrog 21 hours ago
                  It's what I mentioned:

                  If we double productivity per worker, we have twice as much wealth on average.

                  I know there are angry people convinced that this will all be consumed by billionaires and jews, but historically that is not at all the track record of the last 250 years, and I expect that to continue.

                  • timacles 2 hours ago
                    If you are going to bring up history you should really look into what it took to redistribute wealth from oligarchs in the past.

                    The fact that oligarchy now has more resources than ever in the history of humankind, a means to mass surveillance and generating mass propaganda, those wealth redistributions are looking much MUCH harder to accomplish.

                    Yea, historically it will inevitably happen. Realistically it will be after the new version of fuedalism and dark ages. So strap in for the next 400 years aren’t looking too good

                  • jplusequalt 17 hours ago
                    >If we double productivity per worker, we have twice as much wealth on average.

                    That's not true. There are other factors at play such as demand.

                    If we make the average IT worker twice as productive, that doesn't mean now every IT worker is being paid twice as much, because most users aren't going to care if there are twice as many options on the app store, or twice as many bug fixes per release.

                    • BurningFrog 1 hour ago
                      Consumption in a society will always be roughly equal to production.

                      There are differences due to import/export balance, investments, government borrowing etc, but as a first approximation, if GDP increases by 10%, consumption will rise by a similar amount.

                      About your IT worker example: Let's say s/he produces $150k/year in value and is paid $140k. If AI makes them produce $300k of value, they may not automatically get a raise. But it becomes very attractive for another employer to hire them for $200k or $250k, or even $280k.

                      In the medium/long term, I don't see why wages wouldn't stay roughly proportional to produced value.

          • bborud 23 hours ago
            We never noticed how easy the code writing part had already become because it happened slowly. Through mechanical means, through the ability to re-use code, and through code generation.

            Heck, even long before LLMs about 10% to 30% of my code was already automatically generated. By tooling, by IDLs and by my editor just being able to infer what my most likely input would be.

            > We have robots walking just fine now, by the way.

            I don't think you got the point I was trying to make.

            • SoftTalker 23 hours ago
              True, but I guess I see a distinction between scaffolded/templated boilerplate or autocomplete and actual application logic. People have generated boilerplate from templates for ages, as you say. RoR is maybe a pretty good example, but there wasn't even early-days AI involved in doing that.
          • phkahler 23 hours ago
            >> We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need?

            Because they are currently "generative AI" meaning... autocomplete. They generate stuff but fall down at thinking and problem solving. There is talk of "reasoning models" but I think that's just clever meta-programming with LLMs. I can't say AI won't take that next step, but I think it will take another breakthrough on the order of transformers or attention. Companies are currently too busy exploiting the local maxima of LLMs.

            • rootusrootus 22 hours ago
              > Companies are currently too busy exploiting the local maxima of LLMs

              I get the feeling we can already spot the next AI Winter. Which is okay, we need a breather, and the current technology is useful enough on its own.

          • terseus 1 day ago
            > Why do we believe that LLMs are going to stop there?

            Why do you believe they wont? I think it's reasonable to assume that we will hit a ceiling that current models will not be able to break.

            > We have robots walking just fine now, by the way.

            Walking and reasoning are unrelated abilities.

            • SoftTalker 23 hours ago
              Walking was given as an example of "hard to program a robot to do it" by GP. Well, now we have robots that can walk.

              What evidence is there that LLMs have hit a ceiling at being able to do things like talk to users or stakeholders to elicit requirements? Using LLMs to help with design and architecture decisions is already a pretty common example that people give.

      • vga1 23 hours ago
        >bosses

        The AI is coming for those too.

        • snozolli 21 hours ago
          Something like five to ten years ago, when AI hype was starting to hit media, one of the claims was that AI would come for middle-management first. Since middle-management can generally be described as collecting information from underlings and reporting information to upper management, their work was supposed to be easy to automate with AI. As far as I can tell, this hasn't proven to be true at all, and we software engineers proudly wrote ourselves out of work by constantly publishing our source code and discussing it openly.
      • thisisit 23 hours ago
        A lot of people don't seem to get that: it is easier to go from terrible to average, but much harder to go from average to good.

        I am sure the AI bros are the same people who were convinced consumer-grade fully automated driving was going to happen "by the end of the year" for the last 7 years.

        • Peanuts99 21 hours ago
          I agree with the statement and think a lot of people miss this, but I also wonder how many people probably don't care for good, they only care for 'good enough'.
          • manmal 20 hours ago
            Many large systems can't be built merely 'good enough', because they just fall apart. Try letting a junior dev make an ERP or a database system.
        • lostmsu 23 hours ago
          No, I never believed in the fully automated driving tale by Tesla, but as LLMs improve my personal estimate for the date of human-level AGI is rapidly moving toward "present". Before GPT-2 I had it somewhere around 2100; at GPT-2 I thought maybe by 2060 if we were lucky. Now I think it is 2035 or maybe even sooner.
          • rootusrootus 22 hours ago
            I like to see the optimism, even if I don't share it. I think it's incredible hubris that humans think we are about to reinvent our own level of intelligence, just because we made a machine that talks pretty.
            • lostmsu 19 hours ago
              Your own comment in my timeline is 7 years out of date. GPT-2 talked pretty, that was its whole thing. If you are trying to claim there's no difference between 5.5 and 2 you are delusional (hallucinating?).
              • rootusrootus 18 hours ago
                I think I was fairly clear, I said that I think it is hubris to think what we have created is anything even slightly like human intelligence. It talks very pretty (a lot of work has gone into this aspect in particular), and it does demonstrate the extent to which, as individuals, most of us do not have especially unique thoughts nor problems to solve. It exposes how quickly humans jump to anthropomorphizing pretty much anything.

                Is it a handy tool? Yep! I use it every day. But it is laughable to think this is the path to AGI. The most common counterargument on HN is some variation of "but you can't prove that this isn't just like how a human thinks". A conspiracy theory at best, just reinforcing the fact that we know very little about how even simple non-human brains function.

                • lostmsu 17 hours ago
                  You do you. I stick to the simplest reasonable definitions. From my perspective we are already in AGI, just the intelligence isn't quite on human level yet across the board.

                  I have yet to see anyone saying it's just like a human, so it looks like you are mostly hallucinating that too.

                  You didn't address my point on GPT-2 vs 5.5. Your only relevant claim is that 5.5 talks very pretty vs 2 just pretty I assume. Well, you have to be blind to claim this is the main difference.

      • at-fates-hands 22 hours ago
        >> Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.

        Anecdotal evidence to support this.

        I work with both dev and design teams. Upper management has already gone through several layoffs and offshoring of the two dev teams I work with. The devs they did keep were exactly what you said: the capable ones who reliably closed their Jira tickets and never missed a deadline for building their features or components. And now? Their work has tripled, and the only help they get from management is "Start to figure out how to leverage AI, we're going to be in a hiring freeze for the next 10 months."

        The double whammy of losing onshore team members and getting no help from management to fix the problem they just created, beyond essentially being told to figure out how to use AI to keep up, is pretty staggering.

        I would echo what one of the devs told me: "If this is the new 'AI era' then you can count me right the fuck out of it."

      • oblio 1 day ago
        >> I understand things and then apply my ability to formulate solutions

        > The AI is coming for that too.

        In that case all [1] non manual work is doomed, until robotics has an LLM moment.

        [1] With the exception of all fields protected by politics or nepotism.

        • rootusrootus 22 hours ago
          > all non manual work is doomed

          All work in general. Knowledge workers can still do manual work, and will compete to do so when there is no option to continue what they do today.

      • wiseowise 21 hours ago
        [flagged]
    • therealpygon 1 hour ago
      I agree in some ways, but I think this also overlooks that while your job might be like that, many “developer” jobs decidedly are not; what you describe is more engineering. Many people are able to have a career making basic HTML website changes. Are they not developers? Will their job not potentially be replaced by an AI that can make that change in seconds?

      It’s weird that people always seem to argue the extremes when reality is a jumbled mess in the middle. Will developers lose jobs to AI? Without question. Will many “developer” jobs be eliminated because of that? Without question. Is it probably a really bad time to think you can go from your retail job to fixing people’s websites as a lifetime career move? Yeah, probably not the best idea. Would it be smarter to focus on becoming a “Software Engineer” instead of a “Developer”? Yes, usually. Does that mean it is a bad idea for EVERYONE to choose to become a developer? No, and that would be a dumb thing to argue.

      We’re still going to need developers and definitely engineers, we are just going to need fewer of them in their current form, just like we needed fewer saddle makers, farriers and blacksmiths. We didn’t stop needing Horse Mechanics, we just needed fewer of them because we needed Car Mechanics. Some of those skills transferred, some didn’t.

    • amw-zero 20 hours ago
      I agree in principle, but I think the 2-5% estimate is extremely low. I could be sold on most developers spending ~25%, up to 40% of their time on code. But very few people are spending 2% of their time on it. Unless you're some sort of super senior staff / advisor to the CTO at a gigantic company, which has already placed you on rare terrain.
      • siren2026 19 hours ago
        Most people overestimate how much time they spend "writing code".

        I have interviewed a ton of people in my career, and I ask "how much of your time was spent writing code in your last job?". The more junior the person, the more they would overestimate the time spent writing code (some would say 90%!). Once they joined I was able to see how much time they really spent writing code, and it is almost never more than 30%.

        Mostly because the code is only the final output. You spend most of your time doing research, talking to people, working on quarterly OKRs, going to meetings, etc.

        If you just write code, you are either an extremely junior person who works on things trivial enough not to require research, or you are deluded and don't realize you spend most of your time doing other things.

      • d3rockk 19 hours ago
        Might be closer to accurate if the 2-5% is his estimate of the physical time spent making keystrokes.
        • bee_rider 19 hours ago
          Surely we should only count the time actuating the key. Apple keyboard users are in shambles.
          • layer8 18 hours ago
            Only the keydown, not the keyup.
        • mh- 17 hours ago
          That's <24 minutes per 8 hour day.

          If you're reading this and that matches your experience as an IC SWE whose job is ostensibly developing software.. you're either trapped in a very atypical org, or you're heading for a PIP.

        • siren2026 19 hours ago
          Nope. I would bet most people really only do 2-10%.

          But we would like to convince ourselves we don't.

    • hateful 1 day ago
      Not sure where I first heard this, but I say it to my team all the time: "Programming is thinking, not typing"
      • strbean 22 hours ago
        I know an accomplished CS professor, ACM fellow, cited in Knuth's TAOCP (as well as being an easter egg!), who still hunt-and-pecks. In fact, he hunt-and-pecks incredibly slowly.

        Seeing him type really reinforced this idea.

      • the_hoffa 23 hours ago
        I've always told my Jr Engineers to "think twice, code once".

        If I gave them a task and they immediately started typing it out, I would tell them to stop typing and ask them to explain to me what they were doing; they'd often just spit out what they thought the code should do, and I'd often point out edge cases they missed and would have missed had they just spit out code and a PR, wasting everyone's time. I would also insulate them from upper management to give them time to actually think (e.g. I wouldn't be coding so they could think then code).

        To your point and to the GP's point, and one point I keep raising with LLM's: "typing is not where my time sinks are"

      • bborud 4 hours ago
        A former colleague of mine used to work for a boss who would periodically stick their head into the office where the programmers were and yell "I can't hear typing! Why are you not working!?".

        The reason I just remembered that is that the other day they proudly announced that everyone in their company would now be vibe-coding exclusively.

      • CodeMage 22 hours ago
        That's very true, which is why I find it insulting that so many AI proponents use the word "typing" to refer to writing code. It carries an implication that if you enjoy writing code by hand, you enjoy a mindless activity.
    • brandensilva 22 hours ago
      I remember being that kid in high school who hit math and logic problems hard, which contributed to me being very technical and learning to push through painful mental challenges regularly. Out of my graduating class, not many of us went on to become engineers, and for a reason: it isn't easy work by any means, and I'm guessing it's quite draining for people who don't use their brains like we do.

      So while AI will change the industry, I don't see any reputable company firing the smartest ones in the room in favor of junior-level intelligence.

      Even with it advancing, someone has to be responsible for when it screws up, which we know it will.

    • ravenstine 20 hours ago
      This answer makes two big assumptions that haven't been proven out yet.

      - Understanding code without writing it is as viable as understanding code that you've worked with directly or indirectly

      - Businesses care that you understand code

      I really doubt the first one. Traditionally, understanding a code base in large part came from working with it intimately and building that muscle memory. The idea that understanding code by reading it is as good as understanding it from writing it, in my opinion, is not realistic.

      Whether businesses care that their engineers (whom they increasingly view as monkeys at LLM typewriters) understand the code remains to be seen. I don't think they particularly care whether their code runs slow and is buggy so long as it works just enough to churn out features and continue to pull in income.

      • simonw 20 hours ago
        > The idea that understanding code by reading it is as good as understanding it from writing it, in my opinion, is not realistic.

        As one of those developers who has written almost no significant code by hand since November 2025, but has produced a great deal of working software, I still understand the majority of the code I've produced just as well as if I'd typed it myself.

        I may not be typing it myself, but I'm manipulating it constantly. It's not as simple as "reading" it - I'm reading it, executing it, figuring out refactorings for it, having tests built for it, having documentation built for it, sometimes writing that documentation myself, spinning up example scripts that use it, then building new code that depends on that previous code.

        It's that act of exercising the code that gives me confidence that I understand it.

      • foobarian 20 hours ago
        > understanding it from writing it

        On the surface it sounds weird - why would this be?

        Possibly because building a system is not a one-shot step, but a process of many iterations, each of which involves experiments in production and more lessons learned. So at the end of the process, you don't just have N lines of working code, but also N lessons learned along the way. So presumably with the AI process we miss out on half the value.

        Now the going thesis is that this extra value is unnecessary if we take the plunge and don't look back. My gut says the answer is somewhere halfway, I guess we'll see.

    • czhu12 22 hours ago
      Isn't the long term trend just that we don't need as many engineers, not that there will be no more software engineers?

      Theres another, different loop I keep seeing which is:

        - Company A lays off engineers citing AI efficiencies
        - People say it's because of over hiring during 2020
        - Company B lays off engineers citing AI efficiencies
        - People say it's because it was never a good business
        - Company C lays off engineers citing AI efficiencies
        - People say it's because there's a recession
      
      I guess to cite a counter example, unemployment is still super low and software jobs are still holding up, but the bear case is that eventually 5% of the people will be able to do what everyone does today, and the demand for software won't grow at the same pace.
      • Xirdus 21 hours ago
        If company A is Amazon, company B is Ubisoft, and company C is Oracle, then I think it's very likely there isn't any pattern or "loop" here and it's legitimately just 3 different companies in 3 different situations doing layoffs for 3 different reasons but all 3 reaching for the same PR playbook. "We're leveraging AI to increase productivity" is the new "we're streamlining our business and focusing on our core products".
    • sefrost 23 hours ago
      Only 5% of your time is spent writing code? That sounds like a low estimate for most software engineers I work with.

      May I ask if you could estimate how you spend the other 95% of the time?

      • hatthew 22 hours ago
        In no particular order

            - Meetings
            - Reading papers
            - Understanding legacy code
            - Reading internal news
            - Ad hoc chats with coworkers
            - Writing docs
            - Editing configs
            - Thinking about solutions
            - Slacking off
            - Analyzing results
            - Testing code
            - Reviewing PRs
            - Understanding others' ongoing projects
        • throwaway2037 10 hours ago

              > Slacking off
          
          I laughed when I read this, but there is something to it. I like to say "intellectual relaxation" or take a break. Sometimes getting up from your desk to do some mindless admin task like photocopy a document for HR can free up your mind. If we were line workers at a factory, this would be mandated breaks. Business/Financial newspapers and factory executives love the old quote: "With robots, they never need a break, never need holiday, and can work 24x7." With the advent of agentic LLMs, a tiny fraction of that reality is leaking into the white collar world.
        • PizzaBorsch 21 hours ago
          AI can do everything you listed except chats with coworkers and slacking off.

          I just don't think you've utilized the most recent versions of codex or claude.

          • hatthew 16 hours ago
            It's definitely theoretically possible, but not there yet. I use Cursor, Claude (Opus 4.7), and several proprietary LLMs/LLM frameworks at my job. The institutional knowledge I have wouldn't fit in the context window, and AIs lack my mental index/intuition of where to look for answers. When my AI makes a PR, I generally have to make some important changes, without which its solution would be fundamentally broken. AI also cannot be trusted to make the right business tradeoff decisions.
      • Enginerrrd 21 hours ago
        It sounds plausible to me since this is pretty much on par with most other engineering disciplines. I’m a civil engineer. My responsibility is ultimately mostly to produce a constructable plan set. I spend far less than 5% of my time drafting or modeling.
      • davidw 23 hours ago
        Commenting on Hacker News?
        • wartywhoa23 22 hours ago
          For those who claim to be developers who code no more than 5% of their time and resort to arguments like "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?", it's not commenting, it's shilling for the AI corpocracy on HN.
          • truncate 21 hours ago
            >> "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?"

            I never got that argument. Compilers are deterministic, well-specified algorithms (a few, like CompCert, are even formally verified). If you understand what a compiler does, you can have a pretty good idea of what it will produce. If it doesn't do that, it's a bug. The definition of correctness is well defined by semantic equivalence.

            LLMs are none of that. They are fuzzy systems that approximate your intent and do their best. I can make my intent more and more specific to get closer to what I want, but given that it's all just regular spoken language, it's still open to interpretation. All of that is still quite useful, but I don't get the assembly language comparison here.

            • pjmlp 3 hours ago
              Because compilers are only deterministic when using ahead-of-time compilation, without profiling data, and always the same set of compiler flags.

              Introduce dynamic compilation, profiling data, optimization passes, multiple implementations, ML driven heuristics, and getting deterministic Assembly output from a compiler starts to get harder to achieve.

          • cobbzilla 19 hours ago
            By extension, does this imply that all the HLL advocates from decades past were shilling for compiler companies?
        • icedchai 23 hours ago
          In all seriousness, communications consumes a lot of time. Meetings, emails, Slack messages, pestering stake holders and other developers...
          • hjort-e 23 hours ago
            If you spend 95% of your time on that stuff, you better be working on like critical infrastructure where nothing can go wrong, otherwise you are in an incredibly dysfunctional company.
            • icedchai 22 hours ago
              I agree it would be absurd for it to take 95% of your time. I have, however, seen that it takes a lot more time than one would think.

              I did some contracting work for a severely dysfunctional meeting heavy organization and it was about 2 hours of meetings for every hour of real technical work!

              • hjort-e 22 hours ago
                Ah yes, agreed. If it's more than 90%, it just signals to me that a developer's skills are probably being wasted too much on business/coordination stuff.

                But I guess if we mean actual time tapping your keyboard making code, then it's true some days for senior+ devs, but definitely not for technical work overall.

              • fragmede 22 hours ago
                So about 26 hours of meetings to 13 hours of "real technical work" per week, but that's 33%, not 5%.
              • skydhash 22 hours ago
                Even when it’s not dysfunctional, you spend a lot of time on communication and reading stuff other people wrote (including code). It’s very rare to work in isolation.
                • hjort-e 22 hours ago
                  I guess it depends on what you feel coding is. To me it's the architecture planning and reading other people's code, not just writing code. If we say it's just typing, then 95% is not absurd, no.
                  • skydhash 22 hours ago
                    > it depends on what you feel coding is. To me it's the architecture planning and reading other people's code, not just writing code

                    And that would be where we disagree. I don’t read code to look at code. When I’m reading code, I’m looking for the contracts to follow when interacting with a system. It would be nice if it were documented, but more often than not you have to rely on code.

                    It’s very rare that I plan with a technical mindset. Yes I use the jargon, but it’s all about the business needs. Which again create contracts.

                    Same with writing code. Code is like English for me. If I don’t have a clear idea on what to write, I stop and do research (or ask someone). But when I do, it’s as straightforward as writing a sentence.

                    • hjort-e 21 hours ago
                      Huh? So you don't research if something is technically feasible before you promise your stakeholders a delivery time / price estimate?

                      We all do the same stuff; the disagreement is just about what you feel coding is and whether you think technical work is the same thing or a superset. If you as a software dev aren't hands-on with planning or working more than 5% of your time, you are basically a PO with a programming hobby.

                      • skydhash 18 hours ago
                        > So you don't research if something is technically feasible before you promise your stakeholders a delivery time / price estimate

                        I believe 99% of requests are not about what’s technically feasible. And the rare times I encountered one of those, my answer has mostly been “you don’t have enough resources to try solving that problem”.

                        If you know your fundamentals well, very often you will find the same common blocks everywhere. People much smarter than me have solved a lot of fundamental issues, and it’s rare that I see a business request that doesn’t reuse the same familiar stuff.

                        That’s why coding is mostly boring. You follow the same patterns again and again. But what dictates the flows are the business parameters. And that’s why most seniors spend so much time gathering good requirements: the code is straightforward after that.

      • varispeed 22 hours ago
        The least experienced developers write the most code. Juniors will spend the whole day in the IDE: typing, testing, typing, etc. Senior developers will go to a park for a few hours, think, then come back and spend an hour or less typing code that just works, or write nothing at all, maybe even delete code. Instead they might update documents, or ask for clarification about edge cases they found or errors in planning that were not considered.
        • nomel 22 hours ago
          Since software is in every industry of man, I think you'll need to mention which industry this perspective is coming from. This is definitely NOT the case in certain industries.
          • varispeed 21 hours ago
            Finance, web services, service integration
        • sefrost 18 hours ago
          I don’t know if that’s true for most of us, who simply work on CRUD apps. Maybe I’m in a bubble though.
      • FatherOfCurses 21 hours ago
        Sneering at "kids these days"
      • mxksisksm 23 hours ago
        [dead]
    • rhubarbtree 21 hours ago
      If you’re a developer and you were writing code 2% of your time pre-Claude, that’s 9 minutes a day; you will and should be fired.
      • wan23 18 hours ago
        Things besides writing code that you might be doing:

        - Meetings

        - Code reviews

        - Manual testing

        - Deployments and more testing

        - Triaging issues

        - WTF how did this bug happen?

        - JIRA in general

        - Whiteboarding sessions / Design docs

        - Interviews

        - 1:1s (mandatory ones)

        - 1:1s (networking / problem solving / political alignment)

        - Whatever your company's version of corporate extracurriculars is

    • nomel 21 hours ago
      The perspective here is "lifetime career", so you need to project out 30 years here, for a meaningful argument.

      I think, much sooner than that, you'll have AI pumping out practically complete implementations that meet the requirements of function, set by the people who desire that function. THOSE people will be the developers, and will be more akin to technical "creatives", more on the product side, than the developer side.

      • AllanSavageDev 20 hours ago
        Someday people are going to get tired of "programming in English" with prompts, getting inaccurate output, etc., and someone is going to invent a higher-level kind of CODE that allows the user to directly specify the actions the computer should take to solve the problem. Later someone will invent a kind of tooling that COMPILES these CODES into a runnable thing, skipping the prompt part altogether. It might be called something like Unified Prompt Language.
        • mawadev 11 hours ago
          I think we can call it Claude++ or short C++
    • DeusExMachina 6 hours ago
      > Natural selection will take care of them in due course.

      While you are seemingly not at the moment, some day you might be on the receiving end of that "natural selection" in ways that seriously impact your remaining time on the planet.

      In that case you might reconsider your stance, and especially question how natural a selection it is when a few powerful rich people deprive others of their way to earn a living and to draw meaning from their lives.

      The AI revolution keeps getting compared to the industrial revolution, but people keep forgetting the consequences of that one.

      • bborud 3 hours ago
        I'm not terribly worried. The reason I am not worried is that software isn't my only marketable skillset. That is deliberate. Even though I see myself as primarily a software engineer, in the past decade I've worked in areas that tend to be viewed as wildly different strata and domains.

        And if the apocalypse comes, I'm actually not that bad at a handful of skilled blue collar jobs.

        The people who should be worried are the ones with narrow skill-sets and no capacity for dealing with rapid change. Especially if those skills are shallow too.

        But I wasn't talking about people. I was talking about companies. And the reason I'm not worried about companies going under is that they have gone under all the time since the start of the industrial revolution. Yes, it happens faster and more violently today than before, but neither the churn nor the reasons are all that new. They just need to be understood so you can deal with change rationally and without panicking.

        It is a good idea to read up on historic innovation/disruption cycles and realize that they are nothing new. The only reason people think this is a new problem is that 50-100 years ago they used to take about as much time as your productive career. So people wouldn't need to understand how to deal with it. And every generation would be convinced that this is some unexpected and unique upheaval that only their generation has to deal with.

        My stance is the only one that works well during disruption: you make sure you have more legs to stand on and you don't waste time fretting over things you can't change. If you find yourself out of options, you can only blame yourself.

    • AlexCoventry 23 hours ago
      You don't think AI is going to be able to understand things and apply their ability to formulate solutions better than you, in the near future?
      • koonsolo 21 hours ago
        In 2000 I learned about this old technology called "neural networks".

        AI really depends on long winters and rare breakthroughs. Deep neural networks were the most recent breakthrough.

        The iterations you currently see are just adding more storage, but the fundamental neural network structure doesn't change.

        I'm confident AGI will not be achieved by the LLM architecture, and when the next AI breakthrough will come is anyone's guess. But if you take history into account, it will take a while.

        • jghn 17 hours ago
          Yes, same. In the late 90s through early aughts I was taught over and over and over again that neural networks were a dead-end concept and would never amount to anything.

          Just like all the preceding AI booms, this one will hit its maximal point, the hype train will fizzle, the best parts will just become "normal", and then a couple of decades later something new will come to push the boundary again.

    • ljm 17 hours ago
      We switched to 'software engineer' to encapsulate that, I think. You can receive requirements and churn out code or you can go up a level and think about the solution. Go another level up and think about the problem. Another level and it's the context of the problem. Further than that and it's the priority of it. And even higher up is how it fits in the product roadmap and the architectural decisions.

      At some point you stop developing and start weighing up the requirement against your understanding of the system and the environment it works in.

    • xracy 17 hours ago
      There's an old Chemistry joke, that I've reapplied to Software Engineering, and it goes something like:

      A New Engineer (NE) shows up on their first day on the job, notebook in hand ready to learn. They get assigned to shadow an Experienced Engineer (EE) for their first day.

      EE: Now, the thing is, for any project on our team, you only need to change about 3 lines of code.

      NE, preparing to write down notes: Which 3?

      EE: Well, it depends.

      (Originally about Material Safety Data Sheets, and there only being 3 relevant lines on them).

      I think this is what people miss about Software Development.

    • atleastoptimal 17 hours ago
      LLMs also can “understand things and apply their ability to formulate solutions”. There is nothing that will inherently limit AI from doing all knowledge work (and all physical work once robotics is good enough).

      Of course developers could just move up the “next level of abstraction” and become managers of agents who write the code, but eventually AI becomes a better manager of agents than even the best humans, at which point there is no contribution a human can make that an AI model or system of models couldn’t do better.

      • lambdas 17 hours ago
        > There is nothing that will inherently limit AI from doing all knowledge work

        Resources is one. Energy, water, cost. There seems to be diminishing returns in intelligence at the moment, whilst power and memory usage continues to go up.

    • intelVISA 6 hours ago
      Well said; the only flaw is the unfortunate realization that "I understand things and then apply my ability to formulate solutions" is rarely required. How many zombie corps are still roaming these days?

      Judging by how many day to day tech products in my life are buggy, slow or user-hostile there can't be more than 50-100 tech companies actually innovating, right?

    • pjmlp 19 hours ago
      Usually that means you're already a senior developer, understanding things and formulating solutions is part of work delegation.

      Now those juniors whose job is to implement those solutions, they will have a hard time.

      In my 50s, I also don't write as much code as I used to, even less nowadays with serverless, managed services, low code/no code tools, and agent orchestration workflows, and with it I keep seeing development teams getting smaller.

    • davnicwil 15 hours ago
      > Natural selection will take care of them in due course

      Wonderful articulation. There's a plethora of prognostication about how AI will change everything in software and beyond, and the thing I keep thinking is: when will the talk stop and the demonstration of results commence? It doesn't seem to have yet.

      If it works, it'll work. The methods will spread and quickly be accessible to everyone, and progress will go on. That's great.

      If it doesn't work, we'll also see that in the absence of real results. And simply stating you are seeing it doesn't qualify. It must be something we can all see and use that is unavoidably, undeniably real.

    • madduci 23 hours ago
      Because that divides the field into "developers" and "software engineers". And software engineering isn't going to disappear anytime soon
      • hellojesus 23 hours ago
        Weird. I call myself a developer because I don't have an engineering degree from an ABET-accredited engineering program.

        I recognize, in some capacity, that this isn't the norm and in the US "professional engineer" is protected and not simply "engineer", but it feels akin to stolen valor to me.

        • borski 23 hours ago
          If there were a license in the US for it, I’d agree with you. But as is, if you are “doing” engineering, you’re an engineer.

          If you are a licensed engineer of some kind, you’d state that outright.

          The equivalent of stolen valor would be claiming to be a licensed software engineer; except there is no such license so it would also be fraud, misrepresentation, etc.

          (I know this is different elsewhere)

          • VonGallifrey 22 hours ago
            > If there were a license in the US for it, I’d agree with you.

            Yeah, that is basically the thing in my country. You can't call yourself an engineer without passing a test, but I can't take it because there isn't one for software engineering.

            Same thing for freelancing. Freelance jobs are defined in a list, and other jobs cannot benefit from the simplified tax rules that freelancers enjoy, but that list was written before software development was a thing.

        • traderj0e 23 hours ago
          I call myself a computer programmer unless someone is asking for my official job title (software engineer)
        • madduci 12 hours ago
          I call myself engineer because I have also an engineering degree.

          But yeah, the term is mostly misused

        • bilbo0s 23 hours ago
          I'm a software dev in the US and I never call myself "engineer" in that capacity. Always "programmer" or "developer".

          I agree. Engineers have to clear a much higher bar. Even though my career was spent in medical diagnostic software where we had to get 510k clearance, I was still keenly aware that this was a fundamentally different activity from actual engineering.

          • whstl 23 hours ago
            I'm an electrical engineer that moved to software engineering and there's a lot of commonalities between what I do now and what I did previously as an electrical engineer. The bar might seem high, but that's the only way I know how to work, honestly.

            On the other hand, with the modern division of labour in a lot of companies and with the rhetoric I see here in HN and in other places: a lot of developers are indeed not even close to being engineers.

    • beej71 18 hours ago
      > I'm getting old and I value my remaining time on the planet.

      It's an interesting sentiment. I, too, am getting old and value my remaining time on the planet, and so I code by hand every chance I get. :) Luckily I'm in a position to be able to do that.

    • sirnicolaz 6 hours ago
      I had a professor at my CS university (one of the greatest I had) who used to say (in 2008): "a developer should write no more than 5 lines of code a day"
    • ChrisRR 3 hours ago
      10% writing code. 90% reading and understanding code
    • hyperjeff 22 hours ago
      You're a "developer", I guess, but not a coder (anymore), which is what your interlocutors are probably asking about. You've migrated to a middle-manager job, not something they could probably just start doing competently. Essentially you're agreeing with their initial sentiment: that coders will be made irrelevant.
      • onethought 22 hours ago
        I think it's more nuanced. Even a "coder" spends the majority of their time not coding.
    • dev_l1x_be 22 hours ago
      And most of the time the statistical nature of LLMs results in a less creative solution that is more expensive to run and harder to maintain. LLMs at this stage are good at scaffolding, generating the boilerplate you do not want to write, and gluing things together quickly. It just makes engineers faster.
    • m463 22 hours ago

        - Compilers will make developers irrelevant
        ...
        - Compilers can write assembly language code
      
        - Compilers have -O3 now
      
      etc...

      Maybe we should rejoice. I remember dreading writing documentation, and now I would happily hand that off to AI.

      • geodel 21 hours ago
        It is indeed exciting (for you at least). The problem for most people is not that AI is spewing out code and reading documentation while developers do more interesting things. It is that companies are handing the jobs of those developers over to AI itself.

        So those ex-developers are free to do the most interesting things in the world, with the small catch of no longer relying on nice, steady paychecks every month.

    • ryandvm 1 day ago
      I dunno, man. I've been doing this for 20+ years and I think we're at a really important fork in the road where there are two possibilities.

      The first is that AI is achieving human-level expertise and capability, but since they're now being increasingly trained on their own output they are fighting an uphill battle against model collapse. In that case, perhaps AI is going to just sort of max out at "knowing everything" and maybe agentic coding is just another massive paradigm shift in a long line of technological paradigm shifts and the tooling has changed but total job market collapse is unlikely.

      The other possibility is that we're going to continue to see escalating AI capability with regard to context, information retrieval, and most importantly "cognition" (whatever that means). Maybe we overcome the challenges of model collapse. Maybe we figure out better methodologies for training that don't end up just producing a chatbot version of Stack Overflow + Wikipedia + Reddit. Maybe we actually start seeing AI create and not just recreate.

      If it's the latter, then I think engineers who think they are going to stay ahead of AI sound an awful lot like saddle makers who said "pffft, these new cars can only go 5 miles per hour."

      • golddust-gecko 1 day ago
        100% this.

        I'll also add another factor: it's become increasingly clear at our company that AI-enabled humans are getting to the bottom of the backlog of feature ideas much quicker. This makes the 'good ideas' part of the business the rate limiting step. And those are definitely not increasing with AI, beyond that generated by the AI churn itself ("let's bolt on a chat experience or an MCP!")

        So maybe the coding assistants don't get a 10x improvement any time soon, but we see engineering job market contraction because there aren't really enough good ideas to turn into code.

        • hibikir 23 hours ago
          Yes, but as the price of getting work done goes down, a lot of companies that were priced out of custom software before now can hire devs, as the value hiring a few can provide just goes up. Fewer people per product, absolutely. No more teams of 10 or 20 working on the same thing. But there's so much out there that doesn't get done at all because you'd never be able to afford it.

          Simple marginal thinking: When you lower the price of something, it gets more use cases. A rich person might not take even more flights because they are cheaper, but more people will consider flying when they wouldn't have at old prices

      • bborud 1 day ago
        You are supposing that AI achieving human-level expertise and capability is a given. I am not so sure. Right now that's much further from the truth than one might think at first glance.
      • WorldMaker 23 hours ago
        > max out at "knowing everything"

        LLMs know nothing but are great at giving the illusion that they know stuff. (It's "mansplaining as a service"; it is easier to give confident answers every time, even if they are wrong, than to program actual knowledge.) Even your first case seems wildly optimistic. The second case is a lot of "maybes" and "we don't know how but we might figure it out" that seems like a lot to bet an entire farm on, much less an entire industry of farms.

        We sure are looking at a shift in the job market, but I don't think it is a fork in the road so much as a Slow/Yield sign. Companies are signalling they are willing to act on promises and hope to cut labor costs, whether or not the results are real. I don't think anything about current AI can kill the software development industry, but I sure do think it can do a lot to make it a lot more miserable, lower wages, and artificially reduce job demand. I don't think this has anything to do with the real capabilities of today's AI and everything to do with the perception being enough of an excuse, an excuse companies were always looking for. (Just as ageism has always existed; AI is also a fresh excuse for companies to carry on aging out experience from their staff, especially people with memories long enough, or well schooled enough, to remember previous AI booms and busts.)

        But also, yeah if some magic breakthrough makes this a real "buggy whip manufacturer moment" and not just an illusion of one, I don't mind being the engineer on that side of it. There's nothing wrong about lamenting the coming death of an industry that employs a lot of good people and tries to make good products. This is HN, you celebrate the failures, learn from them, and then you pivot or you try something new. If evidence tells me to pivot then I will pivot, I'm already debating trying something entirely new, but learning from the failures can also mean respecting "what went right?" and acknowledging how many people did a lot of good, hard work despite the outcome.

        • anon84873628 21 hours ago
          I'm skeptical of LLM "reasoning" but they sure as hell know a lot. That's what the embeddings are: a giant map of semantic relationships between concepts.
          • WorldMaker 21 hours ago
            Embeddings are still mostly just vectors into n-dimensional K-means clusters. It isn't "knowing" two things are related and here's the evidence, it is guessing two things are statistically likely to be related, based on trained patterns, and running with it without evidence.

            It has no "semantic understanding" as we would define it. It's just increasingly good at winning cluster lotteries because we've increased the amount of training data to incredible heights.

            • anon84873628 12 hours ago
              Can you explain how you "know" two things are related? If I ask you the similarities between a cat and a dog, is your answer based solely on an understanding of their genetic phylogeny and how those genes express traits?

              Grouping vectors in concept space is exactly how you create semantic understanding. The proof is in how good they are at creating semantically valid text. The fact that it took massive amounts of data is irrelevant. That just shows how much knowledge is encoded in all our language. It takes humans a ton of training to know things too.
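              A toy sketch of that grouping-in-concept-space idea, with made-up 3-dimensional vectors (the words and numbers here are invented for illustration; real embeddings have hundreds or thousands of dimensions and come from a trained model):

```python
import math

# Invented 3-dimensional "embeddings" for illustration only;
# real models produce much higher-dimensional vectors.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, near 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# "cat" lands nearer to "dog" than to "car" in this toy space.
print(cosine(embeddings["cat"], embeddings["dog"]))  # ~0.99
print(cosine(embeddings["cat"], embeddings["car"]))  # ~0.30
```

              Nothing here "knows" what a cat is; the relatedness falls out of where the vectors sit relative to each other, which is the crux of the disagreement above.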

              • WorldMaker 4 minutes ago
                > is exactly how

                We don't know that. It seems like great hubris to declare we know how the human brain works. You are asking me to explain how we know things and then telling me we've already figured it out in the same breath, and that's hilarious.

                It doesn't take massive amounts of language data to train a baby human. It is almost entirely just: "Look. Here's a cat. Can you say cat? Cats go meow." "Over here, your aunt has a dog. Dogs go woof."

                There's generally a flood of non-lingual contextual data in such moments (sights, smells, sounds, movements, touch), but that also only further underscores how different LLM training is from anything we'd consider human learning. Our memories aren't just "conceptual spaces of linguistic topics"; they are complex sensory maps where a smell can remind you of the first dog you ever met. There is so much of our human knowledge that is not, and has never been, encoded in most of our languages.

                The fact that LLMs take massive amounts of linguistic data is relevant, because it shows how far we still have to go in barely scratching the surface of how the human brain seems to work. (Which again, we know only the barest details. Anyone who tells you they know 100% of how the human brain operates so far tends to be a snake oil salesman.)

          • wiseowise 21 hours ago
            Encyclopedia and Wikipedia know a lot too. Knowledge isn't much of use on its own, it's about how you use it.
            • anon84873628 12 hours ago
              Well Wikipedia can't write an essay for me, and LLMs can.

              I'd say they are quite adroit at using their knowledge.

              I mean, is Mythos finding all these vulnerabilities not evidence enough? Does AI Studio not clearly understand React and use it artfully?

          • koonsolo 21 hours ago
            I agree with you, but a big drawback is that the accuracy or confidence of their output can't be estimated.

            So they surely know a lot, but you are never sure if the info is correct or not.

            • anon84873628 12 hours ago
              They can estimate confidence based on distances in that state space.

              But yes, it gets tricky.
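              One rough way to sketch that is a softmax over negative distances; the distances and labels below are invented, and no particular model actually reports confidence this way:

```python
import math

# Hypothetical distances from a query to candidate answers in an
# embedding space (smaller = closer); numbers invented for illustration.
distances = {"answer_a": 0.2, "answer_b": 0.9, "answer_c": 1.5}

# Softmax over negative distances turns them into scores that
# sum to 1 and favor the nearest candidate.
exps = {k: math.exp(-d) for k, d in distances.items()}
total = sum(exps.values())
confidence = {k: v / total for k, v in exps.items()}

best = max(confidence, key=confidence.get)
print(best)  # answer_a
```

              The tricky part the parent alludes to: these scores are well calibrated only if the distances themselves mean what you hope they mean.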

      • koonsolo 21 hours ago
        Do you think the latter can be achieved with the LLM neural network architecture? I highly doubt it. Neural networks are very old tech, and it took us that long to get us here.

        I'm sure we'll reach AGI at some point, but looking at AI history, I don't see that coming any time soon.

    • dawnerd 21 hours ago
      The problem is people think AI can replace the 95-98% that isn't code too. That's where we end up with massive unusable codebases that no one understands.
    • codemog 18 hours ago
      Anyone read posts like this and picture someone who doesn’t actually do anything all day besides posture in meetings? Probably with a super inflated title and salary.

      I doubt this is what the OP does, but there are tons of developers like the ones I described, and they seem actively proud of not building anything and playing politics all day.

    • timedude 22 hours ago
      > Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.

      Thing is, natural selection will take care of you at the same time. Because you'll also come to rely on products they make, or services they offer, either directly or indirectly. So eventually, you too, will suffer the consequences of the enshloppification.

    • dakiol 22 hours ago
      That doesn't hold because the goal for executives is to increase revenue and the main sales pitch of Anthropic et al is to pay for agents instead of paying for engineers. That means 80% of the workforce is out no matter what. Whether or not one belongs to the remaining 20% is a different story, but obviously not all of us will be there.

      > I understand things and then apply my ability to formulate solutions

      AI is coming for that too. Don't be naive

      • varispeed 22 hours ago
        It will be interesting for governments using workers as proxy for taxing corporations.
    • jchonphoenix 22 hours ago
      You miss the major factor in your compensation: pricing pressure due to supply/demand.

      By removing all the junior engineers, you've fundamentally changed the market forces longer term and most people expect that to negatively impact you in the supply demand curve regardless of whether or not the statements you've made above are true, which they most likely are for senior engineers.

      • fragmede 22 hours ago
        In removing junior developers, leaving only senior developers, wouldn't that reduce supply, making the price go up, not down? It's been a while since Econ 101 for me though.
    • rpdillon 22 hours ago
      This is exactly it. The speed of light has not changed: we're limited by our ability to understand the system, and make decisions about what to do next. AI will speed that up, but the core work is the understanding and decision-making.

      Saying otherwise is sort of like reducing the task of writing a novel to typing.

    • fnordpiglet 22 hours ago
      Something that gets missed: computer science was a highly theory-driven discipline where people were taught how to think critically about solving complex problems. Industry complained schools weren't teaching enough programming skills, so they dumbed down the thinking part and emphasized the vocational part. Now the vocational part is virtually useless, and the grounding of theory applied to complex problems is suddenly really relevant again. Schools will take time to retool their programs and teaching staff, and two, if not three, generations of graduates will have entered a work environment that doesn't need what they learned.

      As someone 35 years into my career, I agree this is the most exciting part of it. I love programming and I do it all the time, but I do it by reading code, course-correcting, explaining how to think about the problems, and herding cats, just like working with a team of 100 engineers. But the engineers I'm working with now by and large listen, don't snipe me on perf reviews, aren't hallucinating intent based on hallway conversations with someone else, etc. This team of AI engineers can explain to me their work, mistakes, drift, etc. without ego, and if it's not always 100% correct, it's at least not maliciously so. It understands me no matter how complex the domain I reach into; in fact it understands the domain better than I do. So instead of spending a few months convincing people with little knowledge or experience that X is a good idea, I can actually discuss X, explore whether it's a good idea or not, and make a better-informed decision. I've learned more in these discussions than I've learned in decades of convincing overly egoistic juniors and managers to listen to me about something I'm an industry authority on.

      However, I see very clearly that we will need very few of the team of 100 human engineers I can leave behind in my work. Some of them will be there in a decade, but maybe fewer than 1 in 10. This is going to be a more brutal time than the dot-com bust for CS grads, and I don't think it will ever improve, mostly because we simply won't need the "my parents told me this makes money" people; just the passionate folks will remain. But even then, we face a situation where the value of any software developed is very low because so much software is being developed. It's going to turn into YouTube, where software that is paid for is a very small fraction of the quantity of software developed. We already see this in the last few months in the rate of GitHub projects created. If the value of any software created is low, the compensation of the creator will be low unless they're a very rare talent.

      • pjmlp 3 hours ago
        This is kind of country-specific; in many European countries the type of university varies in the years and content required for a degree, depending on whether it focuses on vocational or more general higher education.

        Example, university versus polytechnic.

    • vagab0nd 21 hours ago
      This is a valid perspective, but I don't think a useful one.

      Being able to produce code is a huge unlock for many non-programmers. So in a way, it doesn't matter how much time existing developers spend on coding. It's about helping anyone become a developer.

    • Insanity 18 hours ago
      Yeah coding speed was almost never the bottleneck I found. AI now does the typing and (some) of the thinking. It doesn’t figure out what needs to be built and how it all plays together (yet).
    • agnishom 2 hours ago
      This is a bit of a strawman. When people say "writing code", they don't necessarily mean [pressing the keys on the keyboard that produces the necessary bytes in a text file].
    • xhevahir 21 hours ago
      The "apply my ability" is doing a lot of work, so to speak, in the above exchange. Work that might eventually well be automated away.
    • atoav 23 hours ago
      Saying being a programmer is about writing code is a bit like saying being an artist is about drawing lines on a canvas.

      Yeah, technically drawing lines on canvases may be a very important part of being a painter, but it is hardly the core of what makes or breaks great art.

    • pasquinelli 13 hours ago
      > Natural selection will take care of them in due course.

      or you.

    • bdangubic 21 hours ago
      > Yes, about 2-5% of the time. Less now.

      I spent the 2nd half of my 30-year career fixing organizations and processes where this was the case. So many things are wrong in places where this is the case (or alternatively, you need a different job title :) )

    • insane_dreamer 23 hours ago
      What you described are senior developers and system architects.

      Junior developers spend most of their time writing code (when they're not forced to attend pointless standups, because Agile/blah/blah)

      > The developers who still think their job is about writing code will perhaps not have a job in the future.

      So you're saying the same thing everyone else is saying. SWEs won't go away, but they will be greatly reduced, because those whose job is about writing code -- junior devs -- will be replaced.

      (How will Sr Devs in the future be created? That's the question, isn't it.)

      • vineyardmike 23 hours ago
        > How will Sr Devs in the future be created?

        As an extreme example, maybe we’ll see long-running internships and trainings like doctors experience. Doctors don’t start their career until ~12+ years of prep and training.

        Pragmatically, software development has a lot of examples of teenagers making apps and college students building software companies. In the 12 years it takes for training, low-knowledge workers could be continuously vibe-coding replacements for most of the commercial software products they'd be hired to build. So I doubt we'll treat software development as a rarefied high-skill job.

    • boring-human 22 hours ago
      The true argument is about quantity - of people, not code. All qualitative arguments are missing the point.
    • bluegatty 21 hours ago
      This is maybe a bit myopic.

      Dude - look what happened in the last 2 years on software.

      Now project out another 10.

      I totally agree with you 'as of now, in the current paradigm'.

      But that could very well change.

    • izacus 23 hours ago
      Note that just because you know the job is understanding things, the manager who'll boot you and leave you without income probably doesn't. They'll just get their political cookie points for saving money by replacing you with AI.
    • coldtea 23 hours ago
      >- I understand things and then apply my ability to formulate solutions

        - Well, and AI can do part of that too, maybe more of it soon.
        - ...
        - Besides, you don't need 10 guys in a team to do that. A couple of them will do, then AI will do the coding. What will happen to the rest?
        - ...
    • jstummbillig 21 hours ago
      > Multiple times per week I have the same conversation.

      Really? I mean, good on you if it's true and you like the attention, but that sounds like an implausible amount of interest in someone and their relatively mundane profession.

    • engineer_22 17 hours ago
      Engineering in a nutshell... What did we do before computers? Halls full of draftsmen...
    • FpUser 17 hours ago
      I think similarly. To me the value of a "programmer" is not "I love Rust", "I am a React expert", etc. That "love" is for sure replaceable.
    • doctorpangloss 21 hours ago
      In my community almost all problems are political. "Problem solving ability" matters if you are in HFT, but everything else? Math can't tell you the best way to use land, educate a kid, what to pay for healthcare and how, how to prioritize biotech research, set a minimum wage, decide congressional maps, or all sorts of other stuff that I actually pay for or care a lot about. In fact I think you are totally misinterpreting what people are saying to you; you are 200% wrong: the 2-3% of your time spent coding was the valuable part, and your so-called problem solving ability rarely solved any real problems.
    • foldr 23 hours ago
      I think the future is pretty up in the air in this respect, but my guess is that AI will just lead to another shift in the set of knowledge that a 'real programmer' is expected to have. I'm old enough to remember when people would make fun of web developers for 'programming' using HTML and JavaScript. And of course, back in the day, you couldn't be a real programmer unless you wrote assembly language. In a few years' time, being able to write (as opposed to read) source code in any specific programming language will probably become a niche skill. The next generation will be able to read Python to about the same extent that I can read x86 assembly.

      Perceptions of what knowledge counts as 'low level' are constantly shifting. These days, if you write C, you're a low-level, close to the metal programmer. In the 70s, a lot of people made fun of Unix for being implemented in a high-level programming language (i.e. C) rather than assembly.

      • pjmlp 3 hours ago
        Kind of ironic, given that operating systems implemented in high-level programming languages trace back to 1958, with JOVIAL being one of the first systems programming languages.
    • keybored 1 day ago
      Pure wage workers should consider dropping the attitude about how tech progress will just make their inferiors in the same line of work be out of a job (hrmph good riddance etc.). Because this pseudo-progress could creep up on them as well.

      Then you won’t have this just world of the deserving workers at all. Just formerly deserving workers and idiot billionaires like Musk (while the robots do all of the work).

    • jmyeet 20 hours ago
      This is an example of survivor bias dressed up as general advice that doesn't consider the entire ecosystem. And we need look no further than what's happened in Hollywood, with writing in particular.

      The general progression of a Hollywood writing career is from PA (production assistant), which often starts off as a volunteer "intern" position, to writer's assistant. Assistant here usually means doing any menial task anyone wants, from fetching dry cleaning to taking a dog to a grooming appointment. When you're a writer's assistant, you will often spend time in a writer's room. You will see how the process works. You probably won't contribute anything, but you may get feedback on things you've written from whomever you're working for.

      The next step is as a staff writer. You will be paid to produce scripts and stories for a TV show, for example. That writer's room will have a head writer. On a TV show the head writer is almost always the showrunner. The showrunner is effectively the leader of the entire project and is responsible for breaking up a season into storylines and making sure those scripts make sense as a collective. They might write one or more of those scripts themselves, or maybe not. The showrunner will hire directors for each episode.

      The path from staff writer to showrunner often goes through being a producer. Producers are responsible for a lot of the logistics of filming a show. Hiring extras, finding locations, coordinating stunts and costumes and making sure the director has everything they need.

      As part of all this, in the 22 episode TV era, writers would often end up spending time on set while the show is being filmed. They'd learn from the process.

      Every part of this was necessary. Those writers on set are your future producers and showrunners.

      So what's happened in the streaming era is that writer's rooms got smaller (so-called "mini writer's rooms"), maybe only the showrunner is ever on set, the writers have stopped working by the time filming even begins, and you might only be doing 8-12 episodes. On a 22-episode season, that one job could support you. 8-12 episodes can't.

      But you see how this all breaks down when writers can no longer support themselves, they're no longer being trained to be future producers and showrunners, there's no feedback from set back to the writer's room and you end up with 3 year gaps between seasons. The only reason for all of this is because it's cheaper.

      So, you may be a staff engineer who tech leads dozens of other engineers. You're not formally a manager or director but you have a lot of influence about the entire project. But how did you get there? You started as a junior engineer being told what to do. You got to see how other leaders operated. You became responsible for more and more things. You might start fixing bugs under supervision to managing a feature then an entire project and so on.

      So what's going to happen here is (IMHO) we will have years of the software engineer space shrinking. There'll be very little entry-level hiring. Layoffs will reduce the entire workforce and there'll be a few tech leaders who hang on because they still produce value. Some of them will probably discover they don't produce enough value and they'll go too.

      But where do the future tech leaders come from in this scenario? AI is being used as an excuse to kill the entry-level pipeline and if you go around and say "git gud" [sic] then I'm sorry but you just don't understand the impact of what's happening or you don't care because, at least for now, you're simply not affected.

      You see the same thing with people who espouse the myth of meritocracy. Well, if a given workforce shrinks by 50%, half those people are, by definition, not going to survive. An individual may be able to reskill or skill up to survive, but not everyone can. And that's how people end up in Amazon warehouses. At least until they're no longer needed there either.

      • throw234234234 11 hours ago
        If the industry is to shrink this is the best way it can. Stop people entering while they are young and can pivot into something with better returns. Keep the experienced people who are older and may find it harder to pivot, and who had some "good days" to help them ride through these bad times. I've seen similar dynamics in other industries as they slowly die/move on (e.g. manufacturing, niche trades, etc). A slow decline is better than a boom/bust. If it ends up that we need software engineers later, training is an easier problem than mid-career death for the juniors in a few years time.

        Eventually the market finds a new equilibrium of staff to demand ratio. You prefer that happen sooner so people don't make bad investments of their time (e.g. studying the wrong course based on inaccurate market signals).

    • surgical_fire 1 day ago
      I normally say that I have zero concerns regarding AI in terms of employment. At most I am concerned in learning the best practices on AI usage to stay on top of things.

      Its ability to write code is alright. Sometimes it impresses me, sometimes it leaves me underwhelmed. It certainly can't be left to do things autonomously if you are responsible for its output.

      Moderately useful tool, but hellishly expensive when not being subsidized by imbeciles that dream of it undermining labor. A fool and his money should be separated anyway.

      What I am really concerned about is the incoming economic disaster being brewed. I suspect things will get very ugly pretty soon.

    • th1sisoldnews 15 hours ago
      [dead]
  • hibikir 23 hours ago
    In my experience, it's been the complete opposite. The very experienced engineers that are actually willing to use top of the line tooling are much better than they were before, including those that are over 40, and over 50.

    Part of the practical degradation of traditional programmers over time has always been concentration and deep calculation, just like in chess. The old chess player knows chess much better than a 19 year old phenom, but they cannot calculate for that many hours at the same speed as before, so their experience eventually loses to the raw calculation. Maybe at 35, or at 45, but you are just not as good. Claude Code and Codex save you the computation, while every single instinct and 2 second "intuition", which is what you build with experience, is still online.

    It's not just that it's a more fair competition: it's now unfair in the opposite direction. The senior that before could lead a team of 6 is now leading a team of agents, and reviewing their code just as before. Hell, it's easier to get the agent to change direction than most juniors around me, who will not change course with just plain, low-judgement feedback.

    • bastawhiz 14 hours ago
      > The senior that before could lead a team of 6 is now leading a team of agents, and reviewing their code just as before.

      I don't see this in practice. Senior engineers can prompt Claude just as fast as junior engineers can. Claude can debug broken code at roughly the same speed regardless of who's pressing Enter.

      In fact, I've basically stopped using agents for most of my work. It's far more valuable for me to help the junior engineers develop a better sense of what's good and right, than it is for me to sit and review Claude commits all day.

    • bel8 22 hours ago
      But when a senior can do the job of 6 coworkers, what do you suppose will happen to the coworkers?

      In farming, those who were replaced by tractors did not keep their jobs. What is different now?

      • jhrmnn 21 hours ago
        Nothing, it’s that same story again. Industrialization turned peasants to blue collar workers by mechanizing agriculture. Then blue collar workers were turned to white collar workers by mechanizing all manual labor. Now AI is coming for white collar workers by mechanizing intellectual labor. The big question is what will white collar workers turn into.
        • ahel 20 hours ago
          1. People *across generations* had to skill up. 2. software being very opaque (very differently from agriculture/mechanized labour) imo is linked to having a plethora of support roles that cannot write software but "handle the human part" and help make it readable, while spreading accountability. Hopefully with a "more readable/more standardized" software development, those [product|project|people] management roles can stop being a drag/bottleneck. (code was never the bottleneck we repeat ourselves since age immemorial)
        • calderwoodra 18 hours ago
          They'll all turn into "the managerial class". That's the higher order function that requires high judgement, low frequency work (unlike developers, data entry, designers, BDRs, compliance, etc.)
        • siriusastrebe 20 hours ago
          The globalized economy has demonstrated that a single country can supply the majority of the world's manufacturing needs, at least for a while.

          Taiwan creates most of the world's semiconductors. China makes the majority of everything else. Silicon Valley created a majority of the tech market's value.

          But there's a cap where the world has enough stuff at least in the short term, and growth slows.

          Humans only need a certain amount to survive. With populations leveling out, industry will shift from servicing human needs, to the needs of corporations and other industries. Consumers will become a minority in the future economy.

          What will corporations value in the future, that they're willing to spend on recurring human capital expenses? I think the answer will always be: the tasks that will help companies grow.

          • ElevenLathe 55 minutes ago
            Why would companies want to grow if there is no more demand by consumers? I understand that "b2b" exists, but surely all production and commerce is ultimately about satisfying (or creating) demand for consumption? Why would companies want to keep growing if there is no demand for their products?
      • malfist 15 hours ago
        Wake me up when a peer reviewed study finds even a 2X increase in senior developer productivity
        • suzzer99 14 hours ago
          I have no idea how you could even measure that w/o influencing the outcome.
          • malfist 13 hours ago
            If the claim isn't falsifiable, then you can't do science. If you can't disprove a 2X multiplier how are you going to prove a 6X? 1000X? What's stopping the next hype man from claiming a billion X multiplier?
      • bluesnowmonkey 21 hours ago
        With farming, you couldn't just start your own farm, because it requires farmland, and there's only so much of that. But those 6 software engineers can start their own companies, fire up their own team of agents. There's no limit to how many companies can exist in the world.
        • pjmlp 3 hours ago
          Pity that as everyone is unemployed, those companies don't have anyone to sell to.
        • forlorn_mammoth 21 hours ago
          and buy their own health insurance, and find their own customers, ...
        • ryandrake 21 hours ago
          "Just start your own company!" - Hacker News
          • ge96 20 hours ago
            It makes sense: go bankrupt, start another LLC.

            No, I watch/listen to a lot of entrepreneurial stuff since 2016 and I still haven't launched my own product. There's a YT channel "Starter Story" and it's like "this person makes $100K/mo, here is the template".

            It really is simple though, put a paypal button on a squarespace page and ask someone to pay it.

          • bluesnowmonkey 19 hours ago
            Yeah pretty much. Have you seen Polsia and its ilk? Maybe "trivial" would be too strong a word but... in 2026 it's not hard.

            That's my point. You couldn't tell an unemployed farm worker to go start their own farm. They probably don't have the land or substantial capital it takes. But an unemployed software engineer just doesn't need anything like that to go into a business built on AI.

            • tavavex 16 hours ago
              This may seem like an important change that's making things different today, but it's not. Farms back in the day were constrained by supply, software today is constrained by demand.

              A farmer couldn't create a farm from nothing, but if you had one, you very likely were able to sell its products - everyone always wants to eat. That is in addition to the natural benefit of being able to use the food grown there to survive all on their own, being your own guaranteed consumer.

              A software developer can create software from nothing, but who is going to buy it? There's not enough consumers and problems in the world for everyone to have their own specialized business that is able to thrive. Someone is always going to be left out. It's not like food.

              It's like as if a farmer was able to conjure up a farm for free, but there is such an abundance of farms that to sell the crops to anyone, you'd need the help of a bigger business, or try to cultivate very specialized and niche crops that are not being made by one of the mega-farms, yet.

              • ej88 1 hour ago
                i would argue its the opposite

                farming hit a ceiling because of demand

                software today is heavily, heavily constrained by supply. demand is basically infinite for actually good software that solves problems people have (and people always have problems).

              • ethagnawl 15 hours ago
                > A software developer can create software from nothing, but who is going to buy it?

                Especially when people or orgs who would have previously paid them for said software can now crap out an 80% solution in a few hours or days.

        • insane_dreamer 15 hours ago
          > There's no limit to how many companies can exist in the world.

          1) demand for software is not, and was never infinite

          2) the AI that put the 6 SWEs out of a job is the same AI that is supplying the demand; meaning there is no demand for 6 new software companies

      • koonsolo 20 hours ago
        There are jobs with limited work, and jobs with unlimited work.

        Since your farming land is limited, after the job is done, there is no more work.

        For software projects, there is always more work to do. It's an arms race between competitors. Imagine you fire developers to maintain your speed, and your competitor keeps their people to go faster. Good luck to you!

        • evenhash 17 hours ago
          There will always be more work that could be done, certainly.

          What’s uncertain, however, is whether the work that remains to be done is valuable enough that it makes sense to pay someone an engineer’s salary to do it.

          In your analogy, your competitors who keep their people could just as easily end up bankrupt.

          • koonsolo 8 hours ago
            If your competitor can deliver more features faster and with higher quality, I don't think it's them that will end up bankrupt.
        • suls 19 hours ago
          Great way of putting this. I certainly feel like this in smaller companies where each action (or inaction) has a direct consequence on the profitability.
        • insane_dreamer 15 hours ago
          > For software projects, there is always more work to do.

          sure, but there is not more work that people will pay you to do; that's the important question

          • koonsolo 8 hours ago
            Software developers get paid to crunch out as much code as possible. If you work twice as fast, do you think your employer will let you slack off a bit because they ran out of work?

            What really happens is that your bucket gets filled with more work, and now you are as busy as before. Or did someone else get fired because you were now working twice as fast as them?

            • pjmlp 3 hours ago
              Nope, in many countries software developers are office workers like everyone else, there is no salary tied to lines of code.
      • Schiendelman 22 hours ago
        They build tractors, or sell tractors, or work in agricultural research and development...
        • bel8 22 hours ago
          I highly doubt that a significant portion of farm labor became salesman or researchers. Builders? I could see that but robots already replaced a portion of those too.
          • therepanic 22 hours ago
            Well, if that's the case, then in your concept the issue isn't what will happen to the programmers, but rather to all the work in general.
          • Schiendelman 20 hours ago
            That's how it always happens. Technology advances, and there are more jobs created than were displaced. That's why technology keeps getting better AND the number of jobs keeps increasing with population.
          • kingleopold 22 hours ago
            Less job creation is almost certain for tech, but some people with high IQ get way more things done, they already do. This will spread to robots and other areas because robots are not autonomous yet, maybe it will take decade(s). But meanwhile a few operators will lead them in a more productive way? That's my bet. It's a clear, logical process with iterations. A lot of things are getting faster with AI, except energy production in some places in the world!
            • Schiendelman 6 hours ago
              Why do you think less job creation is certain? We have more jobs now than we did before tech...
      • logicchains 21 hours ago
        >what do you suppose will happen to the coworkers?

        They need to go into business for themselves, and become capital owners, who benefit from AI, not workers who are replaced by it. AI won't be able to compete at entrepreneurship unless robots are given autonomy and property rights like humans, which is quite unlikely to happen any time soon.

    • robot-wrangler 15 hours ago
      > Part of the practical degradation of traditional programmers over time has always been concentration and deep calculation, just like in chess.

      Fortunately SWEs have the architect path, which frequently rewards having lots of deep intuition even as the details of calculation continue to change. So one question that's urgent but unknowable until we get there is.. are we going to get good architects if they don't come up through the trenches? I'm not sure. All I know is that everyone has a war story about the least qualified ones that got the role without that experience.

      Since intuition is what LLMs do more than calculation, it's worth mentioning that this is true but different. They have the collective unconscious of the internet, which isn't taste that comes from good/bad experience. Besides intuition what comes to mind is "good taste".. the actual foundation of good review and really the main job of senior positions in any technical field.

      • suzzer99 14 hours ago
        A software architect who doesn't actually code is worse than useless imo, because they drag actual developers down, forcing them to implement their Powerpoint apps. I say that having worked in the architecture group at one of largest companies in the world.

      "I don't care if the app is a synchronous multi-page form with zero need for websockets. It must have them!" (because it says so on my slide)

        • mplanchard 5 hours ago
          I used to think this, and I’m sure there are plenty of bad architects who add net-negative value, but having worked on some extremely difficult systems as an IC, I would have given anything to have future me able to hand down a scalable architecture from on high, vetted by past experience and domain familiarity.

          Not having that, I developed the knowledge myself through trial and error, but we would have saved a lot of time, money, and stress doing it right the first time.

          In general, I think this kind of “architect bad” take underestimates the cost and the stress of being responsible for a system that ultimately isn’t a great fit for the domain, and needing to balance hacking another fix onto it vs migrating to what you now know is the right thing.

          • suzzer99 32 minutes ago
            Yeah I should caveat I'm talking about development where the ultimate outcome is the web or web-apps (iOS, Android, smart TVs). Everything is still changing so fast in that field, an architect who doesn't code gets out of touch very quickly IME. It's possible an architect makes more sense in other more mature fields.
        • robot-wrangler 14 hours ago
          > I don't care if the app is a synchronous multi-page form with zero need for websockets. It must have them

          Sounds exactly like the kind of intuition an LLM will have.. "best practice" that's really whatever fads/marketing hype that there is a lot of noise about, never been informed by experiments or pain.

          There was a post complaining about AI preference for god objects earlier, but the thing about stuff like that is, you could mechanically disincentivize it purely from complexity metrics or ASTs, either in training, or at the agentic layer later. I'm really much more worried about when LLMs are flooding the internet with marketing, and LLMs are consuming the marketing to determine best practice

    • insane_dreamer 15 hours ago
      > The senior that before could lead a team of 6 is now leading a team of agents

      you just confirmed the point that the author is making in the article:

      the team of 6 is no longer needed,

      and, logically, is therefore eliminated by the company

      So yes, SWE still exists as a career for some, but their numbers might be reduced by 85%.

    • QuercusMax 21 hours ago
      For me - I'm 43, and used to be an extremely productive Java/Swing developer after 15 years of experience, and I knew all my tools inside and out. But I no longer work at that company (which doesn't exist any more), and it takes me a lot longer to learn how to be effective with the new tools I'm using simply because I haven't had a decade to learn the ins and outs of the new environments I'm working with.

      So AI saves me immense amounts of time figuring out how to write proper syntax, remembering the ins and outs of unit testing frameworks, etc. If I stick around for a year or three I'm sure I'll get much much faster and learn these tools better.

    • deadbabe 20 hours ago
      Stop with the anthropomorphizing, stop playing house. There is no team of agents, they are just tools and processes running in a computer for Christ’s sake. Blow away that one senior engineer and you have nothing. Better to have a team of 6.
      • arcanemachiner 20 hours ago
        You are staring at glowing pixels on a screen, then getting angry and pressing on pieces of plastic or a slab of glass to express your emotions in response.

        You live in a world of ever-changing metaphors. Get used to it.

    • bsder 17 hours ago
      > Part of the practical degradation of traditional programmers over time has always been concentration and deep calculation, just like in chess.

      I don't know if I agree that this is the bottleneck.

      What I can agree on is that as I have aged I now simply REFUSE to learn programming knowledge that has a half-life.

      Phone programming? Nope. Front-end web programming framework? Oh, hell, no. Build system of the month? Piss right off. etc.

      AI lets me fill in that kind of programming with "acceptable" (read: super crappy but I didn't have to think about it) results because that code won't exist in 5 years anyway due to its half-life.

  • Teknoman117 23 hours ago
    > AI-users thus become less effective engineers over time, as their technical skills atrophy

    Based on my experience, I think this will prove more true than not in the long run, unfortunately.

    Professionally, I see people largely falling into two camps: those that augment their reasoning with AI, and those that replace their reasoning with AI. I’m not too worried about the former, it’s the latter for whom I’m worried.

    My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth. Maybe it’s just the new “you can’t cite Wikipedia”, but she feels that since the pandemic, there’s a notable decline in the critical thinking skills of children coming through her classes.

    We have a whole generation (or two) of kids that have grown up being told what to like, hate, believe, etc. by influencers and anonymous people on the internet. They’d already outsourced their reasoning before LLMs were a thing. Most of them don’t appear to be ready to constructively engage with a system that is designed to make them believe they are getting what they want with dubious quality.

    • steve_adams_86 17 hours ago
      I work with people who generate solutions without really looking at what was produced (group A). They click around the app or run some tests and decide if they're content with the result, then ship it. You can see Claude's fingerprints all over the PR and it's safe to assume they didn't change much of anything.

      Then I have coworkers who work through the problems, build harnesses to test the changes and verify results, work through multiple solutions, synthesize ideal outcomes into a single one, benchmark, refine, test the result thoroughly, and provide sane verification processes in the PR. This is group B.

      They're entirely different versions of using AI. One seems passable for now (look how fast we're going!), and the other is arguably a new version of what's possible (in a given time frame at least) and defines a totally new normal for software engineering that I virtually never saw outside of exceptionally professional contexts. You don't move as quickly as group A, but you still move faster and produce better software than most people have in virtually every company I've worked for.

      I see group A being totally pushed out of the field fairly quickly. LLMs let you work incredibly effectively if you care to learn how. That kind of rigor is going to be the default (group B), and might become the only way humans can still be a useful component in the loop. Group A is likely to become replaceable with frontier models before very long.

      • i_love_retros 16 hours ago
        Bro, group B might as well write the code themselves. This is getting silly.
        • rTX5CMRXIfFG 12 hours ago
          Dear god no, I actually want AI to write the code that integrates my unit-testable abstractions into the existing spaghetti. I don’t care how it does it, the thing just needs to work. The moron that wrote the mess didn’t care enough to begin with, why should I? But all work moving forward must only be made of the good stuff

          Not silly

          • le-mark 5 hours ago
            The great thing about spaghetti is you only have to be as smart as the person(s) who wrote it; there’s generally a self-enforced bound on how obtuse it can be.
    • Ancalagon 22 hours ago
      > My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth.

      I notice many of the adults in my life are doing this now as well.

    • shadow28 17 hours ago
      > Professionally, I see people largely falling into two camps: those that augment their reasoning with AI, and those that replace their reasoning with AI. I’m not too worried about the former, it’s the latter for whom I’m worried.

      Related recent article posted on HN - https://news.ycombinator.com/item?id=47913650

  • giobox 1 day ago
    Anecdotally, it feels like something materially changed in the US software hiring market at the start of this year to me. It feels like more and more businesses are taking a wait and see approach to avoid over-investing in human capital in the next few years.

    It also feels like the hiring "signal", which was always weak before, is just completely gone now, when every job you do advertise receives over 500 LLM written applications and cover letters that all look and feel the same.

    The pro-athlete comparison in this article is bit silly IMO- there are obvious physical body issues that occur with aging if you rely on your muscles etc to make money. If you compare to other fields of knowledge work, such as say law or medicine, there are loads of examples of very experienced, very sharp operators in their 40s and 50s.

    • tuesdaynight 20 hours ago
      Anecdotally as well, but I believe that US companies are hiring in big waves in India and other developing markets. I know people who went from zero contacts to daily messages from big tech recruiters in these countries. I've seen people saying that is the result of that specific US section that expired last year, but I'm a layman, so I don't have the knowledge to debate the reasons.
    • jayd16 1 day ago
      Honestly, AI doesn't feel like it's affecting hiring needs from the trenches. We don't have engineers sitting on their hands because AI wrote up everything the leadership could imagine.

      Instead, we get an economy that feels like it's on the cusp of a fall or at the very least a roller coaster. Poor tax incentives to hire. ZIRP is long gone. And the hiring managers are overrun with slop.

      But bosses are happy to say it's AI because that makes you sound in control.

      • mainmailman 22 hours ago
        Thank you, it's been all but confirmed that a lot of "AI layoffs" are due to reaching a workforce equilibrium from Covid-era overhiring.

        Saying AI for anything, good news or bad news, is a get out of jail free card for execs who want to appease shareholders.

        • conductr 19 hours ago
          Who did they overhire? Like, Covid didn’t just increase the number of people in the field. Prior to Covid there was a so-called talent shortage. The hiring that did occur was mostly net zero in aggregate. Workers got poached, grads got hired, compensation went way up. And that’s what they mean by overhired. They overpaid. They now see the benefits of hiring cheaper talent on another continent. Cheaper talent that can use the same tools as you are going to use. AI equalized a lot of talent; the US labor force doesn’t have the edge it once did. In a sense, this should have been happening at higher rates much earlier than it did, but for some reason investors saw value in paying big salaries to smart people in one part of California for a very long time. Now, the thing that should have happened is happening. And they also realize it’s not limited to California; turns out even salaries in Alabama are high compared to other parts of the world.

          There is also much more productivity. But I’m not sure it’s really a driving force yet as with the new productivity people are still just trying to do more with it which doesn’t translate to efficiency. Yet. It might once AI loses its wow factor and is just status quo. I feel like this is fast approaching but still may be a few years away.

          • Lord_Zero 14 hours ago
            Yeah, so most of my friends who are dealing with a spike in outsourced devs in their work environments are cleaning up the AI slop churned out by offshore people who are slinging code and getting the business requirements all wrong. Their jobs are now to clean up the mountains of code coming in from people who don't really get the problems they are being asked to solve.
            • ehnto 12 hours ago
              Outsourcing seems to come in cycles, where it's tried, fails due to communication issues (resulting in quality issues), then things get inhoused again.

              I do think there is some opportunity for AI to smooth out the communication aspect, but I think what we will actually see is larger volumes of poorly guided work coming through for each feature. The AI does not fix the lack of deep systems understanding which is why inhousing is always the antidote to bad outsourcing.

              I need to make this clear: there are great devs on either side of the various oceans. The issue is usually communication between two parties with naturally misaligned incentives.

              • conductr 9 hours ago
                I’ve had a lot of success in the past with the Apple approach. I design and architect locally but build it overseas. I think AI and the post-WFH office work culture really helped executives get over the hump / learn to make decisions and lead without being in the same physical space daily. Also, the communication gap feels like a largely solved problem at this point. It is incredibly common to find English speakers in this profession from any country. The trick is learning to project manage. At times, you simply give the person objective instructions of what to build and the exact rendering and color palette. Or the exact packages they can use as dependencies. But largely the world communicates together much better than during the previous wave of outsourcing.
      • le-mark 5 hours ago
        > We don't have engineers sitting on their hands because AI wrote up everything the leadership could imagine.

        The idea of leadership knowing what they want is laughable, always has been. If anything AI will expose them as the bottleneck they’ve always been.

    • Daishiman 1 day ago
      > Anecdotally, it feels like something materially changed in the US software hiring market at the start of this year to me. It feels like more and more businesses are taking a wait and see approach to avoid over-investing in human capital in the next few years.

      My guess is companies overhired in COVID and between that experience and an uncertain market they don't want to make the same mistake twice.

      • dominotw 1 day ago
        Where did the excess labor force suddenly materialize from during covid?
        • jjmarr 1 day ago
          the "learn to code" campaign began ramping up in 2013. If you started undergrad in 2016 you would've graduated right into the covid market.

          https://en.wikipedia.org/wiki/Learn_to_Code#Policy_impact

          I think the hype peaked around 2016 where Democrats were portrayed as out of touch for saying laid off coal miners could just "learn to code". By 2019 it was a cliché used to mock laid off journalists on Twitter.

        • vineyardmike 23 hours ago
          2008 had ~30k CS graduates.

          2015 had ~50k CS graduates.

          2021 had ~100k CS graduates.

          You can extrapolate the rest.

          • dominotw 23 hours ago
            thats only a fraction of all the layoffs
            • Terr_ 22 hours ago
              Someone may graduate with that degree only once in their lifetime--or not at all--and be laid off multiple times. :p

              We might be able to make a flow-comparison for "entering the field" versus "exiting the field forever", but layoffs don't really measure the latter.

        • LPisGood 1 day ago
          This is a great question that rarely gets answered. It’s partially that a ton of students went to school for computer science because they saw how much money could be made; another fraction is people that switched into software from related fields, maybe with a boot camp or something.
        • francisofascii 21 hours ago
          Anecdotally, our firm's Covid hires were just okay. The recent hires are better. So my hunch is the weaker CS candidates were able to get jobs back during Covid, while today, they are left out.
          • t-writescode 13 hours ago
            > So my hunch is the weaker CS candidates were able to get jobs back during Covid, while today, they are left out.

            Does this mean that your assertion is that "people currently unable to get a job are weak CS people"?

        • shimman 23 hours ago
          It didn't. The elites never want to admit that they have failed to efficiently use capital for the last 40 years. It's always the fault of workers that should never be trusted. Just continue trusting the elites as they ruined US manufacturing jobs, surely the same institutions won't fail the workers again!
    • nine_zeros 1 day ago
      [dead]
  • harimau777 1 day ago
    I keep reading about how AI will be fine because people can just retrain for different careers. However, I never read what those careers are or who is going to pay for retraining.

    I certainly don't have the money or time to go back to college and start a new career at the bottom.

    • adjejmxbdjdn 1 day ago
      The argument is that “that’s what always happened in the past”.

      Which is true, but it’s true as long as it’s not true.

      The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.

      At a mechanistic level, the “we have always found other jobs” argument misses that the reason we’ve always found other jobs is because humans have always had an intelligence advantage over automation. Even something as mechanical as human inputs in an assembly line was eventually dependent on the human ability to make tiny, often imperceptible, adjustments that a robot couldn’t.

      But if something approximating AGI does work out, human labor has absolutely no advantage over automation so it’s not clear why the past “automation has created more human jobs” logic should continue.

      • rayiner 1 day ago
        > Which is true, but it’s true as long as it’s not true.

        It also isn't true. The story of the last 50 years has been that technology, especially computer and communications technology, has facilitated the concentration of wealth. The skilled work got computerized, or outsourced to India or China. That left U.S. workers with service jobs where they have much lower impact on P&L and thus much less leverage.

        In my field, we used to have legal secretaries and law librarians and highly experienced paralegals. They got paid pretty well and had pretty good job security because the people who brought in the revenue interacted with them daily and relied on them. Now, big firms have computerized a lot of that work and consolidated much of the rest into centralized off-site back-office locations. Those folks who got downsized never found comparable work. They didn't, and couldn't, go work for WestLaw to maintain the new electronic tools. The law firms also held on to many of them until retirement or offered them early retirement packages, and then simply never filled the positions. It used to be a pretty solid job for someone with an associates degree, and it simply doesn't exist anymore.

        The only thing keeping the job market together is the explosion in healthcare workers. My Gen-Z brother and sister in law are both going into those fields. In a typical tertiary American city, the largest employers are the local hospital and perhaps a university or community college. Both of those get most of their revenue directly or indirectly from the government. It's not clear to me how that's sustainable.

        • vips7L 21 hours ago
          Healthcare really seems like the only safe direction anymore. They're needed and a human is still required to physically do it.
          • upupupandaway 18 hours ago
            My wife is a nurse and keeps in touch with her school professors. They said that the number of people flooding into healthcare careers is more than most colleges can handle, and is starting to cause supply glut in some roles.
            • vips7L 10 hours ago
              If I was younger it’s what I would do tbh. Engineering, CS, and anything else white collar has such a foggy future right now. No chance would I risk going into something that might have wages suppressed or mass lay offs within a decade. And honestly 3 twelve hour shifts doesn’t sound that bad compared to 5 days a week of corporate bullshit.
        • bilbo0s 23 hours ago
          >It's not clear to me how that's sustainable

          If it makes you feel better, I'm pretty sure it isn't sustainable. (But I'm not an economist so take that with a block of salt.)

          I don't think anyone has the answers. It's just some of us are honest enough to concede we have no answers, while others promote an answer that aligns best with their belief system.

          "It'll all work out."

          "It's the immigrants/blacks/jews/whatever dragging us down."

          "Nothing's going to happen and we can all continue doing the work we always have."

          "Burn the rich."

          Etc etc.

          Not a lot of serious attempts out there at even getting a handle on the issues, let alone fixing them.

          • braebo 6 hours ago
            Socialism has been the answer since I was born in the 90s and the V shaped economy took hold.

            We have enough resources for everyone to have food, shelter, education, and healthcare without the need to extract value from their labor.

            All of these problems are self inflicted wounds. The solution is for humanity to stop stabbing itself and care for itself instead.

            The solution is to let go of greed and embrace our humanity.

      • nemomarx 1 day ago
        I'm also pretty sure in the past industrial transitions, many of the people who lost their jobs at the start of the change never found better ones. It took a generation or so for new opportunities to really be found and fine tuned and you're competing for those new roles with younger people anyway.

        If ai does take a lot of white collar work, is it a lot of comfort that maybe jobs in a very different sector will be better in 20 years?

        • rayiner 1 day ago
          Did the younger people find better jobs? You used to have all these jobs for people who were maybe a bit smarter than average with good judgment. In the 1990s, the local community college used to advertise associates degrees for paralegals. That's a job that doesn't exist in the same way anymore thanks to computers. Now it's become an internship for kids with top credentials before they go to law school. Which is fine for them, but what about everyone else?

          It seems to me like all of these people are flocking now to healthcare fields. That seems totally unsustainable.

          • joe_mamba 23 hours ago
            >It seems to me like all of these people are flocking now to healthcare fields. That seems totally unsustainable.

            Why? There will never be a shortage of sick/dying people. So medical staff, and also undertakers, aren't going anywhere.

            • rayiner 22 hours ago
              Because most healthcare spending comes from tax dollars.
              • jhrmnn 21 hours ago
                Is this a different route to the universal basic income scenario?
            • deflator 22 hours ago
              My understanding is that healthcare keeps growing because the large Boomer generation is aging. When they have passed, though, we should see a corresponding slide in healthcare growth.
        • marcosdumay 23 hours ago
          Not in all past industrial transitions.

          But yes, the argument has been wrong often enough that the people still repeating it as a rule should be mocked and ashamed.

      • bobthepanda 1 day ago
        It’s also not that true, and highly dependent on a lot of factors.

        Anecdotally I see a lot of schadenfreude online about tech jobs after a decade or two of lecturing everybody from Appalachians in coal country, to Midwestern autoworkers, that they should just “learn to code.”

      • protocolture 11 hours ago
        >But if something approximating AGI does work out, human labor has absolutely no advantage over automation so it’s not clear why the past “automation has created more human jobs” logic should continue.

        Human labor still seems to be cheaper. And then there's opex/capex to be considered. If they achieve AGI and AGI isn't affordable, it's not going to displace much.

        >“we have always found other jobs”

        It really depends on what humans value and what they have to spend. No one could predict that banks would need more staff to deal with ATMs. It sucks being unable to predict past a coming automation revolution, but that doesn't mean there isn't something there.

        >human labor has absolutely no advantage over automation

        Humans are currently needed to manage automation. There's no clear reason why, even with AGI, we wouldn't want humans in most loops. That's before cost and quality.

        >The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.

        Malthus is a pretty great example. We did sort out food production. There are risks with the scale at which we have done it. Malthus wasn't able to predict technologies to aid food production, but he did forecast the need.

        When I see people complaining that the jobs don't exist I find it's the same. We don't know what the jobs are, but that doesn't mean the predictions of all of humanity being out of work are correct.

        Actually the worst part of this whole thing will be any incoming job losses being catastrophised as if the whole world is ending due to AI.

      • rurp 1 day ago
        Totally agree, and would add another way “that’s what always happened in the past” is a terribly weak argument. Things might have always worked out at the societal level so far, but very often do not at the individual level. Countless successful craftsmen have had their livelihoods ruined by technological changes and spent their remaining years impoverished. How many people funding AI would be willing to throw their own life away for the good of some future strangers that may or may not be born? I'm pretty sure the answer is <=0.
      • antisthenes 1 day ago
        > The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.

        Malthusian observation can still be true...It only has to be true once, and the only reason people say it isn't right now is due to industrial fertilizers and short memories.

        • jodrellblank 14 hours ago
          > "the only reason .. industrial fertilizers"

          canned food, tractors, combine harvesters, mechanical refrigeration, freezers, chilled trucks, ocean liners, aircraft, antibiotics for farm animals, milking machines, genetically modified crops, satellite/computer weather prediction, modern pesticides, Pasteurization, vitamin and mineral fortification...

          • antisthenes 3 minutes ago
            Yes, I appreciate the expanded list.

            I should have clarified that industrial fertilizers (and other modern Ag) come from non-renewable fossil fuel exploitation, just as 90% of the things you listed do.

            Antibiotics and GMO crops are nice too, but if you don't have the energy inputs to grow and distribute the food, it all goes tits up.

      • i_love_retros 16 hours ago
        At some point there really won't be enough jobs and massive social change will happen. It has to surely? Don't call me Shirley. In star trek WW3 had to happen before the better society emerged. I think a large scale wipe out and cleaning of the slate is inevitable. What happens in a world where people who have spent so much time and effort and money to get educated and establish a career are suddenly worthless and penniless? Nothing good I suspect!
    • rayiner 1 day ago
      It's not going to happen, just as it didn't happen for skilled industrial workers whose jobs got outsourced to China. The government will pay just enough in welfare to keep the situation manageable. Then they'll demonize you in the culture, as a Luddite, etc.
    • HumblyTossed 1 day ago
      > However, I never read what those careers are or who is going to pay for retraining.

      There aren't any careers, and if there were, you would have to pay. Corporations certainly won't, except in extremely rare situations where they have to in order to compete.

    • keybored 1 day ago
      The most future-proof “career” right now is having money. At least multiple million dollars. That’s a skill that is very much in demand.
      • RealityVoid 23 hours ago
        Whoo, deff a field where I would try breaking in.
    • bdcravens 1 day ago
      The same is true of the industries that software disrupted.
    • djeastm 1 day ago
      At least in the US, the only major non-AI growth field seems to be healthcare to deal with the swell of baby boomers living longer than people have before.

      But if we're waiting to be paid to retrain there, I wouldn't hold our collective breath.

      • srj 14 hours ago
        Probably many legally protected professions such as medicine and law will continue to be okay. There's a cap imposed on the number of residency positions that will keep those jobs scarce.
        • badc0ffee 12 hours ago
          There are plenty of law grads unable to find work in their field.
      • bluGill 23 hours ago
        Baby boomers have already started dying, though. The next generation is still going to be right there. That generation is smaller, but these people will always be dying. However, I wouldn't hold my breath if you're a young person in that field. Maybe, but maybe not.
    • ahartmetz 1 day ago
      The second part seems obvious to me: the ones who are getting retrained. If it's some kind of formal education, depending where you are, maybe the state at least for part of it.

      Education in what, though? No idea. And if there was one answer, and it's true that there will be fewer software developers, you'd likely be competing with many people for few jobs.

      • radiator 20 hours ago
        Start with the basics, perhaps? Languages, Mathematics, Geography, Economy, History.
    • kypro 1 day ago
      Also, it's not necessarily true that there will be other great careers available. This seems to just be an assumption people are making.

      Of course, there are jobs that will still require human labour for some time yet, but in reality a lot of jobs that require physical human labour are now done in other parts of the world where labour is cheaper.

      Those which cannot be exported like plumbing or waitressing only have limited demand. You can't take 50% of the current white-collar workforce and dump them in these careers and expect them to easily find work or receive a decent wage. The demand simply does not exist.

      Additionally, at the same time as white-collar jobs are being lost, an increasing number of "low-skill" manual labour jobs are also being automated. Self-checkout machines mean it's harder to get work in retail, robotaxis and drone delivery will make it harder to find work in delivery and logistics, and robots in warehouses will make it harder to find warehouse jobs.

      It seems to me there is an implicit assumption that AI will create a bunch of new well-paid jobs that employers need humans for (which means AI cannot do them) and which cannot be exported abroad for cheaper. What well-paid jobs would even fit the category of being immune to both AI and outsourcing? Are we all going to be really well paid cleaners or something? It makes no sense.

      A lot of the advice we're seeing today about retraining as a construction worker or plumber seems to assume that there's unlimited demand for this labour, which there simply is not. And even if hypothetically there were about to be a huge increase in demand for construction workers, it would take years to have the machinery, supply chain and infrastructure in place to support millions of people entering construction.

      The most likely scenario is that people will lose their jobs and will be stuck in an endless race to the bottom fighting for the limited number of jobs that are left in the domestic economy while everything else is either outsourced or done by robots and AI.

      The better advice is to start preparing for this reality. Do not assume the government will or can protect you. When wealth concentrates, corruption becomes almost inevitable, and politicians have families to look after too.

      Please take this seriously. Even if I'm wrong, it's better to prepare for the worst rather than to assume everything will be fine and you'll be able to retrain into a new well-paid career.

      • jhrmnn 20 hours ago
        When no work is safe from mechanization, surely the value of labor relative to capital must fall, and the societal pressure for redistribution will rise. The ultimate outcome of technological progress is either extreme inequality or massive redistribution.
        • tavavex 18 hours ago
          I feel like by the time the societal pressure starts rising in any major way, it won't matter anymore. By that point, the people who will profit will become gods with a prepared response for every action the lower classes could take.
          • keybored 8 hours ago
            I feel like this subthread is full of passive acceptance of defeat.

            How many years did it take for HN to go from “worker unions bad because I make a lot of money” to “Of course there is a capitalist/worker war going on and in fact the capitalists might become technology-assisted gods”? I guess we’re all vulgar marxists when we think it’s too late to fight back.

            Where did the can-do hustler attitude go? Only applicable for looking out for number one?

            But now that we are all, at least the non-millionaires here, in the same boat? Curious sudden onset of catatonia

            -----

            To countermand the insane negativity for a second: LLMs are brains in a vat and therefore cannot by themselves do any tasks that goes beyond simulating keyboard & mouse in front of a computer screen.

            • tavavex 1 hour ago
              Sometimes it almost feels like there's more than one person on HN holding more than one opinion.

              I'm from the younger, lower end of the job market, far from the average HN user. You could accuse me of being excessively cynical. That's what happens when the systems you assumed would be there tell you to get bent the moment it's your turn to enter the job market. There's very few people my age I know who aren't angry at something. What you can't accuse me of is hypocrisy. I was never in the ranks of the SV money-above-all billionaire-loving population that's so common here, and I never wanted to join them. All I wanted was a career in something I enjoyed doing and an ability to sustain myself. I really don't understand why I am the one person you decided to turn your snarky, enraged tirade towards.

              And it doesn't matter that LLMs can't act on their own. You don't need to literally replace every worker to the last one to get to the place I was talking about. A productivity boost that leads to 20-30% of all white collar workers losing their job would be horrific, and it would only get worse from there. That seems entirely possible to me. Nothing I'm talking about is particularly insane, it's all just a continuation of what's already been happening, with the same people ruling over it.

        • bluefirebrand 17 hours ago
          > the societal pressure on redistribution will rise.

          And then we'll see if the wealthy and powerful people will just send automated killer drones after the poors, because culling us would be more preferable than sharing their wealth

          • keybored 8 hours ago
            Good for them that the poors on this board are too smart/polite/civilized to become NeoLuddites and sabotage their efforts.
      • tavavex 18 hours ago
        > A lot of the advice we're seeing today about retraining in construction worker or plumber seems to assume that there's an unlimited demand for this labour which there simply is not.

        I think that most advice like this is individual - not systemic. We all won't fit into the remaining fields when white collar work gets less demand, but someone who's just pivoting now still could. There's no systemic solution that will actually be implemented. The only advice left to give to people is to not be too late. There's only so many people that can be trained to do this range of work (has a physical component that is difficult to automate + can only be done here + has an education/certification moat) just based on spots in educational programs, and they'd probably be better off getting on that sooner than later if they think that their current job is going to be in the crosshairs soon.

      • srj 14 hours ago
        One way it seems different today is that wealth inequality is already quite high in the US. Even if AI delivers massive productivity gains the windfall is only likely to be more concentrated. When manufacturing was outsourced at least median housing wasn't 7x median income.

        I'm curious what you mean by prepare, to have savings?

      • whodidntante 22 hours ago
        +1

          50% of the workforce was in farming near the end of the 1800s; today, 2%.

          40% of the workforce was in manufacturing in the early-to-mid 1900s; today, 8%.

          60+% of the current workforce is white collar. What will it be in 20 years?

        LLMs are only a couple of years old; we have no idea where this will go. Maybe it will be a big hallucination, maybe we are looking at the very early version of farm and manufacturing machines.

        The ENIAC was larger than a person, we now have watches that are significantly more powerful. Maybe in the future, your Apple watch will have more compute than several racks of H100's.

        When they came for the farmers, no one else cared - everyone got cheap and bountiful food. When they came for the manufacturers, no one else cared - everyone got cheap and bountiful products. Now they are coming for the white collar workers, and their highly paid laptop lifestyles.

        Who is left to care ? The billionaires ?

        • keybored 7 hours ago
          There’s over a century of history of workers “caring” and fighting back.
          • whodidntante 6 hours ago
            Yes, that is true. However, each time this happens, those people are eventually labelled as getting in the way of progress, luddites, etc, and it is easy for those that benefit (cheap food, cheap products) to simply go on with their "better" lives.

            I don't think there was a lot of support, outside the farming industry, to prevent machines from taking over peoples jobs even though 1/2 of the workforce were farmers. Similar for manufacturing.

            And I do not believe the answer to those changes was to do farming or manufacturing by "hand". I also do not believe that the answer to AI is to not use it.

            But, in the same way, I also do not believe that people will really care that call center workers will be replaced, that designers will be replaced, that many programmers will be replaced, that most admins and middle managers (anyone who pushes paperwork, creates reports, that just communicate and report on work "progress") will be replaced.

            I also believe that "these workers will get retrained and find better jobs" is a fallacy - because it has not happened in the past. Farmers may have done this, but those who lost their well paid manufacturing jobs mostly lost their place in society. Blue collar workers in Detroit are not today's laptop warriors.

            We will be undergoing a fundamental change in how society functions. It is quite possible the end result will be good, or at least looked at as good. But there was a lot of pain in the previous transitions, and the distribution of "winners" and "losers" will undergo significant change.

      • Mezzie 20 hours ago
        > Are we all going to be really well paid cleaners or something?

        I work for a corporation that includes cleaning brands and I've got bad news...

    • insane_dreamer 23 hours ago
      > I never read what those careers are

      Exactly. I have yet to read a single logically sound argument that even gives a hint of what those professions/jobs might be (remember, they have to be plentiful enough to employ large numbers of people, so "I quit my corporate job and making more as a TikTok influencer" doesn't count). Remember that a new profession has to open up new hitherto unknown revenue streams otherwise there are no companies who will pay you.

    • realharo 1 day ago
      Yeah, that's just a copium answer from people who simply want to hand wave away the issue instead of admitting they have no good answers.

      Like a politician who's asked about this in a town hall, but thinks that "our plan is to do absolutely nothing" doesn't sound very appealing.

    • agentultra 1 day ago
      This is the story that's been written since the Luddite revolts, as far as I know. The successors in that case were the capitalists who spent a significant amount of time and money convincing the constabulary and political figures to side with them. People were shot and jailed in the worst cases. The best case, workers were left without work or sent off to work-houses where they became indentured servants to the state.

      The last work-house closed in the 1930s.

      That all started not because people were afraid jobs were going to be replaced by the new loom. People had been using looms for centuries. They were protesting working conditions: low wages, lack of social protections when people were let go, child labor, work houses, etc. There were no labor laws at the time to protect workers... but there were these valuable new machines that the capital owners valued greatly. The machines were destroyed as leverage: a threat.

      Since the capitalists ultimately "won" that conflict it has been written, by technocrats, that technological progress is virtuous and that while workers will initially be displaced, the benefit to society will be enough that those displaced will find productive work elsewhere.

      But I think even capitalist economists such as Keynes found the idea a bit preposterous. He wrote about how the gains in productivity from technological advances aren't being distributed back to workers: we're not working less, we're working more than ever. While it isn't about displacement of workers, it is displacement of value and that tends to go hand in hand.

      I think asking, "Where do I go?" is a valid question. One that workers have been trying to ask since the Luddites at least. Unfortunately I think it's one that gets brushed under the rug. There doesn't seem to be much political will to provide systems that would make losing a job a non-issue and work optional.

      That would give us the most leverage. If I didn't have to work in order to live I could leave a job or get displaced by the latest technological advancement. But I could retrain into anything I wanted and rejoin the work force when I was good and ready. I wouldn't have to risk losing my house, skipping meals, live without insurance, etc.

    • dominotw 1 day ago
      AI cannot create art by itself.
      • dugidugout 20 hours ago
        Hopefully the capital owners care to tell the difference.
    • phyzix5761 1 day ago
      I think the idea of being an employee is fundamentally changing. Not saying it's good or bad, but it's shifting to a more entrepreneurial phase where people have to step out of their 9-to-5s and find ways to deliver value that others want to pay for.

      We saw this pre-ai with uber and door dash. I think as AI automation dies down and most companies are competing at a near optimal level with the new tools we'll need humans again in more traditional roles to build the next generation of innovations. And then the whole cycle will repeat.

      • mancerayder 23 hours ago
        That's a lot of SV-speak. How exactly do people step into an entrepreneurial phase? They're at work in corporate settings with fixed, defined roles. Most workplaces are not many-hat-wearing startup environments, but restricted roles with deliverables, deadlines, meetings, etc. Which leaves only out-of-hours time for "entrepreneurship", whatever that is.

        Github project work on the weekends? That's not possible for most people in their mature/family years (or shouldn't be necessary - what about living life??)

        • ProfessorLayton 21 hours ago
          >That's a lot of SV-speak. How exactly do people step into an entrepreneurial phase?

          Almost half of U.S. employment is from small businesses (250 or fewer employees). That means there's a lot of entrepreneurship happening already. I have lots of family running their own small businesses (trades), and it's a lot of work and doesn't necessarily pay as well as a cushy corporate job, but what I'm trying to say is lots of people can and do start their own enterprise.

          Yes, lots of them will fail at running their own business, but it's not like corporate jobs are getting any safer either.

          • mancerayder 15 hours ago
            And what ARE those businesses? There are delis everywhere; should I open up a 24/7 bodega? Start selling knives I learn to forge on the Internet and make YouTube videos about knife making? That's going to cover our financial commitments, is it?

            Here's another problem - how do you get healthcare without a corporate group plan? In my state the ACA offers expensive in-network only plans where you wait months for appointments.

        • Terr_ 19 hours ago
          > That's a lot of SV-speak. How exactly do people step into an entrepreneurial phase?

          Oh, you simply decide to use grit and willpower to pull yourself up by your bootstraps, placing some calls to people you met at certain parties aided by a small 6 digit loan from your family. /s

          My default expectation of "employees should be more entrepreneurial" is that it's a kind of victim-blaming. I'm especially cynical if the concept is getting introduced by groups that spent the last several decades putting up barriers to entry, drafting non-compete contracts, capturing regulators, and basically pulling up the ladders behind them.

          • mancerayder 15 hours ago
            It's comfort (for the already-made men) in the face of a possibly dystopic economy where tech jobs become very rare. A scenario which could play out as we compete harder with each other for the pickings.
      • lowmagnet 1 day ago
        Uber and Doordash are both examples of abusing workers and their resources to externalize costs on the worker.
        • phyzix5761 23 hours ago
          What about people who have been out of work for a year and all they can do right now is deliver for Uber and Doordash so they can make rent and put some food on the table?

          Is it ideal working conditions? No, but it's better than nothing; you can set your own hours, and you can leave when the next opportunity comes.

          • RealityVoid 23 hours ago
            His point is that it's not entrepreneurship, it's employment.
      • RealityVoid 23 hours ago
        > We saw this pre-ai with uber and door dash.

        Oh, yeah? Did the Uber drivers and door dashers accrue the surplus value?

      • Ifkaluva 14 hours ago
        lol what? Are you seriously suggesting the miseries and abuses of the gig economy are a suitable comparison for the future of displaced SWEs?
      • vips7L 21 hours ago
        How the hell is uber or door dash entrepreneurial?
  • joduplessis 1 day ago
    I really wish seemingly intelligent people would stop using the abstraction analogy (like the article does). The key word is: determinism. Every level of abstraction (incl. power tools, C, etc.) added a deterministic layer you can rely on to more effectively do whatever it is that you're doing - same result, every time. LLMs use natural language to describe programming, and the result is varied at the very best (hence agents, so we can brute-force the result instead). I think the real moat is becoming the person who can actually still program.
    • phpnode 1 day ago
      People always say this but it’s misguided imo. Yes LLMs are not deterministic, but that’s totally irrelevant. You aren’t executing the LLMs output directly, you’re using the LLM to produce an artefact once that is then executed deterministically. A spec gets turned into code once. Editing the spec can cause the code to be updated but it’s not recreating the whole program each time, so why does determinism matter?
      • michaelrpeskin 23 hours ago
        In my experience, I'm using LLMs as my abstraction over "junior engineer". A junior engineer isn't deterministic either. I find that if you treat the LLM output like a person's output, you're good. Or at least in my projects it's been very successful. I don't have it generate more code than I can review, and if I give it a snippet to fix and it ends up rewriting the whole thing like an ambitious engineer would, I tell it to start over and make minimal changes.

        I guess I'm not spun up about the determinism because I've been working at the "treat it like a person" level more than the "treat it like a compiler" level.

        To me, it's really like an engineer who knows the docs and has a good memory, rather than an infallible code generator.

        I work at a small company, so we don't have tons of processes in place, but I imagine that if you already had huge "standards" docs that engineers need to follow, then giving the LLM those standards would make things even better.

        • skydhash 22 hours ago
          The thing is, you can quickly teach a junior how to respect a specification contract, so that with very minimal oversight you get the wanted implementation. And after a few years (or months), the communication overhead gets shorter. What would have been multiple rounds of meetings and review sessions become a short email and one or two demos.
          • QuercusMax 21 hours ago
            What I've been learning as a 20% "harness engineer" is that in order to get the models to "learn" you need to add both documentation and static checks, as well as often custom skills. My main project at work has issues where the AI will often get super confused and step on itself trying to run tests - so the answer is writing better docs (AGENTS.md) and providing deterministic tools to work with the projects.

            Large software projects (I'm thinking google3) often have large amounts of both of those things, as they're always getting new developers joining.
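            A minimal illustration of the kind of thing the parent means (the AGENTS.md file name comes from the comment above; the contents here are a purely made-up sketch, not anyone's actual setup):

```markdown
# AGENTS.md (illustrative sketch)

## Running tests
- Do NOT invoke the test runner directly; use `./scripts/test.sh <path>`,
  which sets up the hermetic environment the tests expect.
- Tests that touch the database require `./scripts/db-up.sh` first.

## Style
- All new modules need type annotations; CI runs the strict type checker.
- Never hand-edit generated files under `gen/`.
```

            The point being that the deterministic wrappers (the test script, the type checker) give the model a checkable contract instead of letting it guess.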

      • AstroBen 1 day ago
        If it's not deterministic you can never fully trust it. In a deterministic abstraction I don't need to audit the lower levels.
        • ex-aws-dude 22 hours ago
          Who said you need to trust it? Reviewing code is still way faster than writing code.
          • bluefirebrand 22 hours ago
            > Reviewing code is still way faster than writing code.

            Writing code results in a much better understanding of the code than reviewing it

            In fact I would say that in large complex codebases, developing the same understanding of what the code is doing might actually take longer than writing it from scratch would have

            • esafak 13 hours ago
              But it's written to your spec; there should be no surprises!
              • bluefirebrand 12 hours ago
                That's the fun part! The surprise is that it's actually not written to your spec at all! It just kinda smells similar to your spec
        • HDThoreaun 18 hours ago
          You fully trust your coworkers?
          • nozzlegear 12 hours ago
            If you don't, you may want to find a different company to work for.
      • mrbananagrabber 23 hours ago
        this is the way LLMs _should_ be used, as an assistant to create reliable, deterministic code. and honestly, they're fantastic when used this way. build the thing you need with the LLM, then put the LLM away.

        but in practice, the current obsession with agents means people are creating applications that depend entirely on sending requests to LLMs for their core functionality. which means abandoning the whole idea of deterministic software in favor of just praying that all of the prompts you put around those API requests will lead to the right result.

      • udave 22 hours ago
        try distributing this spec amongst your team members, ask each of them to drive it to completion. no follow up edits. deploy to individual environments and then run a rigorous test suite against all of the deployments. see if all of them behave the same way.
        • phpnode 20 hours ago
          They won't. So what? This is not how specs are used, no one is saying that they are a replacement for source code.
      • ex-aws-dude 22 hours ago
        Exactly, the argument makes sense if it's about inference at runtime

        But that's not the case here

      • knivets 23 hours ago
        how do you know the artifact is correct?
    • NiloCK 23 hours ago
      I grant that there's a definition of abstraction that LLMs don't fall into. But people describing LLMs as another abstraction layer aren't all misunderstanding this. Instead, they are using the term ... more abstractly.

      EG: How did Mark Zuckerberg make software five years ago?

      He's as capable of opening up an editor as I am, but circumstance had offered him a different interface in terms of human resources. Instead of the editor, he interacts with those humans, who produced the software. This layer between him and the built systems is an abstraction, deterministic or not.

      Today, you and I have a broader delegation mandate over many tasks than we did a few years ago.

      • indiosmo 19 hours ago
        The way I frame this is that LLMs are not replacing the tools, which are deterministic. They are replacing the humans, who are themselves non-deterministic, as in your Zuckerberg example.
    • qnleigh 23 hours ago
      LLMs don't have to achieve perfect reliability to replace lots of work. They just have to reach the balance of reliability and cost suitable for a given task. This will depend on the task.
    • ansk 22 hours ago
      I see what you're getting at, but determinism isn't the right word either. LLMs are fundamentally deterministic -- they are pure functions which output text as a function of the input text and the network parameters[1]. Depending on your views on free will, it could be effectively argued that humans are deterministic as well.

      The concept you're touching on is the idea that LLMs (and humans) are functions which are inscrutable. Their behavior cannot be distilled into a series of logical steps that you can fit in your head, there are no invariants which neatly decompose their complexity into a few interpretable states, and the input and output spaces are unstructured, ambiguous, underspecified, and essentially infinite. This makes them just about impossible to reason about or compose using the same strategies and analysis we apply to traditional programs.

      [1] Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice. I can't imagine there are many workflows which feed an LLM the exact same prompt multiple times and rely on the output having some statistical distribution. In fact, even if you wanted this you may just end up getting a cached response.
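      A toy sketch of the footnote's point in Python (toy logits, not a real model): greedy (temperature-0) decoding is a pure function of its input, and even sampling becomes reproducible once the seed is pinned.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_decode(logits):
    # Temperature-0 / argmax decoding: same logits in, same token out, always.
    return max(range(len(logits)), key=lambda i: logits[i])

def sampled_decode(logits, rng):
    # Sampling injects entropy, but it is reproducible once the seed is fixed.
    probs = softmax(logits)
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [1.2, 3.4, 0.5, 2.8]
assert greedy_decode(logits) == greedy_decode(logits)  # deterministic by construction
assert sampled_decode(logits, random.Random(7)) == sampled_decode(logits, random.Random(7))  # deterministic once seeded
```

      (Real serving stacks add complications like non-associative floating-point reductions under parallelism, so bit-exact determinism takes more care than this, but the principle is the same.)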

      • udave 22 hours ago
        Let's be real: if you and I both ask Claude to generate a feature on the same project, what are the chances that it spits out 100% identical code? But if we build the project using a Dockerfile, we will get the same binary and the same image. Products around LLMs are non-deterministic, unlike compilers.
        • ansk 21 hours ago
          I can assure you that a fully deterministic and equally effective claude is possible to build. And yes, that would mean identical prompts would yield 100% identical output 100% of the time. It would still make the occasional logical or factual error, but it would do so deterministically. Would this solve any of the problems with building reliable programs using LLMs?
        • pzo 21 hours ago
          it's non-deterministic because we chose it to be, by setting a higher 'temperature'. I bet if you run an open-weights model with temperature 0 on the same device, with the same prompt and parallelism turned off, you will get a much more deterministic result (excluding some floating-point operations).
      • skydhash 22 hours ago
        > Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice

        Everyone added /dev/random to their offerings, so all the LLM tools for coding are non-deterministic.

    • careers_terria 16 hours ago
      Are other layers truly deterministic though? Do I know for sure whether that object has been garbage collected or not? How many cycles this instruction will take to run?
    • arecsu 22 hours ago
      There's something to be said about the fact that the very people who would use deterministic layers to build stuff are... non-deterministic. We, as humans, have our set of pros and cons, wins and failures. Even the most brilliant coders on earth make mistakes from time to time. I rarely see this accounted for in critiques of LLMs, as if we humans were not flawed in our own ways, with a huge degree of variance across individuals. Good and bad code existed prior to LLMs. If you're hiring someone to write code, you're basically using heuristics to trust that this person will do a good job; nothing is ever guaranteed 100% deterministically. Without overthinking it, LLMs will sometimes produce better code and manage systems better than some people who are earning salaries out there. Possibly sub-par developers, to be precise, but professionals in the meaning of the word (people being paid to do the work).

      At the end of the day, what matters is how willing the person behind a given task is to deliver quality work, how transparent and honest they are, how well they understand requirements, and whether they're a pleasure to work with alongside other humans. AI/LLMs are just extra tools for them. As crazy as it might sound, not so many people are willing to push boundaries and deliver great work. That is what makes the difference.

    • danaw 1 day ago
      every time a person uses the abstraction argument, an angel dies
  • kixiQu 23 hours ago
    > If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”.

    My parents were both construction workers. There is an understanding that you cannot lift heavy objects forever. You stop lifting objects and move to being a foreman, a supervisor... and if you are uncomfortable learning to get others to do work that you previously did yourself, you burn out your body entirely and the consequences are horrible.

    This is factual reality, but it is also a parable that has been important for me to internalize about delegation in my own career. It is not irrelevant to AI use, but I don't think it slots onto it totally as neatly.

    • maerF0x0 18 hours ago
      Also worth noting that, so long as it's reasonable, lifting heavy objects makes you stronger, whereas the current hypothesis on using AI is approximately the opposite.
    • ASalazarMX 23 hours ago
      Software developers are more architects than plain programmers. You wouldn't make an architect lift heavy things; you want them to design how those heavy things are used.
  • torben-friis 1 day ago
    Unless I'm missing something, there's an obvious logic issue here.

    If we truly need to sacrifice our skill to be productive by using LLMs that atrophy us, then the only devs that have a limited lifespan are us. The next ones won't have a skillset to atrophy since they won't have built it through manual work.

    Also, I hereby propose to publicly ban the "LLMs generating code are like compilers generating machine code" analogy, it's getting old to reargue the same idea time after time.

    • huani 1 day ago
      why is the LLM-compiler analogy flawed? Is it only because LLM output is non deterministic?
      • torben-friis 1 day ago
        Besides probabilistic and non-reproducible output, programming languages are designed to be unambiguous and explicit, and human language isn't.

        for(){} is normally either undefined or has one specific meaning. "Then iterate and do x" might mean many subtly different things.

        Most programmers never deal with a compiler bug in their whole career, and can dismiss the possibility. For LLMs it would be hard to even define what a "compiler bug" would be since there is no specification for English.

        Then there's the fact that models generally don't guarantee anything at all. Sonnet can change under your feet.

        Models also degrade as the context window gets larger. Compilers handle one line just the same as 20.

        I could keep going, there's so many fundamental differences in the process that the analogy only serves to provide a false feeling of security.

      • GrinningFool 1 day ago
        Because you don't have to coax, trick, or guide your compiler into doing the right thing.
        • mwigdahl 1 day ago
          Clearly you are not a C++ programmer. :)
          • marcosdumay 23 hours ago
            Maybe C++ compilers would benefit from asking an LLM to rewrite their messages in a way that makes the point clearer...

            But the GP stands.

      • hyper_frog 1 day ago
        I don't think it's just that. There's also the fact that if you're working with C or C++ or any systems-level language, you typically know how to read assembly because you've stumbled upon it for some reason, and if you're writing low-level programs (which is typically what these languages are used for) you will definitely at some point need to know how to read assembly and maybe even write some. But with LLMs the entire field has shifted. You don't need to know anything to write any language, and you don't even need high-level computer science knowledge nowadays to get something that works, and the world increasingly just seems to want something that works.
      • layer8 18 hours ago
        You can reason with precision about how source code will behave once compiled, and how changes to the code will change the behavior. You can't reason with precision about AI prompts in that way. This is about more than just determinism, because there are deterministic systems where you still can't reason usefully about the input-output relationship.
      • TremendousJudge 22 hours ago
        I have mentioned it several times lately, but if the analogy was correct, people would be committing prompts and not code. High-level source code gets committed, binaries don't. If prompts were really "just a higher level of abstraction", then there wouldn't be a need for saving the code. Or at least you'd see people publish their prompts and chat history alongside the code.
      • bbg2401 1 day ago
        Compiler: "Here is an exact program. Translate it while preserving its meaning."

        LLM code generation: "Here is an intent/specification. Invent code that hopefully satisfies it."

        Does the compiler analogy provide value under those terms? I don't think it does. In fact, I think it provides negative value.

        We don't need to use tortured analogies to express excitement over these tools.

  • fooker 1 day ago
    If by software engineering, one means typing code character by character into a text editor, sure it's going to be difficult to find someone to pay you for it.

    If you mean creating software, well we are creating more software than ever before and the definition of what software is has never been so diverse. I can see many different careers branching off from here.

    • xtracto 1 day ago
      We are experiencing what Civil Engineers experienced going from slide rules to calculators. Or electrical engineers going from manual circuit path drawing to CAD tools.

      The interesting thing to me is that, Software Engineering will have to evolve. Processes and tools will have to evolve, as they had evolved through the years.

      When I was finishing university in 2004, we learned about the "software crisis" era, the waterfall development process, and how new "iterative methods" were starting.

      We learned about how spaghetti code gave way to Pascal/C structured programming, which gave way to OOP.

      Engineering methods also evolved, with UML being one infamous language, but also formal methods such as Z language for formal verification; or ABC or cyclomatic complexity measurements of software complexity.

      Which brings me to today: now that computers are writing MOST of the code, the value of current languages and software dev processes is decreasing. Programming languages are made for people (otherwise we would have kept writing in assembler). So now we have to change the abstractions we use to communicate our intent to the computers, and to verify that the final instructions are doing what we wanted.

      I'm very interested to see these new abstractions. I even believe that, given that all the small details of coding will be fully automated, MAYBE we will finally see more Engineering (real engineering) rigor in the Software Engineering profession. There will still be coders, the same way there are non-engineers building and modifying houses (common in Mexico, at least).

      • hnthrowaway0315 1 day ago
        Calculators and CAD tools do not give you non-deterministic answers. Both of them simply automate part of the manual work without creating anything "new". I haven't used CAD tools, but I did use some level editors such as TrenchBroom -- I think what gets automated is the 3D shapes you want to make. E.g. back in '96, when id Software was creating Quake, there were very few pre-drawn shapes in the level editor and they had to make the blocks themselves, so it was very difficult and time-consuming to make complex shapes such as curved walls and tunnels. Then better tools were invented, and now it is much easier to create a complex shape. But you don't type "a Quake level with theme A, and blah blah" and then get a more or less working level -- that is what AI is doing right now.

        I think the right analogy to calculators and CAD tools, is IDE with Intellisense for SWE -- instead of typing code one char by one char, we can tab to automate some part of it.

        But I agree with your conclusion -- SWE is changing, whether we like it or not. We need to adapt, or find a niche and grit it out to retirement.

        • fooker 23 hours ago
          > non-deterministic answers

          It doesn't make sense to get hung up on this aspect of LLMs. We prefer non-deterministic sampling so far because it tends to work slightly better, even though it is completely possible to ask for a temperature=0 deterministic answer.

          With more scale and research, at some point you'll get results that are both useful and deterministic, if it's not already the case.

          • shimman 23 hours ago
            It absolutely makes sense to get "hung up" on something when it comes to planning society around it, JFC. I'm with the other commenter: your understanding of these tools should be called into question, since you seem to be reading the tea leaves of statistical noise.
    • jerf 1 day ago
      In 2020, there are two companies that are competitors with each other. They each employ 100 programmers to do their job, and we all know how those organizations operated; perpetually behind, each feature added generating yet more possible future features, we've all lived it and are still largely living it today.

      In 2026, both companies decide that AI can accelerate their developers by a factor of 10x. I'm not asserting that's reality, it's just a nice round number.

      Company 1 fires 90 of their programmers and does the same work with 10.

      Company 2 keeps all their programmers and does ten times the work they used to do, and maybe ends up hiring more.

      Who wins in the market?

      Of course the answer is "it depends" because it always is but I would say the winning space for Company 1 is substantially smaller than Company 2. They need a very precise combination of market circumstances. One that is not so precise that it doesn't exist, but it's a risky bet that you're in one of the exceptions.

      In the time when the acceleration is occurring and we haven't settled in to the new reality yet the Company 1 answer seems superficially appealing to the bean counters, but it only takes one defector in a given market to go with Company 2's solution to force the entire rest of their industry to follow suit to compete properly.

      The value generation by one programmer that can be possibly captured by that programmer's salary is probably not going down in the medium and long term either.

      • rayiner 1 day ago
        Your hypothetical ignores the distribution of programmer talent. Company 1 can pay more per person and hire 10x programmers, who can then leverage AI to produce the same or more as Company 2.

        We have seen this in other knowledge industries. U.S. legal sector job count is about the same today as it was 20 years ago. But billing rates have exploded and revenues in the 200 largest firms have increased more than 50% after adjusting for inflation. Higher-end law firms have leveraged technology to be able to service much more of the demand and push out smaller regional competitors.

        • fooker 1 day ago
          I think paying significantly more was a very localized thing that happened for AI researchers who were familiar with the alchemy that made GPT4 suddenly work much better than anything else seen before.
        • jerf 23 hours ago
          Of course it does. It ignores a lot of things. Mostly I just want to present the view that things aren't entirely hopeless and the entire industry is doomed to contract by 90% because of AI. Your legal system point also fits in precisely with what I'm trying to convey, just in a different direction.
      • suzzer99 14 hours ago
        The problem with this idea of feature backlog, at least everywhere I've worked, is what you really have is an idea backlog. You have some things people want to build, and maybe a business analyst or product owner has done a first pass. But they're far from the rubber-meets-the-road part where someone needs to write out a detailed spec based on the exact current state of the app/system, and the developers start asking questions as they run into unspecified edge cases--all of which usually means sitting down with the client again over a series of meetings.

        AI coding agents help speed up none of that. Meanwhile the developers are either sitting in meetings or working on something else while the product owners hash it out with the client.

        And sometimes, after all that, you realize the client can get 95% of what they're asking for if you just tweak some existing feature. Everyone's mostly happy, the app stays less complex.

        • jerf 3 hours ago
          An incorrect "reply" link, I think?
      • boredatoms 1 day ago
        This resonates strongly with me, in that all that extra margin has to be spent on something other than dividends
    • harimau777 1 day ago
      My concern would be whether creating that software pays enough to keep up with skyrocketing costs of living. In the past, the jobs created by automation have generally been lower paid with less autonomy.
      • lanstin 1 day ago
        This problem is not a software engineering problem nor an AI problem but a problem of the balance of power between working hard vs. investing. If the people who believe in working hard organize and slow down the tendency to rig everything for investors, then the markets should stabilize at a more generally prosperous place.
        • rayiner 1 day ago
          The balance of power is dictated by economic facts, not by organizing or politics. Auto workers in 1950 weren't better organized than auto workers in 2026. They just had more leverage because they weren't competing with auto workers in China. Likewise, Silicon Valley isn't paying people writing web apps $$$ because those workers are organized. They are doing it because they don't have a feasible alternative. If AI enables them to do more with less, they'll take that option.
          • lanstin 16 hours ago
            Are you saying that auto labor unions can't go into other places and organize the people there for higher salaries and less wretched conditions? Or that organizing doesn't increase the power of the hard working people? Economic facts are partially determined by human factors such as the supply curve for labor and information flow amongst sellers or buyers for various goods.
            • xyzzyz 58 minutes ago
              Yes, and by shifting the labor supply curve you can make the laborers worse off, because it will often shift the supply curve of the product they’re making. If labor unions make you pay your workers more, you might need to raise the prices, which will make you less competitive, and reduce your sales, which can result in layoffs, or in preventing new workers from gaining employment.
      • aleph_minus_one 1 day ago
        > My concern would be whether creating that software pays enough to keep up with skyrocketing costs of living.

        You might need to relocate to a place with much lower costs of living.

        This was the idea behind remote working discussed during COVID-19 times:

        - the company can pay less money because the employee is living at a much cheaper place than the expensive city where the company is located

        - on the other hand, even with a smaller salary, the employee has more money at the end of the month because of the smaller costs of living

        So both sides win.

        • passivepinetree 1 day ago
          Ignoring the preference of people generally wanting to live in HCOL areas, this only works if every company hires equally from LCOL areas. One of the benefits of living in a HCOL area is access to the job market it provides. It's much easier to get hired for a software position living in San Francisco than it is living in Deming, New Mexico.
          • Paradigm2020 3 hours ago
            Most of the time they're HCOL areas because the gentry decided to NIMBY those who came after them...

            And the ones who managed to break through anyway (because the gentry needed some cash and built some extra-expensive houses/condos to try and keep the riff-raff at bay...) felt they had worked hard, and became NIMBYers themselves, because housing is not a humane thing... it's an investment that has to grow, grow, grow...

          • bluGill 23 hours ago
            More importantly, in San Francisco there are a lot more opportunities than in Deming. I've never really been to either city (I don't count the conference I was at, since I never left the hotel). However, I can still tell you confidently that if you have a weird hobby, you're much more likely to find other people with that interest, stores that sell the things you need for the hobby, and all those other things in life that you want. If you love doing the types of things people in Deming do, it's a great life, I'm sure. However, as soon as you want to do something off the wall, you may not even find enough people in Deming to field your cricket team, while I have no doubt that San Francisco has a team you could join.
            • Paradigm2020 3 hours ago
              For the gilded class, everything is available where everything is for sale.

              For the not-so-gilded class, who can get a job in an LCOL area that lets them actually own a home and have emergency savings, the lack of cricket won't even cross their mind (cf. Tom Sawyer convincing others to pay him to whitewash the fence).

              Re: stores that sell it, well, there's this thing called Amazon.

              Anyway, I feel like most people here are just afraid of going down the ladder in terms of remuneration... a ladder that, starting in the '70s, got shittier for most...

              First they came for the guy working 11-hour days at a blue-collar job, and I didn't say a thing, because I was white collar...

        • Natfan 1 day ago
          but moving to a lower-COL area can reduce the amount of public and private services one gets access to, no? network connectivity will, for example, likely be worse out in the sticks
        • harimau777 1 day ago
          [flagged]
          • aleph_minus_one 1 day ago
            > Unfortunately, in America places with low cost of living are generally, to put it diplomatically, unpleasant places to live.

            This will change for the better if more and more educated people relocate there.

            • nly 23 hours ago
              And then those areas become more expensive...
          • SoftTalker 1 day ago
            But at least stereotyping happens everywhere!
            • selimthegrim 23 hours ago
              I like how the assumption was they were all white, Christian and rural.
    • Rotundo 1 day ago
      Creating more software does not solve anything if that software is mostly a functional duplicate of other software. Or, in other words, all companies re-invent the wheel many times over. It doesn't matter if you 10x the development of software that brings nothing new besides being written in a shiny new framework.

      We should, IMHO, start getting rid of most software. Go back to basics: what do you need, make that better, make it complete. Finish a piece of software for once.

    • ryeights 1 day ago
      s/software engineer/secretary/

      s/creating software/typing correspondence/

      In a world where software programming/architecting is solved by AI, value will accrue to people with expertise in other domains (who have now been granted the power of 1000 expert developers), not the people whose skillsets have been made redundant by better, faster and cheaper AI tools.

      • ReptileMan 1 day ago
        It could go either way. Don't forget that LLMs also have expertise in the other domains. Who would do better - the chemist with vibe coded app or the developer with vibe coded chemistry?
        • ryeights 1 day ago
          My premise is that a vibe-coded app will be indistinguishable from a ‘hand-crafted’ one. So in that scenario the chemist wins, because the developer has no value to add.

          It is clear to me that SWE and ML research will be subsumed before other domains because labs are focusing their efforts there, in their quest to build self-improving systems.

    • kypro 1 day ago
      There will be more software in the same way there is more agricultural output today.

      The idea that productivity gains which result in more of something being produced also create more demand for labour to produce that thing is more often wrong than true, as far as I can tell. In fact, it's quite hard to point to any historical examples of this happening. In general, labour demand significantly decreases when productivity significantly increases, and typically people need to retrain.

    • ReptileMan 1 day ago
      Except we are now in the golden age where people with 20 or 30 years of experience know what quality software is - or at least what it is not. So they are able to steer the LLMs. Once this knowledge is gone - the quality could go downhill.
  • comonoid 22 hours ago
    From Reddit:

    > After being laid off, a programmer becomes a welder. One day while working, he suddenly muttered to himself, "It's been so long, I've even forgotten how to solve three sum". A coworker next to him quietly replied, "Two pointers".
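    For anyone who has also forgotten: the hint refers to the standard two-pointer technique, where you sort the array, fix one anchor element, and walk two pointers inward from both ends. A minimal Python sketch (function name and return format are my own):

```python
def three_sum(nums):
    """Find all unique triplets in nums that sum to zero.

    Sort first, then for each anchor element sweep two pointers
    inward: O(n^2) overall instead of the naive O(n^3).
    """
    nums = sorted(nums)
    n = len(nums)
    result = []
    for i in range(n - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate anchors
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s < 0:
                lo += 1       # total too small: move left pointer right
            elif s > 0:
                hi -= 1       # total too big: move right pointer left
            else:
                result.append((nums[i], nums[lo], nums[hi]))
                lo += 1
                while lo < hi and nums[lo] == nums[lo - 1]:
                    lo += 1   # skip duplicate second elements
                hi -= 1
    return result

print(three_sum([-1, 0, 1, 2, -1, -4]))  # → [(-1, -1, 2), (-1, 0, 1)]
```

    The sort costs O(n log n) and each anchor's pointer sweep is O(n), which is why the interview answer is "two pointers" rather than three nested loops.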

  • JohnMakin 23 hours ago
    > The career of a pro athlete has a maximum lifespan of around fifteen years. You have the opportunity to make a lot of money until around your mid-thirties,

    This sounds ageist - I'm around 40 and feel I am at my mental peak, compared to even my mid 20's. This isn't a good analogy at all: the brain doesn't "wear out" like a professional athlete's body does, it just changes its structure. The brain is a remarkable organ.

    • sibeliuss 23 hours ago
      He just means: by this age you've probably found your preferred title and level, unless you want to rise to more C-level / executive positions, which are rarer in any case and most folks don't want.
      • JohnMakin 23 hours ago
        This is definitely more charitable, but isn't this already the case now? It seems he's saying past your mid 30's you'd no longer be viable as a software engineer. That's never been the case, and I'm not sure why it would now suddenly be the case.
        • sibeliuss 23 hours ago
          Even clearer: if you don't adapt to the changes taking place in the field, there might not be a future for you. It's not about age, it's about attitude and flexibility (which are, admittedly, issues when getting older).

          In other words, if you want to continue stubbornly typing out code by hand, the person right over there has already mastered agentic tooling and is doing vastly more than you, more quickly, and with greater precision, and will simply be a more fit candidate to hire. Roles for this type of stubborn legacy personality will become fewer and fewer, and you will age out as part of the old school.

          • JohnMakin 22 hours ago
            I see what you're getting at, but if it's not about age, why use an age-related analogy? I should have amended my first statement in this thread to say that it sounds ageist, even if it only implies that the people who will refuse to adapt are older. That day is already here: people are already adapting to this. He seemed to frame it as though people currently starting their careers in their 20s will have this limited timeframe of productivity.
            • geodel 20 hours ago
              The age analogy is fine because, unlike the few who are deep into technology and the latest changes in the field (who, btw, are over-represented on this site), for most IT developers age => experience without actually improving skills.

              As I interview a lot of people for typical Enterprise IT jobs, even at 20 years of experience they don't seem to know much beyond what they learned in the first few years.

  • dakiol 22 hours ago
    We're forgetting one thing: we (mere engineers) have control over nothing. The vast majority of us are at the mercy of executives and investors. Before AI we had some sort of grip, because our skills weren't such a commodity, and, yeah, dealing with code and systems architecture and data and distributed systems wasn't that easy. Now AI is a tool not for us but for the higher-ups: they can finally commoditize software engineering and need only a small fraction of us.

    I see engineers around here fighting and discussing who'll be left behind (the 80%) and who'll remain because they're "more than mere coders" (the 20%). What we don't discuss here is that we're all now at the mercy of Anthropic et al, and that's bad. The irony is that the vast majority of us use Anthropic, so we are just loading the guns for them. It's sad, but we call it progress. Nuts
  • woeirua 1 day ago
    Was it ever a lifetime career? Haven't most people looked around and asked themselves where are all the 50+ engineers? They basically don't exist in large numbers. Ageism is real in this industry. You either save up enough money to retire early, switch into management, or get forced out of the industry eventually. AI is just accelerating the trend. I see very few junior engineers resisting AI. I see a LOT of staff+ engineers resisting it. Just look at the comments on HN. Anti-AI sentiment is real.
    • LocalPCGuy 19 hours ago
      Every 5 years on average since the late 90s the industry has doubled in size. Add in natural attrition (and the other things you mentioned: ageism, management or other tech-adjacent careers, etc.), and even accounting for a modest number of "second career devs" starting later in life rather than out of college, you still have an industry that skews younger simply by virtue of overall growth patterns.

      I think that is significantly overlooked when people ask "where are the 50+ engineers?".

    • HighGoldstein 1 day ago
      > Haven't most people looked around and asked themselves where are all the 50+ engineers? They basically don't exist in large numbers.

      I'm not discounting ageism in the industry, but how popular of a career was it 30+ years ago compared to now?

      • mikestew 1 day ago
        In 1996? Software development was the hot ticket to upper middle class in the early 80s when I was a recent CS grad, and I was already working with people who were in it for the money. By the late 90s, if you could spell “HTML”, you were making decent money as a web developer. This all came crashing down during the Dot Bomb collapse, but SW has been pretty popular for most of my career, and it just continued to get more popular, especially as salaries continued to increase.
        • hylaride 22 hours ago
          I remember seeing an article around a decade ago about a ~50 year old "web developer" claiming age discrimination because they couldn't get a job. Somebody found their resume and it was literally 1990s "html/CSS" added to some other period tooling. Said person found a niche for a new technology (the web) and then stopped upping their skills.

          I've had to change course several times in my career (graduated in 2004). UNIX admin and later network admin, DevOps, and now I'm doing a mixture of DevOps and development (despite not being a full time developer in my entire career, being able to use AI to plug into code and fix/enhance things like monitoring, leveraging cloud APIs, etc has been a game changer for me).

          Right now, as somebody in their mid 40s, I'm seeing AI as a productivity amplifier. I am able to take my experience and steer and/or fight opus into doing what's needed and am able to recognize if it looks right.

          I'm so glad I'm not fresh out of school in this environment, though people said the same thing when I graduated in the Dotcom bust...but being ready and eager to do groundwork was a door opener. Finding that first door to open was tough, though.

        • SoftTalker 1 day ago
          In retrospect the Dot Bomb was a bump in the road. Yes, some people who only knew enough HTML to be a "Webmaster" might have been filtered out, but pretty quickly anyone who could really build software had opportunities greater than before.
        • therealdrag0 17 hours ago
          In 1981 there was 15k cs grads, in 2019 there was 90k (and many non-traditional too).

          You’ll also find that engineers are sorted (by self or not) into different companies. I’ve worked at companies where 75% of engineers were over 40, and I’ve worked at places with the opposite.

    • hnthrowaway0315 1 day ago
      If you were lucky and got in early, then yes, it could probably be a lifetime career. It's like all careers: when you joined early, you got a lot of opportunities, you rode the wave, and you eventually rose to the top if you had the grit to stick with it.

      It's a lot easier to be early than to be smart or quick.

      • senko 23 hours ago
        If you're at the top, you probably aren't coding much. So you're more in management than getting your hands dirty.
        • hnthrowaway0315 22 hours ago
          Yeah, but you still have the choice to stay in the trench. People like Carmack/Cutler do that. But I agree the majority just go high management.
    • mancerayder 23 hours ago
      Managers are being slammed - FB, Amazon and recently Cloudflare and Coinbase.

      New grads are being slammed, "because LLMs can do that work."

      No new folks, no managers, and no olds. What a delightful career we've chosen for ourselves.

      • coffeebeqn 8 hours ago
        Feels like musical chairs. We’ll see how long until it’s just a few oligarchs with their robot software teams
      • Lord_Zero 14 hours ago
        Are we sure some of that isn't just cutting costs while increasing AI spend, then falsely claiming AI replaced them? What better way to justify your AI projects and AI spend than to lay off entry-level data entry people and junior devs, regardless of the success of your projects.
    • suzzer99 14 hours ago
      I'm a 57-year-old engineer still going strong, and I know plenty of others. This job isn't conceptually that hard if you have the experience to break problems into manageable chunks. I probably can't juggle as many things in my head as when I was 25 and proudly cranking out spaghetti code. But experience makes up for a lot of that.

      Now, would I relish looking for a programmer job right now at my age? Hell no.

    • ozim 21 hours ago
      I think you missed the part where there were far fewer software devs/engineers earlier.

      Year after year it was just much more new people joining as things got easier and more accessible.

      Now 40- or 50-year-olds are few and far between, where most guys I see are in their 30s. The ones that are 60 are diluted in the sea of new entrants.

      Ageism didn't come from the top; it just happened with the flood of young employees. There's a social dynamic where you might get a 40-year-old who isn't a manager getting along with a bunch of 25-year-olds, but that's going to be the exception, not the rule.

  • Backslasher 2 hours ago
    Referring purely to the article and not to the title (which is hard, because I see a lot of people are, and it is tempting), I can say that I disagree with point 1 (Using AI means you don’t learn as much from your work).

    At least personally, using codegen LLMs allows me to step into areas I'm completely unfamiliar with, produce value, and learn new things along the way. I just made changes to a FOSS Android app I'm using, and I'm relatively inexperienced in mobile. However, now I know some Kotlin keywords, I know a bit about the UI libs, and I know better how to build and test Android code.

    So I don't think I learn less; maybe I just learn the things that interest me.

  • simonw 21 hours ago
    This is such a misleading title. The post isn't about software engineering not being a lifetime career, it's about this:

    > If AI does turn out to make you dumber, why can’t we just keep writing code by hand? You can! You just might not be able to earn a salary doing so, for the same reason that there aren’t many jobs out there for carpenters who refuse to use power tools.

    The argument the piece makes is that being a software engineer who insists on writing code by hand may no longer be a lifetime career.

    I think the definition of "software engineer" is changing, and it's not even changing that much. We construct software to help solve human problems. We can keep on doing that, just now we get to do it more.

  • strken 1 day ago
    Argument A: AI means you don't learn as much, so even though you are more effective, it inhibits your growth and you shouldn't use it. However, on a pragmatic level, it's effective to hire a bajillion people, fire them at will, and get AI to do everything. You will get so many JIRA tickets closed and so many lines of code written.

    Argument B: AI means you don't learn as much, and the single most useful work product of a software engineer is knowing how the code functions, so it's depriving your company of the main benefit of your work. Also, layoffs are terrible business strategy because every lost employee is years of knowledge walking out the door, every new hire is a risk, and red PRs are derisking the business.

    Institutional and personal knowledge seem similar, but the implications of each are radically different.

  • pugworthy 1 day ago
    I'm repeating what others have essentially said, but ask yourself what's on your resume. If it says "Software Engineer" and that's all it talks about, then yea you might not find it's a lifetime career.

    But if it's a diversity of things (that use or leverage software development) then you probably have a lifetime career ahead of you.

    I've been writing software for over 40 years but I've never seen myself as having a software engineering career. I've been a research assistant in geophysics, a marine technician on research ships, a game developer, an advisor to the UN, and a lot more. Yes all through that I used software, but I did a lot of other things in the process of using it.

    • Lord_Zero 14 hours ago
      It can be incredibly difficult to find opportunities like that
  • youknownothing 15 hours ago
    People who think that "software engineering" and "writing code" are the same thing will indeed be out of a job. People who understand the difference will continue to thrive.

    Thinking that software engineers can be replaced by AI is like thinking that mathematicians can be replaced by calculators.

  • magarnicle 18 hours ago
    Even if a manager can just conjure the software they want instantly using AI, they are still going to prefer having a nerd to manage it for them - to know how to prompt engineer or even just organise it all.

    It might not look much like software engineering, but it's still going to be nerd stuff that most people don't want to bother with.

  • afavour 1 day ago
    Seems the solution here is the same it's actually always been if you want career progression: be more than just a code jockey. The true value of an engineer is to be plugged into overall roadmaps, broader thinking around product, how to achieve company goals, etc etc.

    Yes, LLMs might dramatically reduce the amount of code we write by hand. But I'm a lot less convinced they'll solve all of the amorphous, human-interacting aspects of the job.

    • harimau777 1 day ago
      My experience has been that companies actively work to prevent people from becoming more than code jockeys. For example, most of the places I've worked have viewed code delivered as the ONLY metric used to evaluate performance. Attempts to contribute to roadmaps or strategy are ignored at best and punished at worst.
      • randcraw 23 hours ago
        Yeah, 95% of the available advancement in computing is in people management, not technical mastery. Businesses much prefer to hire externally to serve any non-core capabilities, especially to minimize internal culpability should anything go wrong. That leaves little opportunity to think outside the box technically.
  • raffael_de 1 day ago
    The differentiator is augmenting reasoning with AI versus replacing reasoning with AI. But those who choose to replace their reasoning with AI probably weren't good at it to begin with; if they were, they'd choose not to replace it. The exception is if AI can actually replace reasoning (which it can't, yet), in which case it's game over for a career in software engineering anyway.
  • oytis 23 hours ago
    I don't understand it. The time-limited career would work if we were born with innate ability for software engineering and would lose it over time by using AI. Most people are not born with that ability though, it needs to be developed first.

    And read Programming as Theory Building already, it's not that long

  • philipnee 1 day ago
    80% of my day-to-day job has never been pumping out lots of code. It is a complicated career, isn't it? We do a lot of alignment, design, and thinking. I can't even agree with the idea of outsourcing thinking; I think AI is very good at helping us think clearly, but it doesn't really "think" for us.

    If you do that, then... you're likely very replaceable.

    • azan_ 1 day ago
      If AI becomes good enough to easily produce maintainable and high quality software, then I really can't see how demand for software engineers would not plummet. Even lots of the non-coding work that software engineers do, such as accurately capturing what the client actually wants, will become much less valuable. E.g. currently, misunderstanding a client's requirements is catastrophic and can waste months of labour; with AI it could become a matter of at most a few hours lost. So I can understand the argument that software engineering careers might be safe because AI may plateau and never reach the level where it's actually capable of producing good software. But I absolutely don't buy that software engineering will be safe if such AI does exist. Even if your current work is just 20% actual coding, you must remember the second-order effects that will take place once quality code generation is 1000 times faster.
    • hnthrowaway0315 1 day ago
      AI can also do alignment and pull from its vast training dataset for design and "thinking" -- because 99% of the problems in this world have already been solved, multiple times, maybe not in exactly the same format, but in a very similar one.

      I also see that in the future humans will adapt to AI, instead of the opposite. Why? Because it's a lot easier for humans to adapt to AI than the other way around. It's already happening -- why do companies ask their employees to write complete documentation for AI to consume? This is what I call "Adaption".

      I can also imagine that in the near future, when employment plummets, when basic income becomes general, when governments build massive condos for social housing -- everything new will be required to adapt to AI. The roads, the buildings, everything physical is going to be built with ease of navigation by AI in mind. We don't need a Gen AI -- that is too expensive and too long-term for the capitalist class to consider. We only need a bunch of AI agents and robots coordinated in an environment that is friendly to them.

      • randcraw 22 hours ago
        Rather than coining a new word like adaption, I'd call this acculturation. It's reshaping not only SW dev but natural language too -- how we read and write and how we speak.

        Everyone knows that AI-written slop isn't worth actually reading. So when reading mass media content we skim over each paragraph's opening phrases rather than read it deliberately, sentence by sentence. We also do this while writing notes, dropping determiners, acronymming common phrases, and making references to characters/scenes in popular media. Now with the rise of vocal interfaces and ever shorter rounds of engagement, all this abbreviating will only exponentiate.

  • Ostatnigrosh 13 hours ago
    The final verdict that software engineers won't exist X years from now is a bit contrived. Today, my team is looking to hire an AI software engineer. I reached out to my close group of developer friends to gauge interest, only to find out that each one of them is ALSO trying to find/hire software engineers, all looking for this new "paradigm" of programming knowledge. Maybe the role itself looks different today than it did 5 years ago, but it seems like every company is trying to accelerate its development and find new opportunities that didn't exist before the AI craze.

    More than anything, I believe that AI is pushing out those who enjoyed the *act* of programming more than the product being delivered. Mostly because those individuals might have the hardest time adopting this new way of getting things done.

    And honestly, I feel for them. Coding has always felt like an art form to me. Nothing feels better than someone commenting on the elegance/beauty of something you've written.

  • rcpt 15 hours ago
    I am just not seeing a future where a product manager opens their laptop and says "build me a self driving car company" and then gets one
  • nedt 8 hours ago
    Just stop calling everyone a software engineer. You can be a script kiddie, coder, or developer, but all of those are not engineers. Engineering is much more than just writing the code. There is a reason why you can earn a degree in engineering; not that a degree automatically makes you an engineer, or that one is even needed.
  • AbbeFaria 1 day ago
    There’s a hierarchy amongst knowledge work and AI hasn’t yet been able to do the work that is rare and valuable.

    Over the past two decades, there have been a lot of solved problems, like building boring scalable web apps, UX design, etc., and AI is fairly good at these, enough so that good prompting can get you very far. This shouldn’t be a surprise; there’s a lot of publicly available data for this (GitHub repos etc).

    On the other hand, there are rarer computer science problems, like designing efficient datacenters, GPUs, and DL models. Think about the problems that someone of Jeff Dean’s or James Hamilton’s (AWS SVP) ability, or a skilled computer architecture researcher like David Patterson, would solve. These are incredibly hard and rare problems, and AI hasn’t been able to make much progress in these areas. That’s true for other sciences as well.

    If you’re a regular Joe like me who builds boring CRUD apps, AI is coming for you.

    What I mean is if you are working on incredibly hard and rare problems that require rare skills and also those problems don’t have publicly available data that LLMs can be trained on, you’re safe from being “automated” away. If not, you must plan accordingly. Also if you’re a skilled manager (in any field) AI cannot replace you, highly skilled managers that can get the best out of their teams have rare skills that aren’t easily replicable even amongst humans much less AI. Although, if going forward we need fewer developers we will need fewer managers too.

  • mbgerring 20 hours ago
    I don’t understand why so many people are convinced that “this time is different.” New tools raise the ceiling of what’s possible. New jobs emerge at the limit of what’s possible with new tools. The jobs doing what we do today will disappear. New jobs with greater complexity and specialization will emerge. I have watched this happen in the software industry in my lifetime. I expect that it will continue to happen.
    • jhrmnn 20 hours ago
      This can’t be sustainable; there must be a limit in human biology to how complex a job we can handle. More and more people will fall under that threshold.
      • mbgerring 4 hours ago
        I think there’s probably a limit to given technologies of social organization, but not biology.

        I think most people have more potential than they’re ever allowed to reach, and that capitalism is indifferent to the social structures it destroys. I don’t think neoliberal capitalism was the end of history, or that returning to feudalism is the answer, but we’re probably going to have to do something different to avoid social collapse in my lifetime.

    • borzi 20 hours ago
      It's not different. If you haven't already, read "Extraordinary Popular Delusions and The Madness of Crowds".
  • BerislavLopac 8 hours ago
    "No longer"? When was it exactly?

    The truth is that software engineering, as a profession, is not even a full hundred years old. Even if someone spent their whole career in it, the job has probably changed so much over time that it became a completely different one.

    So far, we have barely scratched the surface.

  • AllanSavageDev 20 hours ago
    From all I can see, things for US citizen developers are all but over.

    Not AI, offshoring combined with downsizing of US based engineering orgs.

    Corp America has finally figured it out after 2 decades of entitled developers turning 2-day tasks into 2-week tasks in the name of "best practices", "architecture", "Doing It Right!", etc., all while commanding high salaries.

    It turns out that Good Enough is in fact good enough, and the people who write the checks are onto it. Even if it's not quite good enough, cheap offshore resources can just be sent back to make it work. A US-based staff of 5 people who can be held responsible for guiding a much larger offshore group seems to be the common pattern.

    All of this was imparted to me by a CIO in a recent interview with a financially strong mid-sized company in the eastern US. The developers I interviewed with were EXCEPTIONALLY COMFORTABLE and displayed zero signs of any kind of stress from maintaining their literally 20-years-out-of-date infra. It was insinuated that the team I interviewed with "probably won't look the same in 6 months" too.

  • adrianmonk 20 hours ago
    Here's a better comparison to pro athletes. Their work output is winning games. How do they get good at (and stay good at) that? Is it by playing real games for points?

    That's a part of it, but only a small part. They don't get good at the thing mainly by doing the thing. They get good at it by training to do the thing.

    An NFL football player does a ton of things other than playing in games. They have practice scrimmages. They do drills like throwing, catching, running patterns, tackling, reading quarterbacks, stripping balls, picking up fumbles, etc. They work with coaches on their technique. They watch film. They spend many hours in the gym and on the track building their strength, speed, cardio, and stamina.

    Yes, it's true that your software skills will atrophy if you don't use them. But that doesn't mean your skills have to get worse and worse causing you to eventually quit the job. It means you need to set aside time to maintain your skills. It may no longer happen automatically as a side effect of your work, but it can happen intentionally instead.

  • xrd 20 hours ago

      "We may be in the first generation of software engineers in the same position. If so, it’s probably a good idea to plan accordingly."
    
    He compares software engineers to pro athletes. What does it mean to plan accordingly? Start working with the mob to fix poker games? I don't know what "plan accordingly" means at all but it is a thought provoking statement.
    • sowbug 17 hours ago
      Imagine AI had never happened, but you had set a personal goal to write your last line of code in exactly five years. You can manage coders, you can write a novel about coding, you can run a yoga school for coders. You just can't code anymore. What do you do?

      I know nothing about professional sports other than what I learned from Jerry Maguire, in which Rod Tidwell says "I got a shelf life of ten years, tops. My next contract's gotta bring me the dollars that'll last me and mine a long time. Shit, I'm out of this sport in 5 years. What's my family gonna live on? Huh?" That's the sentiment.

  • cmiles74 23 hours ago
    Comparing software development to carrying heavy things at a construction site feels like a real stretch to me.

    'If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”.'

    On another note, for sure software developers are saying things like "this is the part of the job that I like" or "if you aren't doing the work, you won't be good at the work." But other people are saying this, too. I just saw an episode of "Hacks" ("QuickScribbl") where the writers say pretty much this exact thing when confronted with AI tooling designed to "make their job easier". Is writing comedy also like lifting heavy objects at a construction site?

  • wduquette 21 hours ago
    "You can! You just might not be able to earn a salary doing so, for the same reason that there aren’t many jobs out there for carpenters who refuse to use power tools."

    I've long regarded myself as more a master craftsman than an engineer, and I've had the pleasure of working on one-of-a-kind or first-of-a-kind things. Perhaps fortunately I'm near retirement. But I genuinely enjoy the coding: it's how I engage with the problem and learn to understand it. It's also how I ensure that I'll be able to read the code and find things in the code base when I come back to it years later. Last thing I want to do is spend my days overseeing someone (or something) else's code. If I wanted to be a manager of programmers I could have done that years ago.

  • nsoonhui 15 hours ago
    I think the polarizing response regarding AI depends on which lenses you are looking through. For junior roles, yes, the job is rapidly disappearing. But for senior roles, experience and judgment are more important than ever.

    So yes, software engineering may no longer be a lifetime career for a lot of people, much like elite sport is not a viable career for most—but still, some will, and must, make it their career.

  • rglover 18 hours ago
    Screwdriver -> power drill

    Hand-coding -> llms/agents

    Sometimes the only thing that can fit into a tricky spot is a screwdriver. The power drill didn't make screwdrivers obsolete, it just made them less necessary day-to-day.

    Same thing here. LLMs are power tools, but sometimes, the only thing that can fit into a "tricky spot" with code/systems is knowing how to do it by hand.

  • hoppp 21 hours ago
    If you don't know how to code then you can't really influence the AI technically and that can result in everything being the same.

    Maybe you want a React app and using Redux for state would be best for the specific case, but the AI doesn't recommend it and you don't know; then you are missing out and can end up with something suboptimal. This was just an example.

  • tehnub 1 day ago
    >If the models are good enough, you will simply get outcompeted by engineers willing to trade their long-term cognitive ability for a short-term lucrative career

    > (2) AI-users thus become less effective engineers over time, as their technical skills atrophy

    Wouldn't (2) imply that if everyone just used AI there eventually would come a time when there aren't engineers who will outcompete you (because their skills are so atrophied)?

  • EliRivers 21 hours ago
    The majority of my activity today, as a professional software engineer with two decades of experience: trying to get Team Sales to express what they want. It's so hard. I see no way an LLM can do this. I could possibly be replaced by someone who spends their time begging Team Sales to type what they want into an LLM.
  • maerF0x0 18 hours ago
    Ageism is alive and well in our industry.

    People need to learn the difference between fluid intelligence and crystalized intelligence.

    People need to hear that startup success is maximal when the founders are older, not younger. VCs chasing youth are statistics deniers.

    • bsder 17 hours ago
      > People need to hear that startup success is maximal when the founders are older, not younger. VCs chasing youth are statistics deniers.

      For building a successful business, older founders are more likely to succeed. They understand the business and see something that everybody else is getting wrong that they can correct and make money on. They will grind at it and have a profitable business for decades.

      This is anathema to VCs.

      VCs want a short-term lottery ticket. They want your business to go big or go home. They would rather see your business fail quickly than grind profitably for a decade. Youth feeds into this in two ways. First, youth can align with the fad of the moment in the hopes of cashing out while the iron is hot. Second, youth can be bullied into doing stupid things that a more experienced person will flat out tell you are stupid and refuse to do.

      This is the standard misalignment with founders and VCs.

      The interesting question is whether experienced people can leverage AI to construct actual new businesses rather than just AI bandwagoning. The fundamental problem is that most successful businesses have to deal with customer service, and AI doesn't do jack to make that better.

  • mobiuscog 1 day ago
    Software engineering today is almost nothing like the role it was 30 years ago.

    Maybe if you somehow stick with the same company for your entire career, it could feel somewhat similar... but I doubt it, as 'best practices' and many other things cause it to change.

    The days of 'lifetime career' had already gone for most people, way before AI arrived.

  • somesortofthing 21 hours ago
    Agent-assisted programming is fundamentally the skill of directing and supervising agents. I don't see any reason to believe that working a job where you direct and supervise agents will make you any worse at directing and supervising agents long term.
  • dzonga 21 hours ago
    Such an incoherent argument.

    > Professional athletes & construction workers work in physical fields, which means there are physical limits on what they experience, both in what they do & in what their bodies can take.

    > Software engineering is an art & an engineering discipline, which means that as long as you're of sound mind you can do it till you die of old age, or even if, say, you go blind, because your ability to refine / your taste is not dependent on your physical capabilities.

    > LLMs one-shotting things is not engineering, because engineering is about compromising within constraints & using rules of thumb. So if you have no constraints, you are not engineering.

  • deferredgrant 18 hours ago
    The practical lesson is probably to build adjacent judgment: product sense, domain expertise, systems taste, and communication. Pure implementation may be the most exposed slice.
    • upupupandaway 18 hours ago
      This is, to me, the "correct" answer. I am starting to see the role evolve to "software producer", which, like music producers, direct an entire problem space using tools (Claude / ProTools + presets) and occasionally bring some specialized musicians for some advanced parts. Most commercial software will be built this way, much like almost all commercial music already is.
  • osigurdson 19 hours ago
    It won't be a career if AI gets good enough that you don't have to read or understand the code; otherwise, I don't think AI will have much impact on jobs.
  • Stevvo 23 hours ago
    I take issue with the premise that "Using AI means you don’t learn as much from your work" With AI assistance, I tackle far more tasks than I would without it. Learning per task goes down, but cumulative learning does not.
  • noashavit 14 hours ago
    Tbh I don’t know if any careers are lifetime these days. Maybe plumbers have job security. GPUs need cooling…
  • abhik24 22 hours ago
    Totally agreed and on point. Calculator operators aren't around much anymore.
  • akizminet 9 hours ago
    Following this logic, mathematicians disappear first.
  • gcanyon 1 day ago
    Is anything today a lifetime career? I’ve had at least five or six job descriptions over my time, and at least a few of them pretty much don’t exist anymore, or are changed beyond recognition.
  • diebillionaires 20 hours ago
    Have you seen construction workers lately? They have machines for everything. I highly doubt they even need to lift things any more.
  • topherPedersen 21 hours ago
    It's been absolutely astonishing to see software developers pick software development as the first profession to attempt to automate away. Couldn't you geniuses have picked any other profession to start with? And it's not just the developers at Anthropic & OpenAI, even at my own company, the rockstar developers were the first to try and automate away all of the software development jobs at our company.
    • xboxnolifes 18 hours ago
      Software development is not the first profession that has been automated away by programmers...
    • simonw 20 hours ago
      I've been releasing open source software for ~25 years at this point. The goal of that was always to save other developers time, as part of a collaboration where other open source developers save me time with their own work.

      That's worked out pretty great so far!

    • Jtarii 20 hours ago
      I don't think it's that surprising that software developers would try and make tools to make software development easier.
  • coldtea 23 hours ago
    > I don’t think there’s compelling evidence that using AI makes you less intelligent overall [1].

    That statement is evidence enough.

  • moezd 12 hours ago
    This should be repeated ad nauseam: just because CNC tooling is available, you wouldn't get rid of your wood/metal workers. You give them more scoped tasks, expect them to finish sooner with more standardized output, and... you let them experiment a bit on the side. Perhaps not with industry-grade tooling unless they want to pay for fancy tooling, but DIY-style. And then you lure nontechnical people in by telling them how much they can save with DIY. Of course it doesn't matter to the non-techies that some of the finishes they want are truly gruesome handiwork, but hey, they are part of the DIY community now!

    That's the way. Anything else shows that they don't know how the modern economy works. And let's admit it: as a bunch of IT/software people here, we are terrible at this.

  • thegrim33 15 hours ago
    Author makes a living by working for a company that sells an AI coding product. Their livelihood is dependent on people buying AI coding products. The author just happens to write an article about how amazing AI is for coding and how software engineers will no longer be needed. Wow, I'm so shocked, every single time it's the same exact pattern.

    Almost like we learned nothing from the bitcoin period where all the people that would make money if people invest in bitcoin constantly posted stories to HN about how amazing bitcoin is.

  • mariopt 1 day ago
    Why are we upvoting this?

    Virtually the entire blog is about AI, with a ridiculous publishing rate (https://www.seangoedecke.com/page/5). Funny how I can look at this site's HTML and know right away it was done with AI.

    Can we stop upvoting vibe published articles? The arguments are flawed and don't even make sense to anyone who does software

    • passivepinetree 1 day ago
      I think you're being a bit harsh here.

      Yes, the blog is mostly about AI, and yes, he publishes very frequently. But his articles don't read like AI and he claims not to have used it in his writing (https://www.seangoedecke.com/avoid-ai-writing/). And regardless of how you feel about the content, the community has clearly decided it's worthwhile as a discussion point.

    • aleph_minus_one 1 day ago
      > Why are we upvoting this?

      Because people want to discuss the topic of the headline.

  • feverzsj 23 hours ago
    So, the permanent underclasses those billionaires are talking about are actually just juniors who never get a chance to become seniors.
  • jowler96 8 hours ago
    Wage suppressing propaganda btw
  • boombapoom 21 hours ago
    It never was. Look around, it's a young man's game from the get-go.
  • dusted 23 hours ago
    I'm a software engineer and architect, I love my job, I love diving into the small details, I love the grand overview.. I love identifying concepts and applying them to achieve elegant high-performing systems.

    I love thinking about what kind of assembler the compiler may generate (though honestly, I haven't got a chance), I love thinking about how languages should be more dynamic (Who's got actually-first-class functions? Like, ones that you can build, compose, combine and manipulate to the same degree you can a string or a JSON object, no LISP, you're cheating, close no point).

    And yet.. I don't care that much. Not because I'm late in my career (I'm 40, there's still some years left in me), but because I want to make computers do things, and what I enjoy doing is thinking up ways the things can happen, and sometimes the particulars that matter when making a lot of different things happen in a coherent system.. And yea, LLMs are trained on peoples output, and from what I'm seeing everywhere, is that people are overall fairly terrible at that, and most of the plumbing-type glue being written is not worth anyones time..

    And I'm not saying I don't care because LLMs can't do my job.. heck, even after hours of back-and-forth spec building and refining every little nook and cranny, the stupid coding agent - which at bottom just predicts that the words coming after the previous words make sense - still cheats or gets it wrong (even after it's beautifully explained, proven even, by reasoning and example alone, and on first try even). As soon as the plan is put into motion, it'll mess it up on some scale so fundamental I should just have done it myself.. And I hope that changes, I hope that I don't have to go into such detail.. I hope to become a steward of taste rather than a code-reviewer.. I hope that I will eventually not be needed for that anymore.. I want it to replace me, so I can move to telling it what I want, and have it made that way..

    I hope I won't need to steward good taste, and that nobody will.. I hope the applications I use in 5 years will be a collection of one-offs, and gradually improving tools that was written _just_ for me, for my way of working, and my way of thinking.. I want to prompt the damn program to change itself as I discover new ways to do things, until it can eventually figure out how to automate the last bit of my task away.. And then I'll go do something else exciting.
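    (For what it's worth, the "first-class functions" wish above can at least be partially sketched in TypeScript; `compose` below is a hypothetical helper defined for this sketch, not a standard library function, and the kind of manipulation the comment asks for goes further than this.)

```typescript
// Illustrative sketch: functions built and combined like ordinary values.
// "compose" is a hypothetical helper defined here, not a built-in.
const compose =
  <A, B, C>(f: (b: B) => C, g: (a: A) => B) =>
  (x: A): C =>
    f(g(x));

const double = (n: number) => n * 2;
const increment = (n: number) => n + 1;

// Built out of smaller functions, the same way you'd concatenate strings.
const doubleThenIncrement = compose(increment, double);

console.log(doubleThenIncrement(5)); // 11
```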

  • ricardorivaldo 17 hours ago
    "Simply put, someone will always be needed to do the job—a software engineer (SWE). Today, SWEs don't punch cards to program; in the future, they may not even 'write' code. However, the job will continue to exist, even if it looks different."
  • lowbloodsugar 14 hours ago
    I love coding. AI does the boring bits. I still get to do the fun bits.
  • delusional 1 day ago
    > Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”

    I dont know, maybe in your part of the world, but where I'm from we have a series of robust worker protection laws that try to limit the damage the work does to you. We generally consider it a bad thing for workers to damage their bodies, and if we could build houses without it, we'd prefer that.

    In this specific case we do have techniques to build software without causing damage, so why change that?

    This post is arguing that maybe software engineering should start being harmful, even though we know it doesn't have to be. It's a post of a guy begging to be fed into the capitalist meat grinder. Meaningless self-sacrifice.

  • saltyoldman 18 hours ago
    "may no longer be a lifetime career"

    60 years total so far

    construction has what, 6k years

    I'll get into that next.

  • pvelagal 23 hours ago
    Imagine a situation where AI creates thousands of lines of code across a few repos, and there is a production issue that doesn't get resolved by AI. How can humans jump in and resolve the bug without knowing anything about the code?
  • jongjong 18 hours ago
    Engineering boils down to figuring out what is important and prioritizing.

    This requires having an understanding of a business domain, economics, human psychology and technology.

    The competitive aspect of it means that you need to understand these things better than most people and machines. If you don't, then your skills have no value on the market. Will generalist AI trained on public data ever understand these things better than software engineers across every possible niche?

    I don't think so, because that knowledge is usually gate-kept. Nowadays, new engineers almost have to beg to be given access to knowledge of company systems. It takes at least 6 months for a skilled engineer to ramp up on large systems... And it's mostly because of institutional resistance.

    The thing is, it doesn't even require people to be withholding information... Some engineers will happily share everything they know about internal systems... But in a big company; you first have to identify this person. That can take a while... Then you need to identify other persons who will give you other information that is relevant to your specific tasks/integrations. Then there are all sorts of other constraints and restrictions to deal with.

    You can't just deploy an AI to a big company and it will magically guess all the endpoints which exist... You have to tell it what is available and enterprise systems are not designed for transparency.

    Big companies resorted to a kind of security-through-obscurity. This used to be considered bad practice 10 years ago but at some point they just gave up, let complexity run amok and started calling it "multiple layers of defence" but now this obscurity is a problem for evaluating system security (too much unknown context is required, nobody fully understands the entire system) and it slows down development and maintenance as well.

    Whoever knows the most context about a system has the advantage... And this isn't necessarily a company insider. Most likely, the people with the most context are platform providers.

    I predict that most major hacks will originate from platform providers. We already started seeing this with Axios hack (originating from GitHub/npm) and Vercel (originating from Google Workspace).

    The centralization risk is massive because each platform is servicing so many large companies. It only works when there is perfect incentive alignment but that's not usually the reality during difficult economic times. Third-party platforms cannot be trusted anymore.

    • rTX5CMRXIfFG 12 hours ago
      I don’t have anything to add to your comment, but I have no means to bookmark it either other than by replying. You’ve put into words exactly what I think.
  • yieldcrv 21 hours ago
    Month 30 of software engineering disappearing in the next 6 months

    I'm greatly anticipating the next Great Leap Forward™ with a publicly available Mythos or other new paradigm I can't currently imagine

    but at the moment, agentic coding has made me busier than ever before, while it's the Product Managers, UX, QA, Data Scientists and DevOps that have disappeared from the teams I'm on - across multiple organizations - and I have to do all their work, and make dashboards that I didn't have to make before as well

    All the projects that would have been cancelled by Q3 are being attempted in Q1, means more work

  • mystraline 21 hours ago
    It'll move, sure.

    I'm looking at proper engineering in building local LLM networks, with proper firewalls, capability access, and guards around the LLM systems to allow and enable advanced use while avoiding a "lol delete everything" incident.

    When there's a land grab, move to selling tools and know-how: work in maintaining the tools, and their proper operation and maintenance.

    I also look at upsells like local LLMs as a reason to do this in house, so that companies aren't exposed to rug pulls, violation (consumption) of trade secrets, or breaches of confidential discussions.

    And LLMs aren't good at recommending tech stacks for running them. The stuff is moving faster than most training data sets can keep up with.

  • anarticle 22 hours ago
    Software is a tool to solve a problem, as long as you keep finding problems that you can solve with it, you're likely to get paid to do it.

    If your crowning achievement is: "I can 100% all leetcode hards" I have bad news for you.

  • varispeed 22 hours ago
    Software engineering stopped being a career a long time ago. Companies have no respect for software engineers and treat them as a commodity that can be replaced at any time. The traditional career "progression" also doesn't exist: you can get a pay rise only so many times and become the most senior of seniors, unless you want to fulfil the Peter principle.

    While most developers were busy grinding, the corporations did their utmost to ensure that the only sensible pathway to wealth, running your own business, is closed. In many countries, due to regulatory capture enacted by corrupt governments, making a profit is next to impossible, and that's if you manage to jump bureaucratic hurdles that are not present for larger corporations.

    AI is just a tool. Asking whether AI will replace the software engineer is like asking whether the hammer will replace the carpenter.

  • traderj0e 23 hours ago
    Nah it is
  • dodu_ 13 hours ago
    Wake up babe it's time for your hourly venture-approved FUD-slop.
  • coolThingsFirst 1 day ago
    It never was a lifetime career; if you don't get the dough by 35, you just failed.
    • SoftTalker 1 day ago
      Absolutely untrue, you could have a solid career writing back office or internal software in financial services, insurance, higher ed, any number of industries. Would they make you a millionaire? No. But they'd pay for a nice house in the suburbs and raising a family.
    • whateveracct 23 hours ago
      i'm about 35 and i have made good money but not enough to quit. i plan on just sitting around cashing checks for another decade. with a few liquidity events along the way to sweeten the deal. should pay for my mortgage, some home renos, and fund my 401k etc. i don't foresee myself being out of work (and i don't even use AI to code! i'm just Actually Good!)
    • mythrwy 22 hours ago
      I started at age 39 though and did pretty well up until a year or two ago (16 years total).

      Like many people I've been sad about the loss of a career I spent years developing skills in and I'm 55 now and won't be quickly retraining for another high paying career. Fortunately I do have other skills I developed earlier in life and low needs so will probably limp by fine but it's still a painful adjustment.

      Point being, you could always write code as an older person. Well, back in the old days when we wrote code anyway.

  • keybored 1 day ago
    > I hope that this isn’t true. It would be really unfortunate for software engineers. But it would be even more unfortunate if it were true and we refused to acknowledge it.

    More AI Soothsaying. Not so hard on the Inevitabilism this time.

    https://news.ycombinator.com/item?id=47362178

  • pphysch 1 day ago
    On the contrary, in an efficient economy, every business operations manager (MBA) would be a skilled software engineer, able to comfortably manage data flows and design custom automated processes. There's so much potential energy there in unlocking this technical literacy.

    Less "pure" programming, but lots more programming in general.

  • the_real_cher 1 day ago
    Was it ever? It's always seemed weird to me that people even think 'software engineering' is a career.

    It's a tool for knowledge work.

    No carpenter is a specialist in drills.

    It seems to me that the best way to navigate a long term career is to have another specialty and use software engineering as a tool within that specialty.

    • Aurornis 1 day ago
      It most certainly was a lifelong career.

      I’m kind of confused how you might think it wasn’t. Going through a career as a software dev until retirement was very common.

      Software engineers didn’t just disappear after age 40.

      • aleph_minus_one 1 day ago
        > Software engineers didn’t just disappear after age 40.

        At the end of the '90s and beginning of the '00s ("dotcom bubble"), it was a common saying that if, as a programmer, by the time you are 30 or 40 you don't have a very successful company (and are thus basically set for life), you basically failed in life; exactly because "everybody" knew that programming is a "young man's game" (i.e. you likely won't get a programming job anymore when you are, say, 35 or 40 years old).

        So,

        > Software engineers didn’t just disappear after age 40.

        is rather a very recent phenomenon.

        • atmavatar 23 hours ago
          Enter the carousel. This is the time of renewal.
          • selimthegrim 23 hours ago
            I'm not sure anyone under 40 is getting that reference.
            • ricardorivaldo 16 hours ago
              and no one over 30 will be alive to understand
        • GrinningFool 1 day ago
          > At the end of the '90s and beginning of the '00s ("dotcom bubble"), it was a common saying that if, as a programmer, by the time you are 30 or 40 you don't have a very successful company (and are thus basically set for life), you basically failed in life; exactly because "everybody" knew that programming is a "young man's game"

          That seemed commonly held among folks participating in the dot-com bubble. Plenty of people had been doing it for decades even as the bubble was growing.

          > Software engineers didn’t just disappear after age 40.

          >> is rather a very recent phenomenon.

          Not really. It's not that they disappeared, it's that they're a small fraction of the overall SWE population as a side-effect of how much that population has grown.

        • Aurornis 22 hours ago
          > At the end of the '90s and beginning of the '00s ("dotcom bubble"), it was a common saying that if, as a programmer, by the time you are 30 or 40 you don't have a very successful company (and are thus basically set for life), you basically failed in life;

          This wasn't common anywhere except for maybe the Silicon Valley bubble.

          The rest of the US, and even the world, could see that not having a very successful company of your own is not equal to being a failure.

          • aleph_minus_one 19 hours ago
            > > At the end of the '90s and beginning of the '00s ("dotcom bubble"), it was a common saying that if, as a programmer, by the time you are 30 or 40 you don't have a very successful company (and are thus basically set for life), you basically failed in life;

            > This wasn't common anywhere except for maybe the Silicon Valley bubble.

            This was a very common sentiment even in Germany at this time.

    • strken 1 day ago
      Software is wood, not drills, and if we somehow invented bacteria that gradually built an ugly but saleable house when fed on water and nutrients and nudged into shape, I bet carpenters (well, framers or whatever they're called in the US) would have an identity crisis too.
    • jdc0589 1 day ago
      I kind of disagree. You are describing a kind of person who is extremely valuable: someone proficient in SWE who also has domain-specific skills in some niche.

      That's great, but it's nowhere near the norm, and people have been doing generalist software engineering for decades. For a long time there has been enough work for generalists that it has made for a very reasonable career.

      IMO AI is the first thing that has ever actually challenged that.

    • azath92 1 day ago
      I'd disagree with this analogy ("No carpenter is a specialist in drills"), and I think it's an interesting lens through which to look at the evolution of our tools.

      I think there are trades where tool (or process, if I may be allowed to extend the analogy) specialists exist and are highly valued. My dad is a plumber, so I'll use that example, but I'd trust similar is true for carpentry. There are specialists by task/output (new construction, repairs, boilers etc.), but also tool-specialist plumbers and companies: for example, drain-clearing equipment, or certain kinds of pipe for handling chemicals other than water, are very specialised, and there are roles for them because the thing they enable, the criticality of the task, and often the cost and complexity of using the tool are high enough to make specialisation valuable.

      IMO software has, for the 10 years I've been working in it, been in an unusual position where the tools (languages, engineering practices, tech stacks) were super technical and involved, but could also be applied to a large number of problems. That is the perfect recipe for tool specialists: a complex tool with high value and broad domain/problem-space applicability.

      Because of that tool specialisation, we've separated the application of the tool to a problem/domain from the tool use itself. Reducing the complexity of applying these tools to many problems means all domain specialists will use them, relying less on tool specialists.

      Imagine a McGuffin tool for attaching any two materials together, but which took a degree to figure out (loose hyperbole here), that suddenly you could use for 5 bucks and a quick glance at the first page of the manual. An industry that used to have lots of McGuffin engineers would be mega disrupted, and you could argue that those tool specialists would have to identify more with what they were building than with the McGuffin they were using.

      • randcraw 22 hours ago
        Yeah, IEEE Spectrum has responded to the dissimilar roles in SW dev by ranking programming language popularity contextually, by separating the project domains and ranking the languages only within each domain. That's a lot more useful than allowing the single dominant project domain to silence the recessive ones, as TIOBE does.
    • jake-coworker 1 day ago
      I think the logical next step is that "XYZ knowledge worker" will become a software engineer of sorts. Not literally writing code, but at minimum encoding processes/workflows into some language.

      If you're a paralegal or an accountant who can't manage their workflows with AI, you're going to be way less productive than someone who can.

      And if you're a paralegal or an accountant who can manage a lot of your workflows with AI, you don't need custom software (hence less dedicated software engineers).

    • chasd00 1 day ago
      I tell my boys (both in HS now), the combination of a specialized skill/knowledge + competent computer programming is the sweet spot. For example, my oldest wants to go into Petroleum Engineering which is great but I told him to still learn software development and get comfortable solving problems with code. Having specialized Petroleum Engineering knowledge combined with being a competent software developer is a powerful combination.
      • randcraw 22 hours ago
        Yeah, I've seen the same thing happen to data miners in the pharma industry. An increasing fraction of young biologists have skill in basic statistical DM as well as web search proficiency sufficient to gather DM code analysis examples, even without using AI. In the very near future I expect almost all R&D exploratory DM will be done by pharma domain experts (biologists and chemists) rather than served by DM experts (computer scientists or engineers).
    • delusional 1 day ago
      > No carpenter is a specialist in drills.

      There's no category difference between being an expert in carpentry vs masonry and being an expert in drills vs hammers. They are both just areas of expertise.

      Going down the path of trying to define what is expert functions and what is "merely" a tool using anything but descriptive technique is nonsense.

      Expert functions are just those areas where using a tool is sufficiently difficult to require expertise.

    • suddenlybananas 1 day ago
      Software engineering isn't a tool, it's the task.
  • vasco 1 day ago
    Are people seriously thinking that you can make yourself dumber by using a chat UI?

    If talking to an AI makes me dumber and limits my career, then all the customer support people that ever existed were in the same or a worse position, talking to dumb humans on chat all day, answering tickets always about the same topics and linking the same docs over and over. This makes no sense.

    • meheleventyone 1 day ago
      You're misrepresenting the potential problem. It's more that using AI stops you from exercising the cognitive processes you would use doing things yourself, and those encompass skills, knowledge and brain function that can atrophy. For an extreme example, look at cognitive decline in the elderly, which can be mitigated by taking part in cognitively stimulating activities.
      • vasco 1 day ago
        Can you comment on other jobs, though? The large majority of jobs require no big mental effort. Even switching from programming to management would go through that. In that light, would it be impossible for a manager to ever become technical again because they'd atrophy so quickly?
        • meheleventyone 1 day ago
          I think you're probably catastrophizing the impact with statements like "it'd be impossible for a manager to ever become technical again", because that's not the likely outcome as I understand things. But yes, people who stop programming for an appreciable amount of time do find it harder to pick back up again.
        • somebehemoth 1 day ago
          The longer the manager is out of the game, the harder it is to return to the game. Returning to the game takes time. Depending on age and income, returning to the game may be impossible for some people over time.
        • delusional 1 day ago
          I can't answer for the other guy, but my answer would be that talking to a clanker is LESS mental effort than being a manager, and that's why your reasoning atrophies so quickly.

          Managers can go back to being technical, because they are still interacting with problems that require human thinking. Token farmers don't.

    • shhsshs 1 day ago
      If you constantly pawn off a task or cognitive load onto someone else (AI or not), you'll eventually get worse and worse at that particular type of thinking. Your overall mind doesn't necessarily get weaker, but you definitely start to get worse at anything you don't regularly practice.
    • nathanielks 1 day ago
      I think you need to read the studies linked in the footnotes. This is a well-studied issue.
    • ramon156 1 day ago
      You can definitely feel it when you talk to an AI vs doing the churn yourself. It's comfortable, simple, it doesn't aggravate you.
    • 4ndrewl 1 day ago
      Pretty much every study says so, so I guess?
  • tayo42 1 day ago
    > The career of a pro athlete has a maximum lifespan of around fifteen years. You have the opportunity to make a lot of money until around your mid-thirties, at which point your body just can’t keep up with it.

    If you believe this about your software career, how do you think you're going to switch into another career as a junior and keep up?

    • chomp 1 day ago
      Easy, I made the switch in my 30s, now I manage software engineers :)
      • hnuser 23 hours ago
        Software managers are being replaced by vibe coders. In the age of AI, managers are irrelevant.
  • dailywriterguy 20 hours ago
    I mean, according to our tech overlords, no one will need to do anything and we'll just sit around and goof off all day. So, honestly, the future is bright.
  • j45 22 hours ago
    Most careers evolve as technology does.

    Other professions evolve too: healthcare, law, and so on.

    Software, being a new field, never really became a standardized profession the way engineering did.

    The goalposts are moving because the standards are moving, because the capabilities are moving.

    Remaining a self-directed learner will remain critical.

  • yobid20 1 day ago
    Terribly written article that fails to make any point. Anyone who's read AI-generated code from the best models and understands how LLMs work knows this statement is complete BS.
  • otabdeveloper4 1 day ago
    It will be for those fixing AI slop software. (In fact, they might need several lifetimes.)
    • pllbnk 1 day ago
      The problem partially is that AI can also fix AI slop. At this point I am in doubt whether code quality matters anymore in most non-critical software. You can ask an LLM if the code has quality issues and refactor to a _better_ version. It will reason through, prepare a plan and refactor. So now with this "better" code you can expect that your LLM will be able to deliver higher quality results but that's all the quality that is needed.

      Actually, at this point I feel that the value in software engineering is moving from coding to testing and quality assurance.

      • ezekg 1 day ago
        In my experience, an LLM "refactoring" autonomously doesn't actually improve code quality, it simply reorganizes the mess into a new mess.
        • missedthecue 23 hours ago
          This is my experience with human developers too so I'm not sure if there's a meaningful difference.
      • bcrosby95 1 day ago
        Sure, but also, AI will always find issues. It will never be mildly satisfied with the codebase and say so.
        • missedthecue 23 hours ago
          All the frontier models tell me when there are no issues. After implementing a feature I will ask the model to identify issues in my implementation, list them, and support each item it identified with technical argumentation and reasoning as to why it's an issue.

          If it doesn't find anything, it says "I didn't find anything."

        • pllbnk 1 day ago
          Not in my experience. It's true that it will always find new issues in a new session, but it's happy to say so when the code is good.
      • otabdeveloper4 23 hours ago
        > AI can also fix AI slop

        No it can't.

        AI knows nothing about software engineering, all it can do is generate code.

        • platevoltage 21 hours ago
          I'm currently being paid by a client to fix his AI slop.
    • incognito124 1 day ago
      Why do people think there will be a market for fixing AI slop software? I see that opinion here and there on HN. The cost of codegen is next to nothing. It makes no sense to spend large sums of money having an engineer fix something that could simply be regenerated over and over until the gods of stochasticity come down in your favour.

      We've entered a period of single-use-plastic software, piling up and polluting everything, because it's cheaper than the alternative.

      • GrinningFool 1 day ago
        When everything is generated on demand, each exploit has to be discovered anew. No more conveniences like common libraries.

        This is sarcasm, but it's probably also going to get sold as a feature at some point.

      • camdenreslink 21 hours ago
        If the AI slop software managed to get a user base, then you can't just throw it away and completely start over. You need to modify it in a way that is seamless for your users. If all code becomes single use, are users generating it for themselves? Do you think a dentist office will vibe code their own scheduling software?
        • otabdeveloper4 12 hours ago
          > Do you think a dentist office will vibe code their own scheduling software?

          They really do think that. (Absolutely bonkers.)
