GitLab could be the perfect case study on AI-powered efficiency improvements. I have never interacted with a piece of software where, for every single problem I found, there was an open issue at least 4-7 years old just being shuffled around by managers adding and removing random labels.
Surely with all of these ridiculous developer productivity gains enabled by AI, they should finally be able to fix all of these ancient issues quickly and clean up the backlog.
Nope, “workforce reduction” thanks to AI again. This charade is getting boring.
On the other hand, most issues rot due to process overhead, not because the ticket is hard.
For example, why are you working on a four-year-old issue, and a trivial one at that, when you're already behind schedule on the tasks assigned to you? Now someone else who has their own things to get done has to review it? And even trivial changes can be annoying to truly review beyond a blind LGTM.
Just one of the many ways that pressure builds against the utopia of burning through old tickets.
Aside, watch out for the double standard we have for AI on forums like this. AI is expected to be so good that it can magically overcome the forces that keep engineers from working on old tickets (which were never related to engineer productivity) and, when AI can't, well of course it couldn't because AI sucks.
And who knows, the fix to some of these issues might be a hell of a lot more work now that the bug has been baked in and the "real" fix has become herculean.
The reason for this is: the only way to show productivity gains enabled by AI is to lay off people and pretend you are doing the same amount of work (while in reality you are severely dropping quality and accumulating technical debt).
I think that in these cases, what they need more than more engineering or AI productivity, is good management. Close issues that get shuffled around too much as "yeah this is too vague", or "nah we can't fix this", or "you know what, fuck you I'm not doing it".
Productivity gains can also be achieved by reducing scope. The coming issue will be that, because of increased productivity (idea -> working code), software becomes too bloated and does too much, because product managers can and will say "yes" to everything. Until it becomes unmanageable.
And that's not a new problem, it's what basically every programming adage / wisdom going back 70 years is about.
Dunno how it is these days, but that reads like Android roughly 2012-2020.
I once found a looooong bug report thread on their issue tracker, 7ish years old, that had all the usual waves of promises that a fix might make the next release, then silence, then repeat, and the usual challenges to the bug’s status every time a release happened. Community members correctly diagnosed the problem in the first couple of years, and by like year 5 there was a (small!) patch posted by a community member, with multiple posters confirming it was good and fixed the issue, which the author and others had been begging Google to apply and get into a release for a couple of years. There’d been no responses from Google folks for a while.
That might be the worst one I saw, but encountering something like that was a few-times-per-year thing in my android app dev years.
I'm certain that if they started doing that without a proper strategy / workflow when it comes to QA, it would be GitHub reloaded. You'd be able to watch the decline in real-time.
But that’s the issue the parent is highlighting, you can’t just throw AI at these problems because the bottleneck is decision making, it always is, and AI is bad at that.
So nothing really changes in terms of product development velocity, it’s just headcount reduction.
But that’s not what their own marketing strategy communicates.
I think what OP means is that these companies keep promising AI is exceptional for one thing but for some reason it's never used for that. The only visible outcome of AI in these companies is that they spend so much on it they end up laying off employees.
Has any of the companies who went all in on AI gotten better at their job because they went all in on AI?
I'm going to be honest with you, I never even considered that the pinnacle of enterprise software would have a public issue tracker (do they?). If something doesn't work the way I expect I just accept it and move on.
Because an enterprise customer might decide it’s a needed fix tomorrow. I’ve seen it happen - 20 year old bug on the backlog and suddenly it jumps to the front of the line.
To be fair, any LLM project gets a lot of stupid tickets, by virtue of a) marketing to users who aren't really developers and b) bad developers being more likely to use LLMs. Both of these groups are more likely to write bogus or non-reproducible bug tickets, as well as feature requests that don't make any sense. My guess is 10% of those 10,000 open issues are actual bugs or sensible requests.
On the other hand, LLMs seem perfect for triage and finding duplicates, so it's still surprising that they've let it get this bad.
https://www.google.com/search?q=gitlab+stock shows their stock price was ~$52 a year ago and is $26 today, so down 50% in 12 months. It's quite possible this is because they weren't making enough noise about their AI strategy.
If investor fears are that AI makes GitLab's business less valuable, including this in their "GitLab Act 2" announcement makes a whole lot of sense:
> The agentic era multiplies demand for software. Software has been the force multiplier behind nearly every business transformation of the last two decades. The constraint was the cost and time of producing and managing it. That constraint is collapsing. As the cost of producing software collapses, demand for it will expand. Last year, the developer platform market used to be measured in tens of dollars per user per month, this year it is hundreds/user/month and headed to thousands. Not only is the value of software for builders increasing, but we believe there will be more software and builders than ever, and we will serve an increasing volume of both.
Looking at their stock it has always been going down, even before AI. How could we know that the reason it's going down now is they were not making enough noise about AI, and not whatever it was that was making it go down before?
>The agentic era affords GitLab the largest opportunity in our history as a company, and we're making the structural and strategic decisions to meet it
>Operationally, we grew into a shape that was right for the last era and isn't right for this one
To meet their largest opportunity ever, they believe they need fewer resources. I'm not sure I understand how that follows.
>We're rewiring internal processes with AI agents, automating the reviews, approvals, and handoffs to speed us up
Is this also in the list of "we create code twice as fast and the bottleneck is review so YOLO no bottleneck?". I've yet to see a convincing justification for this. If anything, if you're going full throttle all the more reason to watch the steering wheel, no?
That said, 8 layers of management is a lot of management, and every line of the message seems like leadership truly believes they are sinking in bureaucracy. Let's see how unneeded those 3 layers they're cutting were.
Didn't they do that? Staples only came in as CEO at the end of 2024, and I assume he has been working on a plan to restructure the company since then. Because their financials are not great, and they have been losing money every year since 2019.
I don't know about gitlab, but tech companies (Meta and Grab) tend to hack off the bottom of the management chain, instead of cutting off the top (aka the people who created the 8-layer system).
Bottom-level teams are merged to form larger teams.
Yeah, they never fire the VPs and SVPs in this process. Just a bunch of the hard-working line managers who are actually involved in the day-to-day engineering work
Companies are shaped more or less like pyramids. If you want to cut a meaningful amount of people from the organization, there's just not enough of them at the top.
If one person at the top of the pyramid earns 100 times what people at the bottom earn, cutting a few of them is still meaningful. Also, cutting a single/few person(s) that are mismanaging the whole organization is extraordinarily valuable too.
> GitLab has at most eight layers in the company structure (Associate/Intermediate/Senior, Manager/Staff, Senior Manager/Principal, Director/Distinguished, Senior Director, VP/Fellow, Executives, Board).
> [...] You can skip layers but you generally never have someone reporting to the same layer (Example of a VP reporting to a VP).
So they're counting the board of directors as a layer above the CEO.
I'm speculating, but they probably also have an unbalanced tree - you'll often see the IT security chief reporting directly to the CEO (because it's important to keep on top of, and they need authority to do their job) but only having 50 people below them in the org chart.
In some corporations you also sometimes get almost-nonexistent ranks created to smooth over a reorganisation. If a level 5 bureaucrat decides to merge the departments of two of their level 4 bureaucrats, they could demote one of them. Or they could make one into a level 4.5 bureaucrat.
That's not what layers refers to. What they mean is how many managers there are between the CEO and an employee. Made-up example: CEO -> CTO -> VP of Infrastructure -> Director of Platform -> Sr Manager of AWS platform -> infra engineer would be 6 layers.
At 8 layers of management (so 9 layers total, with the bottom rung being non-management), 3 reports per manager comes out to 6561 employees on the bottom rung. At 5 reports each, those 8 layers would give you over 300k at the bottom, and 10 each would give you 100m at the bottom.
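For anyone who wants to sanity-check that arithmetic, it's just span of control raised to the number of management layers. A throwaway sketch (the spans are made up, as above):

```python
MANAGEMENT_LAYERS = 8

# Bottom-rung headcount in a strict pyramid: reports_per_manager ** management_layers.
for span in (3, 5, 10):
    print(f"{span} reports per manager -> {span ** MANAGEMENT_LAYERS:,} people on the bottom rung")
# 3 -> 6,561    5 -> 390,625    10 -> 100,000,000
```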
Mathematically that would work out to a lot less than 8 layers of management.
I wonder if they have 5-10 employees per manager at the bottom of the org chart, but a lot of middle managers and manager-like titles mixed through the middle.
If it's anything like the other tech companies, you'll have a bunch of overworked low-level managers with 20+ reports each, and then somewhere up the chain you'll find directors and VPs chilling with 1-2 reports.
Solution: fire half the line managers, and make the rest also do IC work.
If anyone has a VP-level position open, I'm willing to send you my resume. There is a salary level at which I am willing to do work entirely without shame.
That's a really crazy number of employees considering they have one product that barely seems to change and is at best on par with similar products created by comparatively minuscule teams (Phabricator, Forgejo).
I'm on board with your gut that this feels more YOLO than careful, but to be fair, in the engineering world fly-by-wire is very much precedented. I'm specifically thinking of the B-2 bomber, which is essentially unflyable without a computer between the inputs and the outputs. Partially that's just keeping the plane from turning into a frisbee by reacting faster than a human possibly could, but it's also treating the control inputs as the intent and manipulating the control surfaces programmatically to make that work. It's not quite the same thing of course, but I think there's some carryover.
Still. Not a huge fan of this announcement or the general ways the landscape is evolving these days.
After CVE-2023-7028 (account takeover via password reset, IIRC you just had to add a semi-colon between the correct email and the attacker email and it'd email both) was exploited against my cluster, the boasting about fully-automated changes and reviews scares me. I hope I'm far from the only one that hasn't forgotten issues like this.
I'm aware that the defective code was not written by AI but nonetheless, GitLab is what stands between many small organizations and their most precious resources. I was fortunate that 2FA stopped the damage, but what's going to happen the next time? What if my organization is permanently damaged because we taught the machines to go fast and break things, too [1]?
[1] VPN is an option but we're a non-profit with a number of non-technical users, so admittedly we're caught in a balance between making it harder to do things. As much as WireGuard is awesome, there's still a barrier.
> [1] VPN is an option but we're a non-profit with a number of non-technical users, so admittedly we're caught in a balance between making it harder to do things. As much as WireGuard is awesome, there's still a barrier.
I would love to help a non-profit, so I'm curious: what are your thoughts on authentik/authelia and others? Could they help with any of the use cases you're describing? I'd love to have a more in-depth discussion!
Also, thanks for working at a non-profit. I'm not entirely sure what it's about, but thanks again to you and all the other hard-working people at non-profits working for a better world!
GitLab never ceases to amaze me in terms of just how bad their product roadmap is. Practical things like CI improvements are put off in favour of UI rebranding in unicorn colours. Yet good tooling is exactly why people used to pay for GitLab. For better or worse, maybe this can finally change and we can get more customer-oriented roadmaps again.
I actually like GitLab's new UI, but their frontend is far too laggy given its complexity, especially since most of their pages are actually server-side rendered. It doesn't help that they use a weird combination of Vue 2 and jQuery in their codebase.
I think that as a corporation promoting the use of AI, they should actually be AI users themselves. They should just rewrite that laggy UI in Svelte, Solid, or even vanilla JS. Any of those would work.
The new UI is terrible, and the most important change they've done this year is ... drum roll ... renaming merge requests as "work items", because reasons.
Having said that, UI gripes aside, it works fine as a less complicated replacement for github.
> Software will be built by machines, directed by people. AI is the substrate on which future software gets built. Agents will plan, code, review, deploy, and repair.
"The Machine Stops" by Forster [0], anyone?
Honestly, I can't believe how people repeatedly ignore, or simply don't know, the warning signs put up by the people who came before.
Yes, it's science fiction, but so is 1984, Brave New World and Pump Six.
When will we go through something between 2001[1] and Tacoma[2]? Will we ever learn?
With its current AI setup, GitLab still couldn't make anything that could be called great UX, so I can't wait to see what they can do by eliminating the remaining human factor. I personally can't wait to see tickets like these [0] stay open for months with bots telling you that everything will be alright.
This is quite an aggressively optimistic vision for the future of the software industry to tuck into a "workforce reduction" announcement:
> The agentic era multiplies demand for software. Software has been the force multiplier behind nearly every business transformation of the last two decades. The constraint was the cost and time of producing and managing it. That constraint is collapsing. As the cost of producing software collapses, demand for it will expand. Last year, the developer platform market used to be measured in tens of dollars per user per month, this year it is hundreds/user/month and headed to thousands. Not only is the value of software for builders increasing, but we believe there will be more software and builders than ever, and we will serve an increasing volume of both.
Also notable that the workforce reduction they describe doesn't appear to target engineers - they're "nearly doubling the number of independent teams" in R&D and "removing up to three layers of management in some functions".
What is this based on? The only thing I can think of is AI coding tools but only a few companies do it properly. I don't see gitlab capturing any of that spending
Also the whole "removing layers" thing. Today's Prof G Markets video was about the topic. AFAIK it was the Coinbase CEO saying the same. Do these people get together to discuss their talking points? Or are they signalling to investors?
Presumably based on the fact that the OpenAI/Anthropic $200/month plans are selling like hot-cakes, and it's not often that a new software category comes around which attracts those kinds of per-seat prices.
So much this. GitLab’s values and ethos were completely incompatible with becoming a public company. Sid is always seen as the good guy but it was his narrow minded greed that led here.
The fact they can't capitalize on the current trainwreck of GitHub speaks volumes. If they had the right product people would be throwing money at them.
Gitlab used to be about as reliable as github (ignoring the security oopses they used to have).
They simply don't have (or didn't) the skills to scale. They were talking about using Ceph to run things (which gives you an idea of how green their infra team was).
Are you implying they should create more in-house solutions, or that specifically Ceph is not a good solution and there is some other 3rd party solution that could be used instead?
It's slow, large, excessively complex, and not that resilient to failure.
You want a bunch of NFS machines backed by ZFS on NVMe, with a central jumping-off point that allows sharding (this is critical so that one or more NFS servers can fuck up without killing access to everything else).
Most companies are signing up to the idea that GitHub will fix their issues, rather than going through the operational pain of migration. Everyone that I know jokes about GH downtime, but has zero internal talks about migration. Obviously a small data point, but GitLab going this route shows not a lot of people are switching.
I've never actually seen that status page before, and I'm not clear what it's measuring. My company pays for Enterprise Cloud, and we see all the same downtime as what gets posted to https://www.githubstatus.com/
I'm not sure there's a lot to capitalize on, considering the state of hosting OSS development. But this really is a case study on watching your biggest competitor face plant into a wall, and responding by breaking into a head first sprint.
A lot of the conclusions they're drawing in this post about the "agentic era" seem quite misguided and some don't really seem to make sense.
I have no doubt GitLab has too many employees and can benefit from being a more focused company, but it's tiring reading these layoff posts so chock full of buzzwords. I guess they're desperately hoping if they prognosticate about AI enough it will placate the investors.
Let these people keep betting their companies, futures and net competency on text autocomplete. The future is bright for me and everyone else that isn't falling for it.
Reminds me of when microwaves first came out. Investors decided to go all in on "vibe cooking" (lit. cooking with vibrations), complete with microwave ranges (no conventional oven), until the public wised up to the fact that there was in fact no cooking (Maillard reaction) involved in their vibe cooking. Took about 15-20 years, but microwaves finally took their rightful place as a utility appliance rather than what they were touted as (a centerpiece). Pick up a microwave cookbook from the 50s for some laughs.
I sure hope you're not mocking the classic "Microwave cooking for one" book!
The Maillard reaction is very possible in microwaves,
but they use microwave-specific crockery. I think the vision was possibly killed by people not wanting to maintain a second set of crockery.
That book came out much later than what I am talking about, when many workarounds like turntables (and indeed, specialized crockery) were made available. This thing [0], for example, did not even have a turntable, and yet was created in an "all in" form factor for the American home. It was in production for nine years.
Perhaps we can liken these auxiliary advances to agents and harnesses in the analogy. In the end, despite the unbridled optimism from certain backers, we never solved the fundamental issue with microwaves: that they use electromagnetic waves for cooking, and that electromagnetic waves have certain undesirable properties for this application.
They sure are great for reheating food though. The problem is that a lot of developers think they are Michelin chefs when in reality they are Olive Garden cooks reheating frozen meals.
But I think the argument is that microwaves are basically for heating things up and for essentially steaming a lot of vegetables. (I'll do one ear of corn in the microwave with pepper and spices.) I do have a thick microwave cookbook from the 70s or 80s, but I've mostly only ever used it for vegetable cooking times, and probably less since I started roasting vegetables in the oven a lot of the time. I have cooked some of the other recipes, but not for a very long time.
I understand that a lot of people don't have a lot of choice, but I use mine (I actually got a 4-in-1 when I had to replace the old one after it burst into flames, and it's somewhat useful as a second oven).
It just made me realize why I don't have those fond memories of my mom's cooking. When we got our first microwave she went all in on the vibe cooking, and it took her years to realize how dumb it was.
I hope my kid doesn't get the same kind of memories about my weekend projects.
You are obviously right and I see examples of it everywhere.
E.g. I asked Claude Opus 4.7 (the latest/greatest) the other day: “is a Rimworld year 60 days?” The reply (paraphrased): “No, a Rimworld year is 4 seasons, each of 15 days, which is 60 days total.”
Equally, it gets confused about what is a mod or vanilla since it is just predicting based on what it read on forums, which are clearly ambiguous enough (to a dumb text predictor).
And that is the reason why it is only autocomplete. You probably had less context than the poster before, so it could not mix stuff up.
The poster before either had more memory or the search went through more topics. And btw, it's really hard to give access to only some things.
Calling the technology "text auto complete" is not productive to the discussion. Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction, but now it's commonplace. As is the ability to write long-form text so hard to distinguish from the real thing that placing an em dash in your text will cause an uproar on this forum. You can describe things by their fundamental functions and make many things sound elementary but I find it counter productive given the capabilities we've seen from this technology
> Calling the technology "text auto complete" is not productive to the discussion.
If pointing out the flawed approach to making something more productive isn't productive, then what do you consider to be productive?
> Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction
Cobol was sold to people on the idea that anyone could create something with fuzzy human readable description that would result in executable code. That was back in the 60s.
What lessons did we learn?
1) Leaving things to the people who make fuzzy human readable descriptions turns out to be a terrible way to have things implemented.
2) Slowly and deliberately thinking things through before, during, and after implementation always leads to better results.
It's a lesson that keeps needing to be re-learned by people who don't/can't look at things through a historical lens.
It was the same with cobol, as it was with programming in spreadsheets in the 80s, as it was with the nocode movement in the 00s, as it is now again with LLMs in the 20s, and it will be again with a future generation in the 40s.
---
> As is the ability to write long form text, and be so hard to distinguish from real that placing an em dash in your text will cause an uproar on this forum.
Long form text generation that is hard to distinguish from human authored text also goes back to the 60s.
That's when we got the first instances of the Eliza effect.
> You can describe things by their fundamental functions and make many things sound elementary but I find it counter productive given the capabilities we've seen from this technology
If you ignore all the complexity and discard every detail, it’s literally just a box. Yet curiously you aren’t living in a cardboard box, or an aluminum shed.
The point, which you know and are being willfully ignorant about, is that it’s more complex than that. And you’ve neatly discarded the detail that they’re multi-modal.
I will freely admit though, analogy is useless when interacting with someone who has already made up their mind.
I'm pretty sure it was sold as a house. That you understand that you can think of it as a box doesn't make it not a house. That's the point of the analogy.
It's literally how they work. I think the magic that none of us really expected is that our languages, human and computer, are absurdly redundant. But I think it makes sense, in hindsight at least. When we say things it's usually not to add novel or unexpected information that comes out of nowhere, but to elaborate or illustrate a point that could often be summed up in 5 words. This response is a perfect sample of such.
Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed those you still choose to be stuck on the "it's just text prediction" then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
>> Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed those you still choose to be stuck on the "it's just text prediction" then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
Woof, you're sounding mighty aggressive for someone with such a fundamental misunderstanding of the technology you are defending. Have you ever even actually implemented a system around an LLM, or do you practice ~~voodoo~~ "prompt engineering"?
> I can point you to ReAct loops and tool-calling and agent-based systems.
Those are all implemented - quite literally - by parsing the *text* that the LLM *autocompletes* from the prompt.
Tool calling? The model emits JSON as it autocompletes the prompt, and the json is then parsed out and transformed into an HTTP call. The response is then appended to the ongoing prompt, and the LLM is called again to *autocomplete* more output.
"ReAct loops" and "agent based systems" are the same goddamn thing. You submit a prompt and parse the output. You can wrap it up in as many layers as you want but autocomplete with some additional parsing on the output is still fucking autocomplete.
If you're going to make such strong assertions, you should understand the technology underneath or you'll come off looking like an idiot.
> Tool calling? The model emits JSON as it autocompletes the prompt, and the json is then parsed out and transformed into an HTTP call.
No. Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute the tool (meaning they build command-line arguments, run the command-line app, analyze the output, assess the outcome) as subtasks.
And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how they call the tool until they reach the goal.
> And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how they call the tool until they reach the goal.
Yeah, but fundamentally all of this is implemented as next token prediction, given the context (which the tool results are).
Honestly, it's pretty amazing how much we can do with next token prediction, but that's essentially all that's happening here.
Is "text autocomplete" supposed to be an insult? To text auto-complete a physicist I would have to understand physics as well as them. To text-autocomplete your words I would need to model your brain.
It's not attention that's the problem, it's how we train networks offline with backprop.
LLMs are the most successful form of neural network we have, and that's because they are token prediction machines. Token predictors are easy to train because we're surrounded by written text - there's data nicely structured for use as training data for token prediction everywhere, free for the taking (especially if you ignore copyright law and robots.txt and crawl the entire web).
We can't train an LLM to have a more complex internal thought loop because there's no way to synthesize or acquire that internal training data in a way where you could perform backprop training with it.
Even "train of thought" models are reducing complex thoughts to simple token space as they iterate, and that is required because backprop only works when you can compute the delta between <input state> and <desired output state>. It can't work for anything more complicated or recursive than that.
Now this is literally something that occurs because it is text autocomplete, and because of the inherent issues of token-based large language models. So you are literally right :D
My point is that AI can have its issues and it can have its plus points (just like text autocomplete, but some suggest it's on steroids).
The issue to me feels like we are hammering it in absolutely everything and anything, perhaps it should be used more selectively, y'know, like perhaps a tool?
Yes, AI should be used as a tool for very specific things. Once it's trained on everything, it's completely useless. Anyone who is trying to use it for everything will fail. I predict by 2030 (if not much sooner) the AI bubble will burst. The only good outcome will be that all this hardware will be liquidated for pennies. Mark this prediction, it will happen ;-)
This retort doesn't make any sense. Take humanity back perhaps 40k years ago and language did not even yet exist. Our token base was 0. Put an LLM in that scenario and it will endlessly cycle on nothing and produce nothing, stuck in a snapshot in time. Put humans in that situation, and soon enough you get us.
This is like saying that somebody speaking Chinese is just playing the Chinese Room [1] experiment. The only reason it's less immediately obviously absurd here is because the black-box nature of LLMs obfuscates their relatively basic algorithmic functionality and lets people anthropomorphize it into being a brain.
If that is the argument though, current AI aren't just autocomplete - because we could reasonably show an AI an image or a video and have them call a tool rather than return text. That'd be comparable to a pre-language human.
> Take humanity back perhaps 40k years ago and language did not even yet exist.
This is not quite accurate. The human lips, throat etc have evolved to be better at producing speech, which indicates that it's not that recent. And that it was a factor in the success of groups who could do it better than others.
It likely started "no later than 150,000 to 200,000 years ago."
Sufficiently good text autocomplete is indistinguishable from intelligence to an impartial observer, and that's the only honest criterion for intelligence.
I'm a little shocked that people discussing this topic could be so far apart! I'm completely serious.
Have you ever thought about how you would determine if an arbitrary given entity is intelligent or not? I think you'll agree it would require some kind of test. You might agree that the test would have to involve bidirectional interaction (since otherwise it would be impossible to distinguish an actual person from a recording of one).
> It's literally text autocomplete. You can dress it up however you want but it takes input text and outputs the most likely next sequence.
Last year this level of ignorance and cluelessness was amusing. Nowadays it's just sad and disappointing. It's like looking at a computer and downplaying it as something that just flips switches on and off.
> Yeah they all want to fire the guys who can make sense of the mess the vibe coders are doing and try to stop it.
You're grossly inflating the level of contribution from your average software developer. Are we supposed to believe that the same people who generated the high volume of mess that plagues legacy systems are now somehow suddenly exemplary craftsmen?
Also, it takes a huge volume of wilful ignorance and self delusion to fool yourself into believing that today's vibecoders are anyone other than yesterday's software developers. The criticism you are directing towards vibecoding is actually a criticism of your average developer's output reflecting their skill and know-how once their coding output outpaces or even ignores any kind of feedback from competent and experienced engineers.
What I see is a need to shit on a tool to try to inflate your sense of self-worth.
I've seen which developers became vibecoders. They were the people I'd have wished to get rid of.
The ones who never acknowledge a mistake even if the process is crashing; the ones who put "return true" in a test so that the test doesn't execute and will insist that you broke their code if you remove the return true and when the test actually runs it fails; the ones who read a blog post about some new thing and decide we need to do like that; the ones who will write code that fails and then be nowhere to be seen when there is customer support to do.
> Gitlab is looking to lay off people like him. All major tech companies are currently raiding to fire such employees.
Gitlab has been strapped for cash and desperately seeking a buyer to cash out for years.
If anything, the LLM revolution represents an opportunity that Gitlab is failing to capitalize upon. They have a privileged position to develop pick axes for this gold rush, but apparently they are choosing to dismiss themselves from the race altogether.
Gitlab's decision is being taken in spite of LLMs, not because of them. Enough of this tired meme.
Ahh, are we there yet? Has non-deterministic computer use eroded your mind so much that you are starting to question the binary system? You know, the insight that computers are something that flips switches on and off is rather old, and I have heard it uttered (although slightly humorously) several times already, nobody ever raising any eyebrow hearing it.
Not true, I tried just now. Took 30 seconds of due diligence. You could have done this too. Do better.
The problem is they’ll do what you ask. And if you are the type of non-curious person who replies “ Autocomplete only 'knew' how to output a scraper...”, then you’ll tell it to make you a scraper instead of ask what your options are for getting HN data.
If you seriously cannot tell what is the difference between a human being and a LLM and think they are both "autocompleters", you know very little about both humans and LLMs.
This thought that “maybe we are just next token predictors too” is not particularly clever. Most of us have thought about that, but a bit of experience with LLMs makes it obvious that’s not what’s going on here. I think it’s a bit like listening to a recording of a person and swearing there’s an actual person in the recording device because the audible output is indistinguishable from the real thing. Why would you do that? You wouldn’t unless you have no idea how a recording device works, in which case it seems like magic.
> a bit of experience with LLMs makes it obvious that’s not what’s going on here
I feel like that overstates the point quite a bit. There's a lot that's similar: neurotransmitter release is stochastic at the vesicle level, ion channels open and close probabilistically, post-synaptic responses have noise. A given neuron receiving identical input twice doesn't produce identical output. Neither brains nor LLMs have a central decider that forms intent and then implements it. In both, decisions emerges from network dynamics, they're a description of what the system did, not a separate cause (see Libet's experiments).
Now pretty clearly there's a lot that's different, and of course we don't understand brains enough to say just how similar they are to LLMs, but that's the point: it's an interesting thought experiment and shutting it down with a virtual eyeroll is sad.
A one-way audio channel is indeed too weak for a person to distinguish a person from a recording, but a bidirectional audio channel is easily strong enough: the person can verbally ask the person-or-recording a question and see if it is acknowledged.
I claim that a modern frontier LLM can be given simple instructions that make it impossible for a person to reliably distinguish it from a person over a bidirectional text-only medium.
>Machine-scale infrastructure. [...] Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. [...] Git itself is being reengineered for machine scale.
Git itself is so far down the list of bottlenecks that do or could hamper LLM-driven development, even projecting years into the future...
Git has always been one of the biggest perf bottlenecks inside of the product.
First for any scaled deploy we recommended NFS. We were young and dumb and it was too slow. (We’ve all been there)
Then we went to an RPC model with gitaly and even unwrapped some of the git calls inside of that to speed it up.
Just a few months ago we had a large customer with thousands of devs and a large monorepo grind their deployments to a halt because of a cloning strategy change that introduced an accidental 10x in git calls. Git itself was the bottleneck because it’s not designed for this scale and speed.
For enterprises where thousands of developers are contributing code via git to a centralized system of record, and firing off 1000s of CI jobs, Git is absolutely a bottleneck.
Now with LLM technologies we should easily expect a 5-20x code volume increase on the conservative side. Git is being stretched to its perf limits.
There’s a familiar saying: “Markets can remain irrational longer than you can remain solvent.” I think that applies here as well. Everyone (customers) wants AI; investors demand it. It may eventually calm down, but I’m sure many companies will be left behind and ultimately fade away if they don’t keep up until then.
I don't think it would be absurd for them to worsen. If LLMs cause discourse to worsen, but also grow and change, then the trainers are in a conundrum of ignoring new training data or losing track of the zeitgeist.
1. AI free training sets no longer exist. This might degrade quality, although some claim that it will not.
2. Cost. Right now they are burning a lot of money to convince people it's good. But they might not be able to keep it up forever and need to increase prices (which few will want to pay) or degrade the quality to save money.
The memo also says they're eliminating a lot of middle management tiers which has been a theme for a lot of companies recently. It's also been a theme historically. Really has nothing to do with AI. It's just the classic executive view that they are paying people who sit in meetings and write emails instead of writing code. Blissfully unaware that meetings and emails are how big organizations function.
> Blissfully unaware that meetings and emails are how big organizations function.
I don't know, I've seen more big organizations that have a dysfunctional amount of middle management and "meetings about meetings" than ones that truly benefit from that culture.
Your argument doesn't make sense. They literally explained why they are doing it. They are looking to remove those who can't or won't keep up with AI. That can be managers but also engineers. That's what most companies right now are doing.
Right but naturally that's not actually why they're doing it. In actuality, it's a layoff - they did not go through and analyze which employees are "keeping up" and which aren't, don't be so naive.
This, like virtually all layoffs, is for economic reasons. Of course you can't say that because that reflects poorly on your growth and makes your investors uneasy and yadda yadda yadda. But what do investors like? Hm? AI!
Oh! Oh!!! This is strategic, you see, so we can use even more AI, yes yes that's right mhm.
> they did not go through and analyze which employees are "keeping up" and which aren't, don't be so naive.
They do on the org level. That's not news for anyone who has worked at the upper-management level in corporations. Rule no. 1 is you keep your mouth shut about anything there. And of course it's for economic reasons: it's a business, not a charity providing lifelong employment for employees who aren't aligned with management goals. Management tells stories depending on who asks. The levels below execute them (by identifying those who aren't aligned).
And people wonder why there is so much push back against AI. The last thing leadership should do when laying off people is use the term AI. It's the most tone deaf thing you can do.
We don't live in the same world as they do. Saying AI out loud makes line go up, not down. Investors are still eating this shit up, for now at least...
In the 4 years I hosted internal services, Gitlab was the only service that ran hybrid. Wish they could get their act together and focus on actual engineering again.
If anyone at Gitlab management is reading this: getting your microservices to run fully stateless in a Kubernetes cluster should be the #1 goal. No disclaimers about potential risk. It's been 5+ years. Get it together. Stop bolting on minor package management features no one is going to end up using anyway.
> Our transparent restructure process creates uncertainty that is real and it's hard, and I'm not going to pretend otherwise. I ask that you reflect on the why, what and how and engage your manager in a real conversation about the work, the questions and concerns you have, and what the next chapter looks like for you. Your manager may not have all the answers, because they too are going through this period of uncertainty. The conversation still matters and your input shapes how we land as a team.
Setting aside the whole "I'm not going to pretend otherwise" bit, which reads suspiciously like Claude, I don't understand how this is supposed to make employees feel any better. No one knows what's going on and through talking we'll figure it out? Mmmmmmhmmmmmm.
Wow gitlab. Right when everyone was looking to see if you could lead with all the fails at github, you basically said "We're going to throw our source at ChatGPT and see what happens"
Right? I was seriously considering migrating everything in our company from GitHub to GitLab. Now I'm seriously considering self hosting our git instead.
There are a lot of downsides to self-hosting your git as well. Especially if you need to deal with high availability, scalability beyond a single server, and/or being open to the public Internet.
I'm not saying you should never self-host your git server, but it's not for everyone.
My bar for self-hosting something isn’t “these basic standard features work”; they had fucking better.
I get self-hosting for security, compliance, and retention reasons, but for almost everything else it seems questionable for any use I would consider normal.
I just look at the pricing and really start wondering whether it's really a multi-hundred-euro-per-year-per-seat product... Frankly, as a consumer, those pricing levels just seem detached from reality.
I don't know if that really solves your problem if the main trunk of GitLab development is being run through several AI slop machines before they push it to what they call stable, and then you download that (or use a Debian or Red Hat package for GitLab built from it) and self-host the results of the AI slop fest on your own machine.
Oh well, they really do their best to alienate people as well. They just completely overhauled their UX, and after that update people at my company were so confused they couldn't even open new issues anymore, because everything was somehow renamed to "work items". I kid you not: literally two decades of UX that people were used to, just thrown out the window. It's absolutely mind-boggling. The feedback to this is devastating:
As someone who is raising money from VCs, I feel really sorry for large VC-backed companies right now. What you see here is the Product-VC tension of the AI era, and in a large company it's devastating.
Users want a product that delivers the value they are looking for, VCs are looking for infinite AI scale, these do not meet. So founders need to present two different values and visions, one for customers and one for VCs.
In a small early-stage company you can pretty easily hide each side from the other, so you can deliver value to your customers while dancing the VC dance, but as you get larger it's harder.
I think founders will endure and VCs will calm down at some point, but there is going to be some suffering along the way.
Oh, and have you heard that they built Claude Code with only 20 people? (Ignore the 12 years of AI research expertise head start, and that Anthropic now has thousands of developers.)
The selling point would be vertical integration and that you don't need to stitch together 12 different SaaS CI products all attached to your source code, but just deal with one vendor (GitLab).
On-premise Git forge with integrated issue tracker, CI/CD platform, and probably 20 other development-adjacent tools. It has a ton of features, so for sure it's not easily reproducible.
One issue I have with them is that they pretty much had all the features I use a few years after they started, and they have, for the most part, just kept adding new ones of dubious value instead of polishing the core ones.
> planning to reduce the number of countries by up to 30% where we have small teams
One of the really interesting things about GitLab was that not only did they have employees in a large number of countries but they also published their employee handbook which helped show quite how much work it was to support that:
They even used to have a public payroll.md page detailing how payroll worked in multiple countries - they moved that into their private docs a few years ago but the last public version is here: https://gitlab.com/gitlab-com/content-sites/handbook/-/blob/...
UPDATE: I got the countries piece wrong. The linked OP says:
> Reduced operational footprint: We’re reducing our country footprint because operating in nearly 60 countries does not allow us to give every team member a great experience. We anticipate reducing the number of countries by 30% focused on geos where we have only a handful of people or fewer. Team members who are in good standing and would like to relocate are welcome to do so. We'll continue to serve customers in those markets through our partner network where appropriate.
I said they operated in 18 countries, so clearly my impression was out-dated and incorrect.
Also "We anticipate reducing the number of countries by 30% focused on geos where we have only a handful of people or fewer" suggests to me that it's a 30% cut to countries with "only a handful of people", not a 30% cut to countries overall.
Indeed. I've only seen that "quality, depth and pace of innovation" is in inverse proportion to the adoption of slop machines within an org. The more sloppy they get, the more the output stinks to high heaven.
> Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. We're doing a generational rebuild of the underlying infrastructure to handle agent-rate work as the default. Git itself is being reengineered for machine scale. The monolith is giving way to modern, API-first, composable services
Two big red flags here.
First git itself is distributed and built for scale.
I guess they mean “gitlab” instead of “git”. But such a huge mistake would never go unnoticed.
Are they going to rebuild git??
Secondly: a big rebuild of the monolith into services. Firstly, there is nothing wrong with a modulith. Secondly, a “rebuild” will cause a lot of busy work without immediate value for customers.
And first of all: this announcement is done due to the stock price not AI
The productivity increase with AI is inflated because they want their stock price up.
Sell Gitlab stock while you can.
The leadership team has no clue what they are doing.
Sadly, non-engineering leaders buy into this dogma. AI is very useful but in my experience doesn’t 10x if you don’t YOLO it.
Git is not designed to handle 1000s of clones and merges to a single repo, even at minute scale.
Sure, you are right. Git allows you to keep distributed state and eventually reconcile that.
Customers are trying to do that at absurd human scales right now without AI. Git itself is a bottleneck for large enterprises with large repositories and large CI configurations.
> First git itself is distributed and built for scale.
There are different dimensions of "scale" - like handling large monorepos, orders of magnitude more commits, tighter latency requirements (for agentic use, e.g. for agentic history navigation)...
> Sadly, non-engineering leaders buy into this dogma. AI is very useful but in my experience doesn’t 10x if you don’t YOLO it.
It makes you have 10x more errors if you YOLO it ;) especially at a scale even remotely comparable to gitlab :/
Doesn't really inspire the greatest of confidence when they are literally dropping the ball on one of the greatest opportunities as github is being ensloppified.
Sometimes I wonder if I am more passionate about my $7/yr VPSes and the websites running on them than $7 billion companies are (GitLab has a market cap or net worth of $4.36 billion; the enterprise value is $3.10 billion [0], to be exact).
"Break things and move fast" should work when you have 1000 users on your website, not 1000 full-on enterprises (probably more for gitlab).
> I guess they mean “gitlab” instead of “git”. But such a huge mistake would never go unnoticed.
> Are they going to rebuild git??
These comments make me realize again how you all (those who were alive then) must have felt during the pets.com and dotcom mania. Some of these sentences read almost like Onion video titles. It's all so weird at a certain point. I am unsure how to feel about this.
There's a 'github down' post here every other day.
The ball is right there, bouncing alone in front of the goal, and they just have to position themselves as "we're the stable ones" to score that market when the exodus inevitably happens.
This is what happens when decision makers are out of touch.
So many things they could be doing, to make people buy into their services. For example they could simply run campaigns about how they promise to never use customer and user repositories for AI training. Or they could show better uptime statistics. Their CI language is better than Github's too.
If anyone gave me a choice between Gitlab and Github, I would go with Gitlab. But if I had additionally the choice to use Codeberg, I would choose that.
Maybe they are just not looking to grow. If they made such a statement, that would actually be a pleasant surprise. No hunger for "infinite exponential growth", just to impress investors? Great! That's a fat plus in my book!
I was on gitlab up until nov last year. I don't really miss it; have yet to experience issues with github.
Gitlab pricing was bonkers. It always felt like their sales team were trying to play gotcha with us over the years with pricing schemes that would milk us for money.
> The ball is right there, bouncing alone in front of the goal
Their pitch is not to you, the dev. But, to the investor class. We are in this funny place in the market where you can make more money by catering to the investor class than to customers. In other words, an upside down world.
The big thing on their roadmap is rearchitecting for something that can handle the increased load, though. Like, they're clearly paranoid that if they don't move fast, they're going to be just as busted as Github.
TBH the open source nature of gitlab means that any sufficiently large and clued-in hosting company (think: servercentral/deft/summit, whatever it's calling itself these days, or one of its competitors) could put up gitlab instances for people to use and meet more nines of uptime than github. It doesn't have to be the gitlab company itself running servers with the httpd and back-end database.
I understand the meaning, however, in that they're well positioned by having the company name and domain name, same general way that non-technical people will pay wordpress.com to host their blog/small website because it's very easy, rather than DIYing it or paying a 3rd party.
GitLab isn't open-source. It's "open-core". Third parties hosting GitLab instances don't have access to the same range of features that GitLab-the-company does.
"GitLab Community Edition (CE) is available freely under the MIT Expat license.
GitLab Enterprise Edition (EE) includes extra features that are more useful for organizations with more than 100 users. To use EE and get official support please become a subscriber.
JiHu Edition (JH) tailored specifically for the Chinese market."
Personal opinion, but I think a great deal of the people who are presently overloading github with one person created vibe coded projects would be just fine with the "CE" feature set.
I just rolled out CE in our small org, it is a nice step up from Free GitHub, there are Wikis, and no uncertainty about the runners. Founders like it better because their IP is on their own servers now.
I find it a bit concerning that this piece focuses so much on customers and shareholders... I know I don't pay, but perhaps sometime I will, and I am learning GitLab and applying at large orgs as a GitLab consultant. All because of CE... So I hope it will stay. It is a nice and very complete on-ramp to EE.
GitLab was never going to be the ones to take the mantle GitHub left on the ground. They’re a “clone” company and have very few original ideas of their own.
To be fair to GitHub, "GitHub" Actions is just Azure DevOps Pipelines wearing a mask. Which I think explains a lot about its quality as a feature. It was brought in as a rushed copy-paste of the existing Azure DevOps feature very quickly post-acquisition.
I have to regularly use Azure DevOps and the whole platform is painful, and now is rotting on the vine. I hear there is internal strife at Microsoft between Azure DevOps and GitHub products.
To build good software you need to take the time to make your existing features work well, and improve or prune the ones that don't. In other words, it is craftsmanship.
The American corporation and its values are anathema to craftsmanship. You can ******* a **** all you want, it's never going to turn into gold, but your hands will be covered in crud.
> Interpersonal excellence: individuals who are good humans, embrace diversity, inclusion and belonging, assume good intent and treat everyone with respect
Yep. These companies forget that we can use AI too, to unpack these ridiculous corporate statements in record time to get right down to the point: We're going to dump all our values, and not even going to pay lip service to things like integrity, transparency, or diversity anymore.
I'm not even sure it even means "work harder, not smarder and wach yor seplling." In my experience it's more of a shibboleth to the new masters to let them know they're down with creating a top-down organization where information flows only one way.
Were I to have crafted this post, it would have included things like
"We ask our employees, customers and investors time to prove ourselves to you again as we re-commit to listening to our stake-holders and ensure our organization is properly re-positioned to execute our continued plans to deliver the best possible service..."
But instead it comes across as "someone read an article about Amazon's two-pizza team rule and we figured there were worse things to try."
If they're asking you to do more for less pay and with fewer coworkers to help, don't feel bad if the company code turns into unmaintainable, unintelligible garbage. They can't really stop you. It's just AI. Something is going to have to give.
Every IC ought to use the present day as the opportunity to build a nimble competitor to their old employer (or whatever industry incumbents they want).
Having been in some of these values meetings, I really imagine it went like this: someone wanted speed, and someone else wanted quality. Sorry, I mean Speed and Quality. Many people said there is a tradeoff between those two things, and only one thing can be first.
Some brilliant businessman: "I know, we'll combine them. We want Speed _and_ Quality." Thus, "Speed with Quality." Tada!
Values are a tradeoff: only one thing can be first. Trying to duck that is stupid.
The funny thing is you absolutely can do things which improve both speed and quality at the same time (basic good engineering), but they're like 3 or 4 orders of effect removed from those outcomes and impossible to do when you have someone breathing down your neck asking "does this make us go faster" at every step of the way.
Also "our velocity is 3x higher than it would be in the imaginary invisible universe where we made worse decisions 6 months ago" is impossible to measure, whereas "we cut a bunch of corners and shipped a piece of garbage on an arbitrary deadline" is very measurable.
There seems to be a massive push against DEI over the last few years in the tech industry globally, despite it being one of the industry's greatest strengths.
How well would the tech industry do if they fired all the autistic people for "not being team players"? How many dev teams are there without at least one furry, trans person, or socially awkward geek?
The irony is that DEI promotes merit by forcing companies to justify hiring beyond basic “cultural fit” vibes.
I’ve been in the business and seen a ton of hires on vibes. DEI actually asked people to expand the talent search, not hire anyone unqualified (which is what the anti-DEI folks are desperate to have us believe it did).
I predict some major EEO lawsuits will eventually bring the pendulum back in the other direction because my sense is that the return to vibes hiring (and RIF-ing) is resulting in very actionable discrimination cases.
If DEI operated on merit, there would be no need for the special new concept of DEI.
I've seen many cases where HR stalls hiring until the most qualified candidates move on, prefilters insufficiently "diverse" candidates out of the pool presented to teams, or implements internal quotas to meet external funding or contract requirements.
Not to mention the actual external requirements for "diversity" from public tender processes, government-backed funding bodies, and the politically protected mega-wealthy.
The enthusiasm for disparaging DEI comes with no articulation of how they plan to quantify 'qualifications' in a non-biased manner. My sense is that they don't plan to do this at all, they don't have a plan, and they are going to blunder into patterns of discriminatory practices that DEI frameworks were protecting them against.
> It’s not like we have a term like “individual contributor” or anything in the industry.
Perhaps I'm missing something here.
To me "individual contributor" means anyone who is NOT: A (technical) "Lead", "Chief", "Architect", or (possibly) "Staff" anything, and has no management or team-leader responsibilities.
I'm not saying there can't be very clear counter-examples; I guess the overall sense though is that "being a team player" is generally considered an attractive quality in any employee. If A is a team player and B isn't, and they're otherwise equivalent, you're probably going to take/keep A.
It's not like (most) hiring managers put "not a team player" in the pro column.
Alas, I’ve learned that while everyone wants to hire them to fix their hideously fucked systems, they really don’t enjoy being told that their systems are, in fact, hideously fucked. They’d much prefer you quietly put out fires while biting your tongue about how they aren’t actually fixing any root causes.
The problem is that people are being cut for not being perceived as team players, because they don't exactly fit the narrow perspective favored by the dominant social culture. That doesn't mean they aren't team players.
For example: someone not always looking into your eyes while talking can be perceived as "rude". Same for wearing noise-canceling headphones in a talk-heavy environment. Oh, you don't drink alcohol during the "optional" Friday-afternoon company mixer? That's just weird. Want to have a day off for Eid rather than Christmas? Wellll, you did ask for it six months in advance and we did approve it already, buuuuut Dave planned a last-minute meeting which conflicts with the mandatory team meeting, so we moved the mandatory team meeting onto your day off... We'll just pay the hours you spent doing first-line support during Christmas in cash, okay?
heres an article that discusses how inflated diversity could possibly be a cause of social tension. the article's abstract concludes with a shrug ('too many factors!') but it does provide links to research papers arguing both for and against this case.
on the surface it seems pretty clear to me. behaviour is encoded in genetics. if one were surrounded by the same group for a few thousand years, they would share a common base of encodings, therefore social behaviours could be assumed to a higher degree. reference behavioural encodings drastically diverge across cultures (as embodied by religious value sets, or at a different meta level, the idea of low trust vs. high trust societies). based on this drastic divergence, predictions made about one's neighbour scale downwards in accuracy relative to increased cultural diversity.
so i see that jacking up societal entropy leads to lowered societal cohesion. but thats just my stance and id love to hear yours.
I couldn't disagree more. "predictions made about one's neighbour scale downwards in accuracy relative to increased cultural diversity"? I feel like this is just a fancy way of saying that you're uncomfortable with people being different from you. The social tension you're describing is in your own head. Even the article you're citing doesn't even agree with what you're saying.
your post is an ad hominem without substance to back the personal accusations within. i said there were arguments both in favor of and against diversity. the article i posted showed arguments both in favor of and against diversity. obviously some contradictions will present when looking at both sides.
diverse, millenia old, genetically encoded behavioural structures exist in our shared reality. id love to discuss this idea and the exact types of behaviours that can be encoded, down to the generational timespans required for encoding. that way we can talk about my idea in objective good faith.
'its all in your head' isnt objective good faith. applying the golden rule, you clearly accept bad faith ... man you couldnt tolerate a dissenting idea even momentarily before bringing out social ostracization and logical fallacies! sounds pretty similar to the behaviour of a racist, were you projecting?
that was said facetiously. im not trying to accuse you of anything, rather to show how it feels to be accused. to conclude i think its pretty easy to predict what my neighbours are eating for dinner at home and pretty hard in the city so youre gonna have to try a bit harder to convince me that the evidence of my eyes and ears is wrong.
Human populations don't share enough genes, even when they do share culture, for this argument to make sense; people identifying as X culture but with Y genetics don't magically act like Y. Saying "genetically encoded behavioral structures" is usually just code for "black people are dumber than white people", so you should understand why people are assuming bad faith.
thank you for clarifying why bad faith was assumed, that makes sense ... im pretty sure different levels of intelligence do present across racial/cultural borders, but assigning that to any one factor (ie. black=dumb) is unscientific
If a qualification for the role is "appreciation for certain less represented cultures/ideas/..." then sure. Otherwise, for a backend c++ engineer the benefits are significantly less obvious, to the point it's really hard to make a case for why DEI concerns should trump traditional evaluation metrics for skill.
The goal should be to hire the best team for the use case, regardless of gender/race/culture/background.
Appreciation isn't always enough, lived experience provides a lot of value as well.
See all the Falsehoods Programmers Believe About Names/Addresses/Birthdays/Phone Numbers/Time Zones/etc, for example. Do you want a backend engineer who designs a 64-character ASCII text field for legal name and has everyone nod in agreement, or would you rather have one who knows that it isn't going to work for their cousin "Pablo Diego José Francisco de Paula Juan Nepomuceno María de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso"?
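To make that concrete, here's a minimal, hypothetical sketch of the two instincts; the 64- and 1024-character limits, the function names, and the single opaque name field are illustrative assumptions, not anyone's actual schema:

```python
# Hypothetical sketch: the "64-character ASCII legal name" assumption vs.
# treating the name as opaque Unicode text. Limits and names are made up.

def naive_accepts(name: str) -> bool:
    # Bakes in the falsehoods: ASCII only, at most 64 characters.
    return name.isascii() and len(name) <= 64

def permissive_accepts(name: str) -> bool:
    # Treat the name as opaque text; only reject empty or absurdly long
    # input (the 1024 cap here is an arbitrary illustrative choice).
    return 0 < len(name.strip()) <= 1024

picasso = ("Pablo Diego José Francisco de Paula Juan Nepomuceno María "
           "de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso")

print(naive_accepts(picasso))       # False: non-ASCII characters, > 64 chars
print(permissive_accepts(picasso))  # True
```

The real-world fix is messier (normalization, collation, display vs. legal names), but the gap between those two instincts is exactly what the Falsehoods lists are about.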
> it's really hard to make a case for why DEI concerns should trump traditional evaluation metrics for skill
It doesn't. The goal of DEI has always been to attract a diversity of perspectives, all else being equal. Nobody ever proposed choosing a woefully unqualified diverse candidate over an obviously-qualified Generic White Guy. The only people who would oppose that would be the unqualified Generic White Guy who just happens to be the nephew of the CEO's golf buddy.
I don't know why someone with a cousin named Pablo Diego José Francisco de Paula Juan Nepomuceno María de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso is that much of a better hire than someone named Jón Bergþóruson, 王小明, Sukarno (with no surname), גִּדְעוֹן בֶּן־גּוּרְיוֹן , or Karl-Theodor Maria Nikolaus Johann Jacob Philipp Wilhelm Franz Joseph Sylvester Freiherr von und zu Guttenberg. None of whom would classically qualify as diversity hires.
Hiring someone on the off chance that their ethnicity gives them some unique critical unknown unknown that will pop up half a decade down the line resides in the same mental space as a programmer writing `if (5 == i)` in case a future programmer accidentally deletes an =. It's just speculative defensiveness whose efficacy is simply not well established by actual research. And, in my view, it just works to confound actual signals that, evidently, GitLab and other employers feel get unfairly overshadowed when emphasizing explicitly pro-diversity hiring policies.
McKinsey has studied this extensively and has repeatedly found that diversity is financially beneficial to companies. They've published at least four reports on the subject.
It's obvious why this is the case if you sit down and think about it. Echo chambers of like-minded individuals can't understand customers as well as a workforce of people who represent the diversity of those customers.
This isn't just diversity of race or gender, it's also diversity of thought and background.
Also critical and under-emphasized: the E and I in DEI, equity and inclusion. Power distance and lack of inclusion can railroad companies into giving the people with the most power the most influence on decisions, rather than giving the best ideas a chance to breathe.
In business a classic example might be "men designing women's clothing." How are you going to understand your customers if none of your employees and leadership resemble those customers? Perhaps you can figure it out and make some decent products but your competitor who has more diversity in their workforce is likely to outperform you, which is exactly what McKinsey's studies have demonstrated.
I will also point out that the only reason anyone started questioning this obviously true business concept and changing opinions into being against DEI is because the Republican Party's strategists figured out that they could appropriate and leverage the term "DEI" and attach it to the latent reactionary racism that much of the US still holds dear.
You can get away with saying "I don't like DEI" in public but if you say "I don't like black people" or "I don't think women should get hired for important roles" [1] that is obviously not acceptable, even though a large percentage of Americans feel that way. Right wing media twisted a largely innocent term into a useful dogwhistle.
Those McKinsey/HBR studies are trash. They privilege the hypothesis, overlook the obvious ecological fallacy at play, and add in a bit of sampling bias for good measure. The fact that East Asian economies are all booming and exporting globally with ~0 diversity and unique cultures ought to refute this notion. I'm sure there is some No True Scotsman line you can play here about what the true meaning of DEI really is, and I would agree that the stated goals of DEI are all laudable. But in practice these initiatives often amounted to unprincipled discrimination and venal power grabs, which is why they are so widely despised.
The teams in the Manhattan project, the Apollo project, the inventors of the transistor, the guys who designed the Hoover dam, who wrote Doom, etc. etc. etc. etc. were not very diverse.
You might not like it, but this is what peak performance looks like.
I think the industry's greatest strength is actually outsourcing the bulk of the work to culturally homogeneous, cheaper-labor countries in Eastern Europe and South [East] Asia.
> There seems to be a massive push against DEI over the last few years in the tech industry globally, despite it being one of the industry's greatest strength.
Okay, I'll bite. Why is it a strength, and why is it the greatest strength?
All people are equal, so it shouldn't matter if you have an all Asian team, an all black team, or any mix of all races.
Groups formed of people with similar life experiences have a greater tendency to fall into group-think that misses out on both giant errors and giant opportunities compared to more varied groups.
And all people aren't the same; you want a mix of minds and skills for most types of work. I'd totally hire someone that couldn't really do that much directly but was fun to be around, connected introverts that have some (potential) synergies in their ideas, and generally made the group more productive overall.
Especially in business, the actual (not the managerial) judgment is the collective judgment on the whole groups output and actions by the market. Forging a high performing group out of different people is not the same as maximizing the median metric on some individual test of skill. Like quality, it's a bit undefinable, tho unmistakable when you experience it.
Big Tech CEOs having a front-row seat at Trump's 47 inauguration should give you a decent hint: they bribed the right people, so now they get to enjoy the kickbacks. There's no risk of being regulated to death right now, so there's no need to pretend having the same values the Democrats pretend to have.
Corporate DEI was never real. There's no "push against" it, simply because there was never a genuine push for it. Large companies don't have moral values - if they did their CEOs wouldn't be billionaires.
industry's greatest strength? where did that idea come from? hiring a bunch of didnt earn its based on race or sex? would you want your brain surgeon to be dei or do you want someone who is really good at the job?
Most DEI programs at big companies ended up setting goals based on things like race and sex. Zealots in HR departments then started implementing programs to change hiring and promotion and compensation to implement progressive identity politics at work, under the DEI label. These things happened in secret, because the companies didn’t like to highlight how being the wrong race or sex means your career is worse off.
That’s totally illegal and discriminatory but companies were not facing consequences for it under the Biden administration. The constant injection of DEI politics all over society - at work, in movies, in ads, etc - led to a backlash and personally I think it is one of the things that led to someone like Trump being re-elected. And this administration is very against DEI ideology. That’s one reason corporations quickly abandoned it - they didn’t want to face legal scrutiny now.
Another is that DEI culture produced no positive results, as expected. Companies already had incentives to hire the best employees they can. If you change that with other incentives thrown in, it’ll make things worse. And ten years after DEI began to appear everywhere, it was obvious it produced no benefit at best, and led to worse teams at worst.
Another reason is simply that a lot of the activists pushing this type of ideology grew out of the activist age group. And I think many of them likely don’t hold those beliefs as strongly anymore. But either way, younger people are different. Especially young males who are more conservative.
All of that and other things have led to DEI being removed or at least de-emphasized.
I keep seeing this term “earned” ITT; what does this mean to you? Did you earn something which you were denied when a less-experienced other person got a job? We all have two brain cells and understand there’s a tradeoff being made here, and it sucks being on the other side of that, but i struggle to see what privilege you believe you have or should have.
Yes, MONEY. Companies and their management couldn't care less about DEI; they care about pleasing whoever is in power in order to get benefits and make as much money as they can. You could literally have Hitler in power now and you would see what companies would do for their survival.
Whenever someone at work tells you to take more ownership, the correct response is: "Sure, I'll take more RSUs". Of course, that's never what they mean. Ownership for me, responsibility for thee.
Every company I've worked at hammers the "ownership" idea and I hate it so much. It's how they drive a culture where employees are expected to invest themselves into "owning" a problem space that can be taken from them at any moment. It's how they trick you into doing extra work that's not in your job description.
Unless you're ACTUALLY an owner, don't be fooled by an "ownership" value.
It's the norm at Big Tech these days. Directors and VPs take all the glory if it goes well while ICs, team leads, and people managers get all of the blame if it doesn't. When the charlatans get exposed, they bounce on to the next company with their charlatan friends. Rinse and repeat while swapping RSUs for index funds, retire with >$10m before 50. If we stopped allowing this to work in our industry, it wouldn't be such a common thing. Unfortunately, with how everything is these days, these people are getting hired on vibes and bravado.
Ownership means just that, owning the company. The people pushing to place additional burden on workers are the actual owners (the investors and C level execs). Quite the hubris to create a fake class of "ownership" that only extends to taking responsibility and being held accountable but carries none of the benefits of actual ownership.
Conversely, you have "full ownership" and have the ability to decide the direction, as long as it's the same direction as your higher-ups have decided.
All the talk about higher-ups taking the big paychecks for "carrying so much responsibility" is in most cases just complete horseradish. When something goes wrong or doesn't run well, suddenly none of the higher-ups are taking the responsibility. Hmmmm it's strange, innit?
"Speed with quality" combined with that says a lot. Sounds to me like it will be the base expectation that their remaining developers slop out features in record time. Any failures will be theirs to "own" personally.
"And that ownership will of course automatically mean that they will work extra hard to ensure quality! Man, what a great idea! Yo, why we didn't think of that before?!"
One must really wonder, if they ever try to hear themselves talking or read their own prose. Maybe they do, but simply don't care at all?
Heh, tell me about it. This "ownership" thing is some Grade A bullshit. I see it at my workplace: all the autonomy on deciding any part of the technical solution is taken away, but on the other hand I have to take ownership of all the consequences of their half-assed decisions.
I think the same group of management consultants does a round of the industry, and in short order every company is using the same duplicitous language of ownership, design thinking, customer-first mindset, cloud first, cloud native, AI native, enterprise 2.0... and on and on it goes.
I read this and often think, yes, yes we know, but then I hear juniors at work taking these ideas at face value without considering things like stock splitting and preferred shares.
The owner is the one to whom the added value is assigned, at least according to Das Kapital. So the check is easy: do you see the added value flowing into your account or not?
I thought that the GitHub degradation would be an opportunity for them to be an alternative more focused on stability and a customer-centric approach.
But it's just more slop
GitHub is already the main platform for random open-source projects, and that's unlikely to change any time soon. GitLab's selling point is essentially "Github, but not by Github". They would do Just Fine offering a highly-restricted free account for the handful of hobbyists who care enough about leaving GH but don't care enough to go to Forgejo & friends and for the people doing evaluations, offering free credits to the few high-profile FLOSS projects who accidentally end up on GL-the-SaaS instead of self-hosted GL, and for the rest just focusing on paid corporate customers.
Basically "screw any part about employees working together do what I say fast". What a shame. I love the AI bros who think utopia is coming, 4 day work weeks, etc. more like "get screwed, work more, for less, in worse environments".
> I love the AI bros who think utopia is coming, 4 day work weeks, etc. more like "get screwed, work more, for less, in worse environments".
Where do you find those, seriously? That might've been the case a couple of years ago, when they gaslighted people and played on their feelings, but now the gloves are off. AI bros are literally posting about lack of sleep, dopamine hits, vibe coding on the toilet/on a walk/while watching TV, FOMO is through the roof everywhere, prophesying the doom of SE, etc.
The code and product will turn to shit, and the company won't be able to extract itself from the mud.
Employees tasked with doing 10x more work with less help don't even have to feel bad about it happening. It'll also create employment opportunity in disrupting their old employer.
These companies are willingly signing up to become IBM.
AI code review with code ownership is fine. If people have to build working software, they'll do that, or you hold them accountable. Modern software dev at most organizations has far more code review than is needed outside of soc2 purposes.
Of course, once you have a big incident, then the value of more human review becomes obvious.
It's an important touch point for other code owners. I guess if no one is looking at the code anymore, why even do an AI code review? It's kind of theatrical.
I seriously don't know how people are working like this now. I'm on my ass looking for work and in the last month it feels like everyone has completely lost their minds.
Non American here, but as I understood it DEI is an older and very broad framework, which includes handicap accessibility and hiring military veterans. There are probably still plenty of companies that support that.
I don't understand why companies are abandoning DEI so quickly and so decisively. What happens if/when a Democrat president is elected that mandates DEI and ESG all over again, are they going to add them back into their core values as swiftly as they abandoned them?
At least companies like Coinbase made principled stances against forced DEI and employee activism earlier than everyone else. Doing it now seems weird because if it does become mandated again, they're going to look so phony.
It always was unprincipled: regardless of whether someone's a fan of DEI or not, these companies are short-sighted, profit-driven, and at best reactive to trends. The only reason any person thought otherwise is that they were either desperately looking for a victory or desperately looking for an enemy to be angry about.
>What happens if/when a Democrat president is elected that mandates DEI and ESG all over again
Mandates? There is this weird revisionist history that DEI was a Biden era invention that all these companies were forced to roll out in January 2021. These programs were simply the latest evolution of prolonged and steady cultural shifts. I remember attending events trying to promote diversity in the computer science department when I was in college 20+ years ago. Killing DEI isn't wiping out four years of progress, it's attempting to wipe out decades.
The obvious decline started around 2010; coincidentally also the era of the rise of SJW-ism and nontechnical derailing drama. Once the diversity quotas started appearing, the inevitable results were obvious.
I never used the word "Biden" once in my post. You should correct your biases.
Whether you are left or right, the objective truth is that a Democrat added DEI mandates and a Republican removed them. I didn't say anything about whether that is right or wrong, but the fact that companies seemingly embraced DEI and then, once a Republican removed it, abandoned it so quickly means they really didn't care about DEI at all and it was all phony. It just goes to show you that when they start praising themselves for being "moral" it's not because they actually care, it's because they are forced to, and they don't give a shit about anyone.
Why is this even a question? Of course they would, they're just companies, they go chase profits and cannot have real values, don't anthropomorphise your lawnmower, yada yada yada.
Trump’s entire administration is all, 100% DEI hires, he is a DEI King (DEI by its pure definition is someone getting a job while many other people are more qualified for said job)
I'm firmly not in Trump's anti-DEI camp but I have seen what can happen when you make it one of your core values. You can end up with a lot of people talking about it a lot, lots of meetings and initiatives rather than doing actual work. And usually those don't go anywhere because the people doing it don't have any power to actually change things. It's unlikely that a company like Gitlab really needs anything changing anyway.
It doesn't make sense for it to be 40% of their values, especially if they're losing money (or very close to it).
Places I've worked that actually seem to have inclusion as a core value are great places to work and seem to have high functioning teams. My impression mostly though is more that it was never really a value for management but they wasted a bunch of time talking about it. In general any mismatch between stated values and actual values has been awful to deal with and is a red flag for places to work.
> Places I've worked that actually seem to have inclusion as a core value
I am not sure if you had implied it, but that would align with my experience as well: places that tout diversity were the worst places to work (as someone who is seen as 'diverse'), while the best were the ones that treated everyone the same and expected everyone to pull their weight.
I absolutely despise people treating me differently because of who / what I am rather than doing good work. I will take mildly inappropriate good-nature jokes over head pats every day of the week.
I love wildly-beyond-mild inappropriate jokes, as they are a litmus test for a thinking person. The people that take things way too sensitively are a net drag and a buzzkill for doing the grinding required. It goes both ways too. I love it when people are aggressive with me. So, by freedom of association, cliques form, and I have no problem with nepotism because the ultimate currency in life is trust.
I lost shame a long time ago. I am not even sure what reality is. Like, am I a computation within this meat brain? Or is the brain a two-way transceiver to the real dimension, and this body is just an avatar, a mech that I'm piloting for a few years? It seems like a cosmic joke. And then think about the sheer absurdity of sex... yeesh.
That's the thing - you can have it as a lived value, or you can have HR run programs. Very few places have/had both. Given the choice, I'd pick door #1.
(Saying this as a strong advocate for diversity and inclusion, lest there's confusion)
You don't ask HR to go out and push some value if you already have it. You only ask when you want to change or want to pretend to change.
That said, some management people say it's important for a large company to write down the values that they actually practice. I can see several reasons why it's good, but I haven't ever seen anybody go and do it, so IDK.
HR run programs are costly and applied to either mandated trainings or things the org has issues with.
DEI isn't mandatory, so an org heavily invested in DEI training probably had serious issues in the first place (whether they end up on the other side at the end of the trainings being another question)
That's different from putting it as a core value though. Most companies have some kind of "make more money with less resources" stated value, and I don't think we see it as an issue ?
There are two ways to do diversity - the first is to put a brutal skill filter and take everyone that passes it no matter their skin color, body weight, religion or politics. The other is to reduce people to their demographics and push for (in)visible quotas. One of them leads to crappy results.
I just want to be clear that these are not the only two ways to do diversity. Even if you're just focused on hiring (which is a myopic way to view diversity, even at the most simplistic level you need to think about retention) hiring is complicated and I've seen people try a variety of things to get a wider pool of qualified candidates in the pipeline (offering remote work, better paternity/maternity leaves, outreach with local women in engineering groups, etc). This isn't at all my area of expertise and I've seen a lot of things outside of the dichotomy you described.
Also, idk why people view quotas as all of "diversity". I've literally never worked at a place that considered this but I see people mention them all the time on the internet.
The meritocratic delusion is that you would be in the "have" pile, rather than sitting in the back of the bus with the rest of the "have nots".
Of course, it's statistically most likely that any individual would belong to the much larger latter group, but stats like that only apply to other people, right?
Worse, it's a zero-step thinker's solution. Step zero is a merit-based system; step one is for the people with motels on Boardwalk and Park Place to ensure they can never lose again by rigging the system to ignore merit in favor of capital.
I'm not a random variable, I'm a specific human. Predicting future outcomes needs to take into account my personal traits. Otherwise you get into absurdities like "statistically speaking, when you join a family reunion, 15% of the people you see there will be Indians, and another 15% Chinese".
> You can end up with a lot of people talking about it a lot, lots of meetings and initiatives rather than doing actual work. And usually those don't go anywhere because the people doing it don't have any power to actually change things.
Someone I'm close to is going through this right now. They work at a place that officially highly values "inclusion", and their employer's website is dripping with virtue-signaling language related to it. But that someone is disabled, and in fact there's nobody at the organization who owns accessibility issues. Disability accommodations are haphazard, and often not timely. Why? Because no one owns them. They just get punted to an internal employee affinity group of disabled people who don't have a real chain of command, a real budget, or even a real prerogative to do accessibility work, let alone meaningful power— many of its members are routinely chastised by their bosses whenever they dedicate any time to solving access problems within the company. "That's not what we pay you for", "that's not your job", "I need you on this other thing", etc.
Meanwhile the organization receives public accolades from meaningless business-press organizations as a "great place to work" or even a "great place to work for people with disabilities".
I think it's fine for companies to value diversity, and to value it publicly. A little virtue signaling is fine, as a treat; it may actually repel nasty people, encourage good behavior, or make employees feel more welcome sometimes. That stuff is good.
But there's also a real possibility that a company making diversity an explicit value results in lots of energy going into activities that let that company's executives pat themselves on the back about how good they are without actually doing much for inclusion. I wouldn't take any sizeable company's stated values too seriously, including that one.
On the one hand, yeah, you should respect people who are different from you. On the other hand, this is really so obvious that I doubt elevating it to a “core value” makes much of a difference. Are there marginal people who wouldn’t respect diversity unless it was a core value?
Then again I don’t even know what it means for something to be a core value. What is the practical upshot of “collaboration” being a core value of a company? Were people not collaborating before?
> Then again I don’t even know what it means for something to be a core value.
Yeah I think they're mostly useless. At least you definitely don't get core values by just declaring that they are your motto. For example Amazon is pretty widely agreed to have customer satisfaction as a core value. They didn't get it by saying "Our core values are customer satisfaction...".
I will push back on what you are saying here. I think this idea that DEI becomes "yet another annoying meeting" has been amplified by political media. This political media has successfully grown the seed of this idea in our heads that DEI is just useless nonsense, and it's associated with those "liberals who want to take your freedom and guns and tax money and jobs."
Essentially, what's happening here is that this right wing political media saw an opportunity to latch onto resentment of employees whose companies were just trying to change employee behavior for the better.
Companies are well aware that implementing DEI successfully will financially outperform other companies who don't. McKinsey has found this to be true repeatedly. But of course, people don't really want to hear these kinds of things and a lot of socially conservative people don't like being told that they need to learn how to interact with that queer looking person they'd rather just avoid. When Jim and Bob want to hire a new employee they just want to hire another Jim or another Bob and be left alone.
You know how your company puts meetings on your calendar where they preach about wellness and exercise and stuff like that? Just because they are annoying meetings doesn't mean they're wrong. You should focus on your wellness and exercise. Same deal with DEI: it's obviously beneficial to everyone, but America has a whole lot of people who really don't want it.
We are within the same lifetime as full blown segregation, redlining, of women being disallowed from opening bank accounts without spousal approval. There are people still alive from that era. Your great-great-grandparent may have been alive during legal racial slavery.
I think "inclusion" is fine as a value. "Diversity" is not, because it is an outcome and not an action one pursues. What matters is that all have equal opportunities to participate, and perfectly fair opportunities can create unequal outcomes through no fault of anyone's. Moreover, I think that fixating on the demographics of who joins the company is morally misguided. I want my teammates to be capable and enjoyable to work with, not to check someone's "we must have X number of minorities checkbox". Diversity initiatives always turn into the latter in my experience.
you say suck it fascist in response to DEI being removed, i say DEI would get canned by communists and fascists both, autofill the rest of the argument with some prose
Alas, it’s pretty obvious to everyone else that you tried to pick a fight by shoehorning your dubious, pre-formed argument into an inappropriate place. Better luck next time.
i was indeed trying to pick a fight, with gitlab, because i think its pathetic to pivot and abandon values for money, regardless as to those values, and regardless as to whether the abandonment is done by a human or a corporation. your comment was a convenient conversational entry point, as you made the scenario political, offering a chance for me to generalize to my point. thats the way i saw it anyway. did something in particular make you feel like a target?
So much of the kerfuffle about DEI has always been around the fact that people don't understand what DEI means.
Also, in the current environment, I don't see how anyone can look around and argue that merit-based hiring is a norm anywhere. Even at hotspots of anti-DEI, "merit" often means "friend of a friend" or similar.
I think the idea is that each letter in there is considered a merit, hence why it's always discussed under the "core values" section. That is to say, they're properties that they supposedly value, next to technical excellence, team fit, being a spitfire, whatever.
And that the discussed-to-death diversity hiring quotas are not its entirety, or even necessarily a part, of it.
Merit being, in actuality, not a threshold but a range probably also plays a role (along with what utter theater the typical job interview really is).
> I wouldn't want to be hired based on something so meaningless.
But that's kinda the point of it all, isn't it? That it's supposed to be empowering the disadvantaged / marginalized. If your background does not put you at a disadvantage, there's nothing to compensate for, then it would indeed be meaningless. But if there is, and you made it, then that is by definition extraordinary. So it is meaningful.
There's definitely a question about whether they'd be stealing your thunder by this, but I'll leave that to an actual aficionado of the topic. Not exactly the expert on all this.
For someone whinging about torturing language, you're the one asserting that compensating for racism is racism [0], while also proudly exclaiming that since your background was reasonably alright, other people from your ethnic class shouldn't be helped, so as to not hurt your precious little ego.
Tough crowd.
[0] and funnily enough, I agree! I just also think that if you believe there's a way out of this that isn't racist, you're a moron.
> Why don't you assume my fucking gender before assuming the contents of my mind?
What for? You seem to be enough of a victim already.
Or sorry, do you have a preferred slur?
---
It's incredible how far the culture war has rotted the North American mind. I literally just joined in to offer the perspective as I understand it to the guy, which I don't even necessarily find right (as I explicitly highlighted), but I do appreciate facets of it.
But oh no, John Convenient-Idiot-Illiberal saw the right trigger words and had to spiral into a tirade with their sob story. You sure showed us dude. Hope that middle class money affords you a therapist. You sure could fucking use one.
> The planning is happening openly, including a voluntary separation window. That creates real uncertainty for our team over the next few weeks, but we believe the outcome will be better for it.
There's no good way to execute layoffs; my preference would be to do it like ripping off a band-aid. What use is doing it in the open unless they plan on having gladiatorial matches to keep your job? Otherwise it's just a painful game of Duck Duck Goose.
The problem is that such voluntary separation programs tend to disproportionately attract high performers. You're losing the "10x engineer" who has stuck around because they like being here - despite getting attractive offers from the competition.
The mediocre people who dread looking for a new job during a hidden recession aren't going to leave. They can't afford the risk of not being able to find a new place of employment before the severance pay runs out.
These high performers will leave anyway if they see their environment drastically changing or feel the tide turning, except they'll do so months after you ripped the band-aid.
It's not that different from making it part of the process in the first place.
If they were thinking far ahead, they wouldn't need to do any firing at all - they would've gradually adjusted their hiring policy in time to avoid it.
If you don't like the new direction you can leave now and get the known severance package now. All in all, I think it is right to offer people a voluntary severance package when you pull the rug out from under them as far as where they thought they were working.
Unless they're going to offer an insane buyout, like 1+ years of pay + benefits + some accelerated vesting, nobody who doesn't already have something lined up is going to take this offer. It's much better to stay with one foot out the door and just keep cashing that paycheck and collecting your monthly vest. Especially when you know layoffs are coming, nobody expects you to do anything until they actually pull the trigger, then there's a month or two afterwards where you can slack off because morale is in the toilet, people are still trying to figure out who's left, how the company is organized, which priorities are dead, stuff like that. Ask me how I know.
It's defensible to have a voluntary separation program with clear terms. Microsoft, for example, announced on April 23 that a voluntary separation program would launch on May 7. On that day they announced the precise terms of separation, with affected employees given until June 8 to participate. Perfectly reasonable.
What Gitlab is announcing here is that employees need to apply for a separation, at a yet-to-be-determined time under still-unknown terms, without a guarantee of acceptance, in the next 7 calendar days. Much different and just so much worse.
the order here is backwards. publish the package first and let people apply without committing. right now GitLab gets the signal before employees even get the terms.
The market for people paying for Gitlab (Which feels more like GitHub but for Enterprise these days) is probably not even slightly overlapping with the market for people running Forgejo.
> removing up to three layers of management in some functions so leaders are closer to the work.
I wish them the best of luck with that plan. Middle management is where the institutional knowledge sits on how to actually get shit done despite challenges & broken processes/systems.
Middle management exists to turn conflicting marching orders from the directors into less conflicting marching orders for the line workers, and to keep any negative feedback on how fucking stupid the directors are from ever reaching them.
They don't cause the broken processes. They are the symptom of a broken executive process. A fish rots from the head down, and the people at the top get exactly the kind of company that they ask for.
Middle management is also where most of your negative feedback is lost. I think moving fast in general needs tighter feedback loops, and this is simply not possible in large organizations.
Negative feedback is not lost, it's filtered. No one at the top is equipped to deal with the actual feedback from ICs, unless your org is 10 people in a bike shed.
Do you really think that upper management wants feedback that the stupid fucking ideas they have are boneheaded? The point of middle management is to absorb it so it doesn't reach the children at the top and make them feel bad
> Middle management is where the institutional knowledge sits on how to actually get shit done despite challenges & broken processes/systems.
Really? In my experience it's the rank-and-file employees who have this knowledge of how to get on with it without ceremony and politics. And the broken processes and politics are created BY the middle managers.
Engineering has always been about more than writing code.
That's true, but it's interesting how FizzBuzz was said to be the bête noire of the average dimwitted software developer, and how much cutting-edge engineering organizations used to emphasize code in their recruitment processes.
If writing code is being replaced by "engineering judgement" it's going to need a much smaller cohort of developers. Too many opinions spoil the broth, after all.
Am I alone in being extremely sensitive to LLM-style writing, observing it in this article, and feeling a little upset about that? The letter to employees ticks several of the boxes, and if I’m not wrong that’s kinda shitty. Or perfectly aligned with the spirit of the announcement (or both).
> Great engineers are problem solvers and builders who care about system design, distributed systems, reasoning through failures, safely integrating new capability into critical systems, and making decisions under ambiguity.
Yes, and the people who are all-in on agentic AI are, in practically every example I’ve seen, not that. They’re the jackasses giving Claude root access to their prod DB and then writing a blog post about how much they’ve learned from their mistake.
>> "We've been working through some significant changes inside GitLab over the past few days"
I can't seem to get past this - all these decisions (and a work-force reduction :() are the result of a few days of pondering? I've had stomach aches that have lasted longer ..
Rather striking statements that have me somewhat concerned:
> Agents open merge requests in parallel, trigger pipelines around the clock, and push commits at a rate no human team ever did. Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. We're doing a generational rebuild of the underlying infrastructure to handle agent-rate work as the default. Git itself is being reengineered for machine scale. The monolith is giving way to modern, API-first, composable services. And agent-specific APIs are being built so agents can act as first-class users of the platform, not as bolted-on consumers of human-shaped interfaces
Is there any broader consensus or information on this? Git doesn't scale? is being rebuilt for agents?! Monoliths are out and services are back? Humans are second class citizens now (human shaped interfaces - bad!!)?
What the hell are they planning to do in there at Gitlab?!
Won't this be bad for agents, since now you need to provide this new API in their prompt, as opposed to just regular git, which the model has seen plenty of in its training data?
History suggests so... people do keep trying to make agent-native tools and workflows, but time and time again it turns out to be better just to expose raw inputs and tools to them and let them work with those. See skills beating MCP in most cases where their purposes overlap, for example - it's more effective just to let an agent write git commands than to give it a "git tool" with a structured interface. People don't seem to grok how heavily training on trillions of tokens of human language and existing software code biases the models towards working well with raw input.
Steve Yegge was saying this in December: he had multiple people at companies with heavy AI usage telling him that git can't handle the 1000+ commits per hour these companies are producing now - the agent can't push because, between the push command starting and finishing, another push has already landed.
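For what it's worth, the failure mode being described is the ordinary non-fast-forward rejection, just hit constantly because agents push so often. A rough sketch of the usual workaround, a fetch/rebase/retry loop, is below; the branch name, retry count, and backoff are assumptions for illustration, not how GitLab or any of these companies actually handle it:

```python
# Rough sketch of the push race described above: between starting a push and
# it landing, another agent's push moves the remote ref, so ours is rejected
# as non-fast-forward. Branch, attempt count, and backoff are made up.
import subprocess
import time

def git(*args: str) -> subprocess.CompletedProcess:
    # Run a git command in the current repo and capture its output.
    return subprocess.run(["git", *args], capture_output=True, text=True)

def push_with_retry(branch: str = "main", attempts: int = 5) -> bool:
    for i in range(attempts):
        if git("push", "origin", branch).returncode == 0:
            return True
        # Someone else pushed first: replay our commits on top and retry.
        git("fetch", "origin")
        if git("rebase", f"origin/{branch}").returncode != 0:
            git("rebase", "--abort")   # real conflict, needs a human (or agent)
            return False
        time.sleep(0.5 * (i + 1))      # back off a little under contention
    return False
```

Past some commit rate, a loop like this spends most of its time fetching and rebasing rather than pushing, which is presumably the load problem the announcement is gesturing at.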
GitLab's old values are for now still listed in their handbook:
> GitLab’s six core values are Collaboration, Results for Customers, Efficiency, Diversity, Inclusion & Belonging, Iteration, and Transparency, and together they spell the CREDIT we give each other by assuming good intent. We react to them with values emoji and they are made actionable below.
Since those terms don't speak for themselves individually, it's worth seeing what they're supposed to mean to get a sense of what GitLab is forsaking now. Each section is actually pretty lengthy, so you should go look and skim for yourself.
GitHub is publicly destroying itself in a desperate attempt to realize Microsoft's AI dreams, and as its main competitor your response is... to do the same?
Rather than going for a "Humans first, robot assistants welcome" approach which promises to deliver things like stability, reliability, trustworthiness, and human connections, they decide to go all-out on firing the humans and letting bots handle things like code review while explicitly shifting the existing human-first company values towards making the remaining humans responsible for the bot's mistakes.
They could've chosen to market themselves as the sane safe haven for the GitHub exodus. Instead they choose to go down in history like Google abolishing "Don't be evil". But hey, I bet chanting "AI! AI! AI!" (albeit quite late to the game) will deliver a very solid lukewarm increase in shareholder value!
I'm no big-city product strategist, but this is what kills me about so many of these "we're pivoting to AI" announcements. Everyone else is doing the same thing (or already had a long time ago--like you said, GitLab is late to the game here), so squeezing yourself into an AI-shaped mold does exactly nothing to differentiate yourself from the competition. And if/when the AI hype machine sputters to a halt, the few companies that didn't do this will suddenly find themselves at an advantage, because they'll have real, actual differentiators to brag about.
Like, I know there are actual reasons and incentives here for the ever-present AI pivot. But I think they're stupid and short-sighted incentives.
It just struck me. I always thought I had writing software to fall back on, in case my main gig doesn't work out. I don't think it will still be there when I'm ready to return.
If "writing software" means you took a bootcamp and can churn out some HTML/CSS/JS using a popular framework, then yeah, it won't be too valuable in future. However, if you mean you are able to deliver software to solve real problems in a variety of languages and platforms then I think you'll be ok.
I'd hate to be their customer right now. Is this the only "corporate-scale" forge besides Github?
There's a lot of cool things happening between Gitea/Forgejo, Tangled, and Radicle, but I doubt the latter two have any significant usage beyond OSS hobby projects. I'm not sure if the former two do, either.
Of course this is happening. GitLab's values were only there for marketing - just take a look at their massive turnover of employees, who get burned out so fast that the About page can't be updated fast enough.
You're wrong about the values. Since the start, GitLab's values were real and lived, unlike at every other company I've been at. Only after Staples took over as CEO and the C-suite changed did they start eroding rapidly, starting with Transparency.
Surprised at the negativity here - did most of you read the source?
They seem to be mostly reducing headcount of managers and claim (supposedly) to be prioritising engineering.
On top of that their redesign sounds interesting - they want to adapt the platform itself (and concept) to deal specifically with how AI "users" will code and submit changes (and the rate of and interaction of that model) vs humans. We'll see how this plays out but this doesn't sound like a bad idea to me at all (assuming humans of course still get priority).
I don't understand how people can use the phrase "right-size" without a crushing sense of embarrassment and shame. Did you swallow a business consultant from 1990? That and phrases like "go forward strategy" say either 1. I do not know how to communicate like a human or 2. I am afraid of speaking naturally because it impinges on my self image as a business leader or 3. I do not want to accurately describe what I'm doing because that might expose my fragile ego to the possibility that I'm doing something which hurts people.
"We're firing a bunch of people because we think we don't need them anymore due to AI and we'll make more money without them."
There are times when businesses must fire people to stay afloat and it's a business that objectively needs to exist. This isn't one of them, so don't waste everyone's time with your BS, please.
I was finding this really interesting, that maybe a human had written it and it really reflected a vision for how we build software in this new world. I want to know the way, I'm curious!
Until I got to "One platform, three modes." and my brain just pattern matched "AI slop" and the entire post dissolved into meaningless for me.
I don't know if I can stop my mind reaching this conclusion. I'm sure someone at GitLab made some effort to carefully edit the post... But that it wasn't entirely rooted in a human who'd worked out how this stuff goes, but clearly had lots of AI writing it out... Just made my instinct go "this isn't worth paying attention to after all".
Not making sense to me at all. The AI era should be a great opportunity for GitHub to show off their reliability and developer-first approach, but they decided to go all in on AI. As a developer, what I need is a well-working repository, not an agent that writes, reviews, and even publishes the code for me.
Can't imagine that slop is going to save them. GitLab is a totally directionless, shoddily implemented product, apart from self-hosting, which I think is commendable. I don't hate it, in that it is at least predictable, but the lack of basically any interesting view on how software should be developed or even look is such a waste.
How these companies act like these changes are for the greater good and how "we are different" is just gross.
> The planning is happening openly, including a voluntary separation window. That creates real uncertainty for our team over the next few weeks, but we believe the outcome will be better for it.
Not even the balls to do the deed yourself. This reads like Shrek's "Some of you may die,... but that is a sacrifice I am willing to make."
Imagine how refreshing if the press release simply said:
"We over-hired, we're ram-packed full of managers pinging each other on Slack all day and need to cut costs to sustain our operation. We think GitHub's shit and we want to be a nimble org with a fighting chance at eating their lunch. We're also gonna provide 1000 free runner hours/mo to open source projects that move from GitHub to gitlab, and we're gonna make project namespaces on gitlab.com a first class thing like GitHub did"
Oh the irony. It was just last week I was lamenting Gitlab's lack of AI support. Best I could tell, Gitlab's solution to AI hitting servers is "block it with Anubis"
> We're rewiring internal processes with AI agents, automating the reviews, approvals, and handoffs to speed us up, and plan to right-size roles across the company to follow suit.
Ah, yes, finally GitLab will have the same uptime levels as GitHub.
I recently switched everything from bitbucket to GitHub mostly just because GitHub is more integrated with the AI tools I use. I feel like they’re probably still pretty big in Europe, but they’re losing in some markets more than before.
There’s still a big difference between “vibe coding” which is what it was called 2 years ago when people tried to “one shot” whole products and AI assisted / agentic development in more structured ways like it’s happening right now in many companies. In capable hands it’s a great tool.
I tried a self-hosted GitLab on a 64 core beast of a machine with Optane drives. Completely empty of content, there were multi-second delays everywhere. Horrified at what must lurk beneath the façade, I switched to Forgejo, Crow CI and YouTrack and couldn’t be happier.
While there are a lot of little knobs that can tweak performance, it shouldn't be slow out of the box, yet it is the number one complaint about GitLab.
This title is editorialized - the original title is "GitLab Act 2" and both the workforce reduction and CREDIT values pieces are hidden in among the details.
> Once approved, our new bonus program will give every team member who isn’t on an incentive compensation plan or bonus plan today, the opportunity to earn a cash bonus based on their individual performance, targeting 10% of salary, awarded at their manager’s discretion.
LOL. So basically buckle up and do what you're told and grind. And hope your manager likes you or you'll get nothing.
We're going to turn our infrastructure in to code slop in the hope that we can scale to host all of your code slop in the same way that GitHub's code slop has failed to host code slop.
“Agents open merge requests in parallel, trigger pipelines around the clock, and push commits at a rate no human team ever did. Git itself wasn't designed for that load .... Git itself is being reengineered for machine scale. The monolith is giving way to modern, API-first, composable services.”
Hmm, does the CEO of — checks notes — “GitLab” know what Git is?
Bleh. I was considering moving to GitLab from GitHub for future IaC work given the latter’s issues of late, but this sends me back to the drawing board.
Funny enough it’s not the agentic pivot or AI injection that’s sending me running, though, but the dropping of DEI from their values. Queer folk are still out here fighting tooth and nail for basic opportunities to put roofs over our heads, PoC still out here getting harassed and harmed by cops, disabled folk still struggling for basic accommodations so they can contribute rather than languish. DEI isn’t something you pick up when the popular movement swings towards it as a method of convenience, it’s a value you have to live by especially when times are tough and countries harass you for it.
Having used AI to write code, and seen the BS it outputs half the time, any org speedrunning toward a parallel, autonomous, unreviewed codebase is going to get hit with a massive rude awakening when their cluster-f of a codebase melts down.
If you put the typical knee jerk reaction aside, the article is a pretty good read on where things are headed. Particularly interesting is their gut feel around problems requiring deep technical knowledge multiplying and the talent that can solve them becoming the scarcest.
What we are witnessing so far has been just the tech world’s reaction. As typical companies catch on to the agentic era, we’re going to see more layoffs. A part of it may be due to “unlocked productivity” but more of it will be to make space in their ranks for hiring more AI native workforce. Which will also be scarce at the beginning.
I think we should get ready to see a very different kind of talent war, and at a scale and pace never seen before.
The future of forges is decentralized, and I'm getting all I need now out of Forgejo/Codeberg/Codefloe. I'll be handcoding software merrily away on platforms which don't suck and aren't beholden to techbros spewing buzzwords.
You can always tell when the title is incredibly vague or bereft of details (e.g. "An update about our product") that it's going to be some flavor of either lay-offs, shutting down, or other enshittification.
I was thinking of switching to Forgejo because GitLab, as great as it has been up to this point, is enormous. A small service with git, some web UI, and pipelines that run builds will be enough.
When everyone is leaping head-over-heels into AI / agents, you need SOME part of your stack that is NOT that - slow, tested changes you can (mostly) trust, not "break everything quickly - again" stuff.
Imagine if gcc / clang decided to let agents implement new features without a lot of checking...
It's truly amazing that GitLab has 2,500 employees to begin with when I haven't ever encountered a single company or project using their services, besides one or two obscure open-source projects once every few years.
It’s enterprise software that’s used in big corporates. I don’t know why you’d expect to see it in the wild as regularly as hobbyist projects on GitHub.
All these corporations are either showing their true colors because of the current admin, or they're scared to death of the current admin. Either way, it's fuck employees!
Just today we started a new cycle at work to move from GitHub to Forgejo. It's such a refreshing tool... So fast, supports everything we need (and more), and no AI slop. Very happy with our decision.
TLDR: Because of AI the future belongs to the engineers, so we took the noble decision to stop hoarding them on our payroll and make sure there are enough to go around for the other companies.
>Software has been the force multiplier behind nearly every business transformation of the last two decades. The constraint was the cost and time of producing and managing it. That constraint is collapsing. As the cost of producing software collapses, demand for it will expand. Last year, the developer platform market used to be measured in tens of dollars per user per month, this year it is hundreds/user/month and headed to thousands. Not only is the value of software for builders increasing, but we believe there will be more software and builders than ever, and we will serve an increasing volume of both.
We've seen these tech waves several times - C and COBOL instead of ASM, CAD/4GL, template generation, Visual Basic and the likes (good old Delphi), Java (which allowed a lot of mid-inept people to write compilable, non-immediately-crashing programs), the spread of Python, and now AI. Every time we get an expansion of the industry, and every time glorious promises which get delivered on modestly. The point here is that they get delivered on.
And with AI I suppose it will be similar, though much better than before. In those previous waves the human brain was the limit. This time we throw that limit away from the start - nobody will be able to comprehend the sheer amount of AI-generated code. Yes, that approach will hit some limit down the road too, of course...
I have no doubt that AI is making some programmers quite a bit more productive. But if it is even 10% as good as all the marketing claims, we should be seeing an explosion of new tech startups, and a huge increase in feature shipping rate and number of bugs closed. Why isn't this obviously happening? Where's the next Dotcom Boom or Cloud SaaS Explosion?
What I am seeing instead is million-line AI slop pet projects whose sole "user" is its developer, and large companies falling over each other to enshittify their products. If there's no genuine user value being delivered, who's going to pay for those thousand-dollar-per-month developer tools?
>Where's the next Dotcom Boom or Cloud SaaS Explosion?
I see it isn't your first rodeo :) So, in the dotcom era companies needed huge financing for hardware, and that money was the main limiter; in the Cloud SaaS era small teams with relatively small financing, mostly for salaries, were able to deliver large - Airbnb, Uber, WhatsApp, ... - and the employees, their brain abilities, and their ability to work together were the main limiter. Now with AI we don't have these limiters. I'd say the slopped-up Claude Code and OpenClaw are examples of the new wave which is just starting.
>large companies falling over each other to enshittify their products.
Oh, yes, with each wave the software is even more sh.tty than before, and this time I think we're really in for a shock to our imagination of how sh.tty it can get. All these datacenters here, and later in space, will need some slop to churn through :)
My bet is that we won't have software as a static set of bits existing for more than one execution. I think we'll have Just-In-Time software. An ephemeral one. It will be generated on the fly for a specific task and discarded after. That will keep those datacenters busy at least for some time.
Another storyline I expect, with some horror, is the merging of the coming boom of actual physical robots with the boom of AI-slopped software - that should be fascinating :)
I feel like "just in time" software is something we already had-- things like VBA and AppleScript showed there has always been an audience for scratch-your-own-itch tooling for work scenarios that aren't programming-centric.
It would be irresponsible to treat it as completely ephemeral though; clever tooling would make it easy when you remember "I already solved this issue 3 months ago, let me pull that back and reuse it."
What terrifies me is doing it with the current slopbox user experience. From a UI perspective, it's a clumsy system that discourages developing mastery in favour of guesswork and gacha. (When you said the wrong thing in a classic command line, it at least told you so rather than trying to stagger along with it.) And as an executing tool, it's simply sluggish-- once you've expressed what you want, Claude takes minutes to do what a regex does in milliseconds.
I wonder if the latter is fixable-- pre-configure the bot to generate answers as reusable code instead of slowly pumping the changes themselves.
> I think we'll have Just-In-Time software. An ephemeral one. It will be generated on the fly for specific task and discarded after.
For years I've been telling people that every office worker should be able to do at least some programming, just to avoid ever having them spend several days manually repeating the same handful of steps on a large set of data.
I can 100% see AI taking over this market. Teaching office workers to write half-decent prompts is probably easier than teaching office workers Python. But you don't need a $1000/month subscription to write barely-good-enough-to-run-once one-off scripts, and you can't build a business solely on ad-hoc scripts.
> the employees, their brain abilities and their ability to work together were the main limiter. Now with AI we don't have these limiters
Was it? Don't we?
There has never been a shortage of college kids willing to throw together MVPs. Sure, hacking together the bare minimum of business logic with auto-generated Rails code and a $20 Bootstrap template during a hackathon is being replaced by an afternoon talking an AI into generating a Tailwind-styled SPA in whatever Javascript framework is fashionable this week, but what does it really change? Writing MVP-level code was never the hard part.
The hard part is the engineering behind making it scalable, extendable, and durable. That's still staying the same: you're now just giving the prompt to an AI rather than a junior dev. If anything, having to deal with inept managers now sending full-blown AI slop proposals rather than blabbering a handful of buzzwords and leaving the professionals to fill in the rest is going to slow down our ability to work together.
Makes sense! I’ve worked with teams where the main bottleneck wasn’t technical complexity or even the company itself; it was a people problem.
Things like long discussions over formatting that should just be enforced by linters, pushing non-idiomatic patterns despite official docs and tooling recommending otherwise, or turning simple problems into meetings scheduled “for next week”, "in two weeks", "let's have a meeting and invite everyone" instead of just fixing the issue and opening a PR. Which sometimes takes 10 minutes!
At some point it starts to feel like responsiveness and initiative are treated as threats rather than strengths. Autonomy and ownership matter a lot more than people realize. Wonder what that'll look like!
How many people are at GitHub these days? I interviewed there just less than a decade ago and they REALLY did not like me. I kept yammering on about ensuring your KPIs are correct and making sure people felt psychologically safe. I think this was just after they were nabbed by MSFT and it felt like they were panicking, trying to figure out what would become of them now that they had been swallowed by a whale.
I've done some organizational consulting in the past, often trying to help companies understand why their employees don't trust management. I suspect the powers that be thought that post was decent, and I think the GitHub survivors will likely ignore most of it. And I don't know anything about what's going on there. But if you told me GitHub employees were made MORE nervous by that post than LESS, I would not be surprised.
So nothing really changes in terms of product development velocity, it’s just headcount reduction.
But that’s not what their own marketing strategy communicates.
Have any of the companies that went all in on AI gotten better at their job because they went all in on AI?
You have never interacted with Jira?
What hope do slop-maker users have, then?
On the other hand, LLMs seem perfect for triage and finding duplicates, so it's still surprising that they've let it get this bad.
(Source: I build tooling around Claude Code and have spent hours swimming in the GitHub issues based on downstream user feedback)
If investor fears are that AI makes GitLab's business less valuable, including this in their "GitLab Act 2" announcement makes a whole lot of sense:
> The agentic era multiplies demand for software. Software has been the force multiplier behind nearly every business transformation of the last two decades. The constraint was the cost and time of producing and managing it. That constraint is collapsing. As the cost of producing software collapses, demand for it will expand. Last year, the developer platform market used to be measured in tens of dollars per user per month, this year it is hundreds/user/month and headed to thousands. Not only is the value of software for builders increasing, but we believe there will be more software and builders than ever, and we will serve an increasing volume of both.
Wrote a bit more about this on my blog: https://simonwillison.net/2026/May/11/gitlab-act-2/
That's how I interpret the move, too.
>The agentic era affords GitLab the largest opportunity in our history as a company, and we're making the structural and strategic decisions to meet it
>Operationally, we grew into a shape that was right for the last era and isn't right for this one
To meet their largest opportunity ever, they believe they need fewer resources. I'm not sure I understand how that follows.
>We're rewiring internal processes with AI agents, automating the reviews, approvals, and handoffs to speed us up
Is this another case of "we create code twice as fast and the bottleneck is review, so YOLO, no more bottleneck"? I've yet to see a convincing justification for this. If anything, if you're going full throttle, all the more reason to watch the steering wheel, no?
That said, 8 layers of management is a lot of management, and every line of the message seems like leadership truly believes they are sinking in bureaucracy. Let's see how unneeded those 3 layers they're cutting were.
Seems like a fair assessment. Maybe they should start by getting rid of the people who put that structure in place?
bottom level teams are merged to form larger teams.
At GitLab's team size, that means every manager has 2-3 reports? Yeah, I'd be cutting layers too.
> GitLab has at most eight layers in the company structure (Associate/Intermediate/Senior, Manager/Staff, Senior Manager/Principal, Director/Distinguished, Senior Director, VP/Fellow, Executives, Board).
> [...] You can skip layers but you generally never have someone reporting to the same layer (Example of a VP reporting to a VP).
So they're counting the board of directors as a layer above the CEO.
I'm speculating, but they probably also have an unbalanced tree - you'll often see the IT security chief reporting directly to the CEO (because it's important to keep on top of, and they need authority to do their job) but only having 50 people below them in the org chart.
In some corporations you also sometimes get almost-nonexistent ranks created to smooth over a reorganisation. If a level 5 bureaucrat decides to merge the departments of two of their level 4 bureaucrats, they could demote one of them. Or they could make one into a level 4.5 bureaucrat.
I never really got why they need to be a public company in the first place.
Eight layers total
The GP miscalculated it.
I wonder if they have 5-10 employees per manager at the bottom of the org chart, but a lot of middle managers and manager-like titles mixed through the middle.
If anyone has a VP-level position open, I'm willing to send you my resume. There is a salary level at which I am willing to do work entirely without shame.
Still. Not a huge fan of this announcement or the general ways the landscape is evolving these days.
I'm aware that the defective code was not written by AI but nonetheless, GitLab is what stands between many small organizations and their most precious resources. I was fortunate that 2FA stopped the damage, but what's going to happen the next time? What if my organization is permanently damaged because we taught the machines to go fast and break things, too [1]?
[1] VPN is an option but we're a non-profit with a number of non-technical users, so admittedly we're caught in a balance between making it harder to do things. As much as WireGuard is awesome, there's still a barrier.
I would love to help a non-profit, so I am curious: what are your thoughts on authentik/authelia and others? Might they help in any use case with what you are suggesting? I would love to have a more in-depth discussion!
Also, thanks for working at a non-profit. I am not entirely sure what it is about, but thanks again to you and to all the other hard-working people at non-profits working for a better world!
I think that as a corporation promoting the use of AI, they should actually be AI users themselves. They should just rewrite that laggy UI in Svelte, Solid, or even vanilla JS. Any of those would work.
Having said that, UI gripes aside, it works fine as a less complicated replacement for github.
Also their diffing: they use "..." (three-dot) diffs, and two-dot ("..") diffs apparently aren't available in their GUI. As a git diffing tool, I found this very odd.
"The Machine Stops" by Forster [0], anyone?
Honestly, I can't believe how people repeatedly ignore, or simply don't know, the warning signs put up by those who came before.
Yes, it's science fiction, but so is 1984, Brave New World and Pump Six.
When will we go through something between 2001[1] and Tacoma[2]? Will we ever learn?
[0]: https://en.wikipedia.org/wiki/The_Machine_Stops
[1]: https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey
[2]: https://en.wikipedia.org/wiki/Tacoma_(video_game)
[0] https://gitlab.com/gitlab-org/gitlab/-/work_items/588806
> The agentic era multiplies demand for software. Software has been the force multiplier behind nearly every business transformation of the last two decades. The constraint was the cost and time of producing and managing it. That constraint is collapsing. As the cost of producing software collapses, demand for it will expand. Last year, the developer platform market used to be measured in tens of dollars per user per month, this year it is hundreds/user/month and headed to thousands. Not only is the value of software for builders increasing, but we believe there will be more software and builders than ever, and we will serve an increasing volume of both.
Also notable that the workforce reduction they describe doesn't appear to target engineers - they're "nearly doubling the number of independent teams" in R&D and "removing up to three layers of management in some functions".
What is this based on? The only thing I can think of is AI coding tools but only a few companies do it properly. I don't see gitlab capturing any of that spending
Also the whole "removing layers". Today's prof g market video was about the topic. Afaik it was the Coinbase CEO telling the same. Do these people get together to discuss their talking points? Or are they signalling to investors?
If GitLab thinks they are as famous as GitHub, I don't know what to say. They should have at least positioned themselves as a better GitHub alternative.
None of these visionaries and thought leaders have ever had an original idea in their lives; they just ape each other.
They simply don't have (or didn't have) the skills to scale. They were talking about using Ceph to run things (which gives you an idea of how green their infra team was).
It's slow, large, excessively complex, and not that resilient to failure.
You either want a bunch of NFS machines backed by ZFS on NVMe, with a central jumping-off point that allows sharding (this is critical so that one or more NFS servers can fuck up without killing access to everything else).
Or, pay the money and use GPFS.
Done correctly, Ceph is extremely reliable, resilient, and fast. Once you get over the initial learning curve, dare I say, even a joy to work with.
https://docs.github.com/en/enterprise-cloud@latest/admin/dat...
I have no doubt GitLab has too many employees and can benefit from being a more focused company, but it's tiring reading these layoff posts so chock full of buzzwords. I guess they're desperately hoping if they prognosticate about AI enough it will placate the investors.
The Maillard reaction is very possible in microwaves, but they use microwave-specific crockery. I think the vision was possibly killed by people not wanting to maintain a second set of crockery.
See here for a fun write-up: https://www.lesswrong.com/posts/8m6AM5qtPMjgTkEeD/my-journey...
Perhaps we can liken these auxiliary advances to agents and harnesses in the analogy. In the end, despite the unbridled optimism from certain backers, we never solved the fundamental issue with microwaves: that they use electromagnetic waves for cooking, and that electromagnetic waves have certain undesirable properties for this application.
[0] https://americanhistory.si.edu/collections/object/nmah_10880...
I understand that a lot of people don't have a lot of choice, but I use mine (I actually have a 4-in-1 now, after the old one burst into flames and had to be replaced, and it's somewhat useful as a second oven).
It just made me realize why I don't have those fond memories of my mom's cooking. When we got our first microwave she went all in on the vibe cooking, and it took her years to realize how dumb it was.
I hope my kid doesn't get the same kind of memories about my weekend projects.
You are obviously right and I see examples of it everywhere.
E.g I asked Claude opus 4.7 (the latest/greatest) the other day “is a Rimworld year 60 days?”. The reply (paraphrased) “No, a Rimworld year is 4 seasons each of 15 days which is 60 days total”.
Equally, it gets confused about what is a mod or vanilla since it is just predicting based on what it read on forums, which are clearly ambiguous enough (to a dumb text predictor).
Can you imagine how silly they’d look when everyone realised.
If pointing out the flawed approach to making something more productive isn't productive, then what do you consider to be productive?
> Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction
COBOL was sold to people on the idea that anyone could create something with a fuzzy human-readable description that would result in executable code. That was back in the 60s.
What lessons did we learn?
1) Leaving things to the people who make fuzzy human readable descriptions turns out to be a terrible way to have things implemented.
2) Slowly and deliberately thinking things through before, during, and after implementation always leads to better results.
It's a lesson that keeps needing to be re-learned by people who don't/can't look at things through a historical lens.
It was the same with cobol, as it was with programming in spreadsheets in the 80s, as it was with the nocode movement in the 00s, as it is now again with LLMs in the 20s, and it will be again with a future generation in the 40s.
---
> As is the ability to write long form text, and be so hard to distinguish from real that placing an em dash in your text will cause an uproar on this forum.
Long form text generation that is hard to distinguish from human authored text also goes back to the 60s.
That's when we got the first instances of the Eliza effect.
> You can describe things by their fundamental functions and make many things sound elementary but I find it counter productive given the capabilities we've seen from this technology
The capabilities we've seen are:
- Text prediction/generation
- Inducing the Eliza effect
Your attempt at an analogy will make sense when someone tries to install a house as middle management at some company.
The point, which you know and are being willfully ignorant about, is that it's more complex than that. And you've neatly discarded the detail that they're multi-modal.
I will freely admit though, analogy is useless when interacting with someone who has already made up their mind.
To believe that first you would have to ignore tool calling, ReAct loops, and the whole agent feature. That would be silly.
How?
It all still functions with text prediction
Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed those you still choose to be stuck on the "it's just text prediction" then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
>> Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed those you still choose to be stuck on the "it's just text prediction" then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
Woof, you're sounding mighty aggressive for someone with such a fundamental misunderstanding of the technology you are defending. Have you ever even actually implemented a system around an LLM, or do you practice ~~voodoo~~ "prompt engineering"?
> I can point you to ReAct loops and tool-calling and agent-based systems.
Those are all implemented - quite literally - by parsing the *text* that the LLM *autocompletes* from the prompt.
Tool calling? The model emits JSON as it autocompletes the prompt, and the JSON is then parsed out and transformed into an HTTP call. The response is then appended to the ongoing prompt, and the LLM is called again to *autocomplete* more output.
"ReAct loops" and "agent based systems" are the same goddamn thing. You submit a prompt and parse the output. You can wrap it up in as many layers as you want but autocomplete with some additional parsing on the output is still fucking autocomplete.
If you're going to make such strong assertions, you should understand the technology underneath or you'll come off looking like an idiot.
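For what it's worth, the loop being argued about above is small enough to sketch. This is a minimal illustration only, with hypothetical stand-ins (llm_complete and run_tool are not any vendor's real API): the "agent" is a plain loop that parses text the model generated, runs a tool, appends the result, and asks for more text.

```python
import json

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to some text-completion endpoint (hypothetical)."""
    raise NotImplementedError

def run_tool(name: str, args: dict) -> str:
    """Stand-in for actually running a tool: a shell command, an HTTP call, etc."""
    raise NotImplementedError

def agent_loop(task: str, max_steps: int = 10) -> str:
    prompt = (
        'Reply with JSON like {"tool": "...", "args": {...}} to call a tool, '
        'or {"answer": "..."} when done.\n'
        f"Task: {task}\n"
    )
    for _ in range(max_steps):
        reply = llm_complete(prompt)        # the model only ever emits text
        try:
            msg = json.loads(reply)
        except json.JSONDecodeError:
            prompt += f"\n{reply}\nThat was not valid JSON, try again.\n"
            continue
        if "answer" in msg:                 # the model decided it is finished
            return msg["answer"]
        result = run_tool(msg["tool"], msg.get("args", {}))
        # The whole "ReAct loop" is this step: append the tool output to the
        # transcript and ask the model to autocomplete the next move.
        prompt += f"\n{reply}\nTool result: {result}\n"
    return "gave up after max_steps"
```

Whether you call that "just autocomplete" or "an agent" is exactly the disagreement in the comments above; the mechanics are the same either way.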
No. Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute the tool (meaning, they build command line arguments, run the command line app, analyze output, assess outcome) as subtasks.
And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how they call the tool until they reach the goal.
Yeah, but fundamentally all of this is implemented as next token prediction, given the context (which the tool results are).
Honestly, it's pretty amazing how much we can do with next token prediction, but that's essentially all that's happening here.
Those literally work with text prediction.
If you take the text prediction out of it, nothing happens.
You stick a harness around a text predictor which then triggers the text predictor.
If you think I am missing something then please do point it out.
LLMs are the most successful form of neural network we have, and that's because they are token prediction machines. Token predictors are easy to train because we're surrounded by written text - there's data nicely structured for use as training data for token prediction everywhere, free for the taking (especially if you ignore copyright law and robots.txt and crawl the entire web).
We can't train an LLM to have a more complex internal thought loop because there's no way to synthesize or acquire that internal training data in a way where you could perform backprop training with it.
Even "chain of thought" models are reducing complex thoughts to simple token space as they iterate, and that is required because backprop only works when you can compute the delta between <input state> and <desired output state>. It can't work for anything more complicated or recursive than that.
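To make the training-signal point above concrete, here is a toy sketch of next-token-prediction training (assuming PyTorch; the two-layer "model" is a deliberately silly stand-in, not a real LLM). The only "desired output state" backprop ever sees is the token that actually came next in the training text.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 64

# Deliberately tiny stand-in "language model": embed each token, then map
# each position to a distribution over the vocabulary for the next token.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training text is just token ids; inputs are positions 0..n-1 and the
# target at each position is the token that actually followed it.
tokens = torch.randint(0, vocab_size, (32, 129))   # fake batch of documents
inputs, targets = tokens[:, :-1], tokens[:, 1:]

logits = model(inputs)                             # (batch, seq, vocab)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()    # the gradient exists only because the next token is known
optimizer.step()
```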
image: https://mataroa.blog/images/b5c65214.png
but it says that there are 3 e's in strawberry ;)
Now this is literally something that occurs because it is text autocomplete, and because of the inherent issues of token-based large language models. So you are literally right :D
My point is that AI can have its issues and it can have its plus points (just like text autocomplete, but some suggest it's on steroids).
The issue to me feels like we are hammering it in absolutely everything and anything, perhaps it should be used more selectively, y'know, like perhaps a tool?
> Mark this prediction it will happen
But this historically is a very strong predictor of a poor prediction
Gemini: There is *1* "e" in the word "strawberry".
Seems fine
See: https://fediverse.zachleat.com/@zachleat/116529994444529036
This is like saying that somebody speaking Chinese is just playing out the Chinese Room [1] experiment. The only reason it's less immediately obviously absurd here is because the black box nature of LLMs obfuscates their relatively basic algorithmic functionality and lets people anthropomorphize it into being a brain.
[1] - https://en.wikipedia.org/wiki/Chinese_room
This is not quite accurate. The human lips, throat etc have evolved to be better at producing speech, which indicates that it's not that recent. And that it was a factor in the success of groups who could do it better than others.
It likely started "no later than 150,000 to 200,000 years ago."
sources:
https://en.wikipedia.org/wiki/Origin_of_speech#Evolution_of_...
https://pmc.ncbi.nlm.nih.gov/articles/PMC5525259/
I think, therefore I am. You parrot, therefore you are... ?
Have you ever thought about how you would determine if an arbitrary given entity is intelligent or not? I think you'll agree it would require some kind of test. You might agree that the test would have to involve bidirectional interaction (since otherwise it would be impossible to distinguish an actual person from a recording of one).
Last year this level of ignorance and cluelessness was amusing. Nowadays it's just sad and disappointing. It's like looking at a computer and downplaying it as something that just flips switches on and off.
It will be interesting in the next few years. Assuming we won't be in the 3rd world war thanks to the USA and will have much bigger concerns.
You're grossly inflating the level of contribution from your average software developer. Are we supposed to believe that the same people who generated the high volume of mess that plagues legacy systems are now somehow suddenly exemplary craftsmen?
Also, it takes a huge volume of wilful ignorance and self delusion to fool yourself into believing that today's vibecoders are anyone other than yesterday's software developers. The criticism you are directing towards vibecoding is actually a criticism of your average developer's output reflecting their skill and know-how once their coding output outpaces or even ignores any kind of feedback from competent and experienced engineers.
What I see is a need to shit on a tool to try to inflate your sense of self-worth.
The ones who never acknowledge a mistake even if the process is crashing; the ones who put "return true" in a test so that the test doesn't execute and will insist that you broke their code if you remove the return true and when the test actually runs it fails; the ones who read a blog post about some new thing and decide we need to do like that; the ones who will write code that fails and then be nowhere to be seen when there is customer support to do.
Trying to portray everyone who ever used a tool as the incompetent cohort is an exercise in self-delusion.
Gitlab has been strapped for cash and desperately seeking a buyer to cash out for years.
If anything, the LLM revolution represents an opportunity that Gitlab is failing to capitalize upon. They have a privileged position to develop pick axes for this gold rush, but apparently they are choosing to dismiss themselves from the race altogether.
Gitlab's decision is being taken in spite of LLMs, not because of them. Enough of this tired meme.
It scrapes HN and it works. Ironically, that's why I'm here.
The problem is they’ll do what you ask. And if you are the type of non-curious person who replies “ Autocomplete only 'knew' how to output a scraper...”, then you’ll tell it to make you a scraper instead of ask what your options are for getting HN data.
I feel like that overstates the point quite a bit. There's a lot that's similar: neurotransmitter release is stochastic at the vesicle level, ion channels open and close probabilistically, post-synaptic responses have noise. A given neuron receiving identical input twice doesn't produce identical output. Neither brains nor LLMs have a central decider that forms intent and then implements it. In both, decisions emerge from network dynamics; they're a description of what the system did, not a separate cause (see Libet's experiments).
Now pretty clearly there's a lot that's different, and of course we don't understand brains enough to say just how similar they are to LLMs, but that's the point: it's an interesting thought experiment and shutting it down with a virtual eyeroll is sad.
I claim that a modern frontier LLM can be given simple instructions that make it impossible for a person to reliably distinguish it from a person over a bidirectional text-only medium.
This one stood out to me:
>Machine-scale infrastructure. [...] Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. [...] Git itself is being reengineered for machine scale.
Git itself is so far down the list of bottlenecks that do or could hamper LLM-driven development, even projecting years into the future...
Git has always been one of the biggest perf bottlenecks inside of the product.
First for any scaled deploy we recommended NFS. We were young and dumb and it was too slow. (We’ve all been there)
Then we went to an RPC model with gitaly and even unwrapped some of the git calls inside of that to speed it up.
Just a few months ago we had a large customer with thousands of devs and a large monorepo whose deployments ground to a halt because of a cloning strategy change that introduced an accidental 10x in git calls. Git itself was the bottleneck because it's not designed for this scale and speed.
For enterprises where thousands of developers are contributing code via git to a centralized system of record and firing off 1000s of CI jobs, Git is absolutely a bottleneck.
Now with LLM technologies we should easily expect a 5-20x code volume increase on the conservative side. Git is being stretched to its perf limits.
(Source: see my profile)
Models will only get better with time, not worse.
Demand will keep rising.
It's unlikely, but not impossible - model collapse means that subsequent models would get worse over time, not better.
1. AI free training sets no longer exist. This might degrade quality, although some claim that it will not.
2. Cost. Right now they are burning a lot of money to convince people it's good. But they might not be able to keep it up forever and need to increase prices (which few will want to pay) or degrade the quality to save money.
I don't know, I've seen more big organizations that have a dysfunctional amount of middle management and "meetings about meetings" than ones that truly benefit from that culture.
Tons of middle management that makes no decisions whatsoever.
Everytime you ask a question, they delegate, until you end up at person 1 again and they just can't decide anything.
It's like they all have decision paralysis.
This, like virtually all layoffs, is for economic reasons. Of course you can't say that because that reflects poorly on your growth and makes your investors uneasy and yadda yadda yadda. But what do investors like? Hm? AI!
Oh! Oh!!! This is strategic, you see, so we can use even more AI, yes yes that's right mhm.
they do on the org level. that's not news for anyone who has worked at upper mgmt level in corporations. rule no.1 is you keep your mouth shut about anything there. and of course it's for economic reasons.. it's a business, not a charity to provide lifelong employment for employees who aren't aligned to mgmt goals. Mgmt tells stories depending on who asks. Levels below execute them (by identifying those who aren't aligned).
If anyone at GitLab management is reading this: getting your microservices to run fully stateless in a Kubernetes cluster should be the #1 goal. No disclaimers about potential risk. It's been 5+ years. Get it together. Stop bolting on minor package management features no one is going to end up using anyway.
Setting aside the whole "I'm not going to pretend otherwise" bit, which reads suspiciously like Claude: I don't understand how this is supposed to make employees feel any better. No one knows what's going on and through talking we'll figure it out? Mmmmmmhmmmmmm.
For some people it might actually be worth it, not to solve anything but to talk to someone. It still sucks anyway.
Forgejo is great.
I'm not saying you should never self-host your git server, but it's not for everyone.
Arguments against self hosting have to change as our SaaS overlords are decaying in front of our very eyes.
I get self-hosting for security, compliance, and retention reasons, but for almost everything else it seems questionable for any use I would consider normal.
https://gitlab.com/gitlab-org/gitlab/-/work_items/590689
Users want a product that delivers the value they are looking for, VCs are looking for infinite AI scale, these do not meet. So founders need to present two different values and visions, one for customers and one for VCs.
In a small early stage company you can pretty easily hide each side from the other so you can deliver value to your customers while dancing the VC dance, but as you get larger its harder.
I think founders will endure and VCs will calm down at some point, but there is going to be some suffering along the way.
Oh, and have you heard that they built Claude Code with only 20 people? (Ignore the 12 years of AI research expertise head-start, and that Anthropic now has thousands of developers.)
It’s not clear at all this is the wrong move.
One of the really interesting things about GitLab was that not only did they have employees in a large number of countries but they also published their employee handbook which helped show quite how much work it was to support that:
https://handbook.gitlab.com/handbook/people-group/employment... lists 18 countries right now. I guess they're losing 5 of those.
Here's a permalink to the current version of that page https://gitlab.com/gitlab-com/content-sites/handbook/-/blob/... since it mentions that "Diversity, Inclusion & Belonging is one of our core values" and so is likely to be updated pretty soon!
They even used to have a public payroll.md page detailing how payroll worked in multiple countries - they moved that into their private docs a few years ago but the last public version is here: https://gitlab.com/gitlab-com/content-sites/handbook/-/blob/...
UPDATE: I got the countries piece wrong. The linked OP says:
> Reduced operational footprint: We’re reducing our country footprint because operating in nearly 60 countries does not allow us to give every team member a great experience. We anticipate reducing the number of countries by 30% focused on geos where we have only a handful of people or fewer. Team members who are in good standing and would like to relocate are welcome to do so. We'll continue to serve customers in those markets through our partner network where appropriate.
I said they operated in 18 countries, so clearly my impression was out-dated and incorrect.
Also "We anticipate reducing the number of countries by 30% focused on geos where we have only a handful of people or fewer" suggests to me that it's a 30% cut to countries with "only a handful of people", not a 30% cut to countries overall.
Are they going to rectify this by laying these people off?
Yeah, sure. A couple of years ago it was Covid overhiring.
You know the one thing that is never ever going to be given as a reason for layoffs? The growing salary-productivity gap.
Yes, letting some LLMs "plan, code, review, deploy" will for sure improve quality and depth of innovation you ship.
Two big red flags here.
First git itself is distributed and built for scale.
I guess they mean "gitlab" instead of "git". But such a huge mistake would never go unnoticed.
Are they going to rebuild git??
Secondly: a big rebuild of the monolith into services. Firstly, there is nothing wrong with a modulith. Secondly, a "rebuild" will cause a lot of busy work without immediate value for customers.
And first of all: this announcement is driven by the stock price, not AI. The productivity increase from AI is inflated because they want their stock price up.
Sell Gitlab stock while you can. The leadership team has no clue what they are doing.
Sadly, non-engineering leaders buy into this dogma. AI is very useful, but in my experience it doesn't 10x anything unless you YOLO it.
Sure, you are right. Git allows you to keep distributed state and eventually reconcile that.
Customers are trying to do that at absurd human scales right now without AI. Git itself is a bottleneck for large enterprises with large repositories and large CI configurations.
It may be hard to believe but it’s true.
there're different dimensions for "scale" - like handling large monorepos, orders of magnitude more commits, tighter requirements for latencies (for agentic use, e.g. for agentic history navigation)...
It gives you 10x more errors if you YOLO it ;) especially at a scale even remotely comparable to GitLab's :/
Doesn't really inspire the greatest confidence when they are literally dropping the ball on one of their greatest opportunities, just as GitHub is being ensloppified.
Sometimes I wonder if I am more passionate about my $7/yr VPSes and the websites running on them than these billion-dollar companies are (GitLab has a market cap or net worth of $4.36 billion; the enterprise value is $3.10 billion [0], to be exact).
Break things and move fast should work when you have 1000 users on your website, not 1000 full-on enterprises (probably more, in GitLab's case).
> I guess they mean "gitlab" instead of "git". But such a huge mistake would never go unnoticed.
> Are they going to rebuild git??
These comments make me realize again how you all (who were alive then) must have felt during the pets.com and dotcom mania. Some of these sentences read almost like Onion video titles. It's all so weird at a certain point. I am unsure how to feel about this.
[0]: https://stockanalysis.com/stocks/gtlb/statistics/
New values: Speed with Quality, Ownership Mindset, Customer Outcomes.
In other words, work harder, not smarter, and no more DEI.
The ball is right there, bouncing alone in front of the goal, and they just have to position themselves as "we're the stable ones" to score that market when the exodus inevitably happens.
Nope, full throttle and stimulants, just because.
So many things they could be doing, to make people buy into their services. For example they could simply run campaigns about how they promise to never use customer and user repositories for AI training. Or they could show better uptime statistics. Their CI language is better than Github's too.
If anyone gave me a choice between Gitlab and Github, I would go with Gitlab. But if I had additionally the choice to use Codeberg, I would choose that.
Maybe they are just not looking to grow. If they made such a statement, that would actually be a pleasant surprise. No hunger for "infinite exponential growth", just to impress investors? Great! That's a fat plus in my book!
Gitlab pricing was bonkers. It always felt like their sales team were trying to play gotcha with us over the years with pricing schemes that would milk us for money.
Their pitch is not to you, the dev. But, to the investor class. We are in this funny place in the market where you can make more money by catering to the investor class than to customers. In other words, an upside down world.
I understand the meaning, however, in that they're well positioned by having the company name and domain name, same general way that non-technical people will pay wordpress.com to host their blog/small website because it's very easy, rather than DIYing it or paying a 3rd party.
"Editions There are three editions of GitLab:
GitLab Community Edition (CE) is available freely under the MIT Expat license. GitLab Enterprise Edition (EE) includes extra features that are more useful for organizations with more than 100 users. To use EE and get official support please become a subscriber. JiHu Edition (JH) tailored specifically for the Chinese market."
Personal opinion, but I think a great deal of the people who are presently overloading github with one person created vibe coded projects would be just fine with the "CE" feature set.
I find it a bit concerning that this piece focusses so much on customers and shareholders... I know I don't pay, but perhaps sometime I will, and I am learning GitLab and applying at large orgs as GitLab consultant. All because of CE... So I hope it will stay. It is a nice and very complete on-ramp to EE.
I have to regularly use Azure DevOps and the whole platform is painful, and now is rotting on the vine. I hear there is internal strife at Microsoft between Azure DevOps and GitHub products.
The American corporation and its values are anathema to craftsmanship. You can ******* a **** all you want, it's never going to turn into gold, but your hands will be covered in crud.
We've all heard the joke about two people running from a bear and only one has to be less eaten than the other.
This is a race to the bottom. We shall see who wins.
> Interpersonal excellence: individuals who are good humans, embrace diversity, inclusion and belonging, assume good intent and treat everyone with respect
Were I to have crafted this post, it would have included things like
"We ask our employees, customers and investors time to prove ourselves to you again as we re-commit to listening to our stake-holders and ensure our organization is properly re-positioned to execute our continued plans to deliver the best possible service..."
But instead it comes across as "someone read an article about Amazon's two-pizza team rule and we figured there were worse things to try."
Every IC ought to use the present day as the opportunity to build a nimble competitor to their old employer (or whatever industry incumbents they want).
They're literally setting themselves up for this.
Having been in some of these values meetings, I really imagine it went like this: someone wanted speed, and someone else wanted quality. Sorry, I mean Speed and Quality. Many people said there is a tradeoff between those two things, and only one thing can be first.
Some brilliant businessman: "I know, we'll combine them. We want Speed _and_ Quality." Thus, "Speed with Quality." Tada!
Values are a tradeoff: only one thing can be first. Trying to duck that is stupid.
Also "our velocity is 3x higher than it would be in the imaginary invisible universe where we made worse decisions 6 months ago" is impossible to measure, whereas "we cut a bunch of corners and shipped a piece of garbage on an arbitrary deadline" is very measurable.
Let's pick: Speed-Quality
Errrh... Let's forget about: Price
Does anyone know what caused this?
Very weird to include socially awkward geek in there. But my guess would be that like 99% of dev teams do not have a trans or furry person.
I’ve been in the business and seen a ton of hires on vibes. DEI actually asked people to expand the talent search, not hire anyone unqualified (which is what the anti-DEI folks are desperate to have us believe it did).
I predict some major EEO lawsuits will eventually bring the pendulum back in the other direction because my sense is that the return to vibes hiring (and RIF-ing) is resulting in very actionable discrimination cases.
I've seen many cases where HR stalls hiring until the most qualified candidates move on, prefilters insufficiently "diverse" candidates from the pool presented to teams, or implements internal quotas to meet external funding or contract requirements.
Not to mention the actual external requirements for "diversity" from public tender process, government backed funding bodies, and politically protected mega wealthy.
With respect, it seems like the hiring managers you were complaining about above weren’t the only ones operating mostly on vibes.
I’ve worked with several excellent “just leave me alone” sysadmin types.
Perhaps I'm missing something here.
To me "individual contributor" means anyone who is NOT: A (technical) "Lead", "Chief", "Architect", or (possibly) "Staff" anything, and has no management or team-leader responsibilities.
It's not like (most) hiring managers put "not a team player" in the pro column.
For example: someone not always looking into your eyes while talking can be perceived as "rude". Same for wearing noise-canceling headphones in a talk-heavy environment. Oh, you don't drink alcohol during the "optional" Friday-afternoon company mixer? That's just weird. Want to have a day off for Eid rather than Christmas? Wellll, you did ask for it six months in advance and we did approve it already, buuuuut Dave planned a last-minute meeting which conflicts with the mandatory team meeting, so we moved the mandatory team meeting onto your day off... We'll just pay the hours you spent doing first-line support during Christmas in cash, okay?
https://onlinelibrary.wiley.com/doi/10.1111/padr.12641
heres an article that discusses how inflated diversity could possibly be a cause of social tension. the article's abstract concludes with a shrug ('too many factors!') but it does provide links to research papers arguing both for and against this case.
on the surface it seems pretty clear to me. behaviour is encoded in genetics. if one were surrounded by the same group for a few thousand years, they would share a common base of encodings, therefore social behaviours could be assumed to a higher degree. reference behavioural encodings drastically diverge across cultures (as embodied by religious value sets, or at a different meta level, the idea of low trust vs. high trust societies). based on this drastic divergence, predictions made about one's neighbour scale downwards in accuracy relative to increased cultural diversity.
so i see that jacking up societal entropy leads to lowered societal cohesion. but thats just my stance and id love to hear yours.
diverse, millenia old, genetically encoded behavioural structures exist in our shared reality. id love to discuss this idea and the exact types of behaviours that can be encoded, down to the generational timespans required for encoding. that way we can talk about my idea in objective good faith.
'its all in your head' isnt objective good faith. applying the golden rule, you clearly accept bad faith ... man you couldnt tolerate a dissenting idea even momentarily before bringing out social ostracization and logical fallacies! sounds pretty similar to the behaviour of a racist, were you projecting?
that was said facetiously. im not trying to accuse you of anything, rather to show how it feels to be accused. to conclude i think its pretty easy to predict what my neighbours are eating for dinner at home and pretty hard in the city so youre gonna have to try a bit harder to convince me that the evidence of my eyes and ears is wrong.
The goal should be to hire the best team for the use case, regardless of gender/race/culture/background.
It was never trumping skill. This is just a willful rewrite of history perpetuated for some political goal.
The goal was always to ensure that skill had adequate opportunity to be displayed without bias.
See all the Falsehoods Programmers Believe About Names/Addresses/Birthdays/Phone Numbers/Time Zones/etc, for example. Do you want a backend engineer who designs a 64-character ascii text field for legal name and have everyone nod in agreement, or would you rather have one who knows that it isn't going to work for their cousin "Pablo Diego José Francisco de Paula Juan Nepomuceno María de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso"?
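A tiny illustration of that point, with hypothetical validators (not from any real codebase): the "64-character ASCII" style field quietly rejects perfectly real names, while a permissive one simply stores what the person typed.

```python
# Hypothetical name validators, for illustration only.
def validate_name_restrictive(name: str) -> bool:
    # "64-character ASCII" field: fails on long Spanish compound names,
    # on accented characters (José), on CJK names, and more.
    return name.isascii() and 0 < len(name) <= 64

def validate_name_permissive(name: str) -> bool:
    # Store the name as the person writes it; only reject the truly empty.
    return len(name.strip()) > 0

picasso = ("Pablo Diego José Francisco de Paula Juan Nepomuceno María "
           "de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso")

assert not validate_name_restrictive(picasso)   # rejected: too long, non-ASCII
assert validate_name_permissive(picasso)        # accepted
```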
> it's really hard to make a case for why DEI concerns should trump traditional evaluation metrics for skill
It doesn't. The goal of DEI has always been to attract a diversity of perspectives, all else being equal. Nobody ever proposed choosing a woefully unqualified diverse candidate over an obviously-qualified Generic White Guy. The only people who would oppose that would be the unqualified Generic White Guy who just happens to be the nephew of the CEO's golf buddy.
Hiring someone on the off chance that their ethnicity gives them some unique, critical unknown unknown that will pop up half a decade down the line resides in the same mental space as a programmer writing `if (5 == i)` in case a future programmer accidentally deletes an =. It's just speculative defensiveness whose efficacy is simply not well established by actual research. And, in my view, it just works to confound actual signals that, evidently, GitLab and other employers feel get unfairly overshadowed when emphasizing explicitly pro-diversity hiring policies.
We should just get a representative sample of the population and give them equal say in the design of the plane, engines, etc.
https://www.mckinsey.com/featured-insights/diversity-and-inc...
Landing page:
https://www.mckinsey.com/featured-insights/diversity-and-inc...
It's obvious why this is the case if you sit down and think about it. Echo chambers of like-minded individuals can't understand customers as well as a workforce of people who represent the diversity of those customers.
This isn't just diversity of race or gender, it's also diversity of thought and background.
Also critical and under-emphasized: the E and I in DEI, equity and inclusion. Power distance and lack of inclusion can railroad companies into giving the people with the most power the most influence on decisions, rather than giving the best ideas a chance to breathe.
In business a classic example might be "men designing women's clothing." How are you going to understand your customers if none of your employees and leadership resemble those customers? Perhaps you can figure it out and make some decent products but your competitor who has more diversity in their workforce is likely to outperform you, which is exactly what McKinsey's studies have demonstrated.
I will also point out that the only reason anyone started questioning this obviously true business concept and turning opinion against DEI is that the Republican Party's strategists figured out they could appropriate and leverage the term "DEI" and attach it to the latent reactionary racism that much of the US still holds dear.
You can get away with saying "I don't like DEI" in public but if you say "I don't like black people" or "I don't think women should get hired for important roles" [1] that is obviously not acceptable, even though a large percentage of Americans feel that way. Right wing media twisted a largely innocent term into a useful dogwhistle.
[1] https://journals.sagepub.com/doi/10.1177/1532673X251369844
You might not like it, but this is what peak performance looks like.
Okay, I'll bite. Why is it a strength, and why is it the greatest strength?
All people are equal, so it shouldn't matter if you have an all Asian team, an all black team, or any mix of all races.
When there is a team like that, there is invariably sniping about how "X only hire their countrymen".
And all people aren't the same; you want a mix of minds and skills for most types of work. I'd totally hire someone who couldn't really do that much directly but was fun to be around, connected introverts whose ideas have some (potential) synergies, and generally made the group more productive overall.
Especially in business, the actual (not the managerial) judgment is the collective judgment on the whole groups output and actions by the market. Forging a high performing group out of different people is not the same as maximizing the median metric on some individual test of skill. Like quality, it's a bit undefinable, tho unmistakable when you experience it.
Corporate DEI was never real. There's no "push against" it, simply because there was never a genuine push for it. Large companies don't have moral values - if they did their CEOs wouldn't be billionaires.
It’s not like all surgeons and astronauts were white males for a long time out of inherent superiority.
That’s totally illegal and discriminatory but companies were not facing consequences for it under the Biden administration. The constant injection of DEI politics all over society - at work, in movies, in ads, etc - led to a backlash and personally I think it is one of the things that led to someone like Trump being re-elected. And this administration is very against DEI ideology. That’s one reason corporations quickly abandoned it - they didn’t want to face legal scrutiny now.
Another is that DEI culture produced no positive results, as expected. Companies already had incentives to hire the best employees they can. If you change that with other incentives thrown in, it’ll make things worse. And ten years after DEI began to appear everywhere, it was obvious it produced no benefit at best, and led to worse teams at worst.
Another reason is simply that a lot of the activists pushing this type of ideology grew out of the activist age group. And I think many of them likely don’t hold those beliefs as strongly anymore. But either way, younger people are different. Especially young males who are more conservative.
All of that, and other things besides, have led to DEI being removed or at least de-emphasized.
Tell this to the people enjoying unearned privilege under DEI policies.
But you don't have to dislike yourself to recognize systemic unfairness that you benefit from and want to help change it.
I've noticed that the more a company pushes on ownership the more difficult it is to actually execute it.
Every company I've worked at hammers the "ownership" idea and I hate it so much. It's how they drive a culture where employees are expected to invest themselves into "owning" a problem space that can be taken from them at any moment. It's how they trick you into doing extra work that's not in your job description.
Unless you're ACTUALLY an owner, don't be fooled by an "ownership" value.
It's the norm at Big Tech these days. Directors and VPs take all the glory if it goes well while ICs, team leads, and people managers get all of the blame if it doesn't. When the charlatans get exposed, they bounce on to the next company with their charlatan friends. Rinse and repeat while swapping RSUs for index funds, retire with >$10m before 50. If we stopped allowing this to work in our industry, it wouldn't be such a common thing. Unfortunately, with how everything is these days, these people are getting hired on vibes and bravado.
The part I'd missed was that, as middle management, he didn't have any real authority himself... you live and you learn, I guess.
How? Did the bozo get butthurt over being exposed?
All the responsibility is still yours though.
One really must wonder if they ever hear themselves talking or read their own prose. Maybe they do, but simply don't care at all?
I think the same group of management consultants does a round of the industry, and in short order every company is using the same duplicitous language of ownership, design thinking, customer-first mindset, cloud first, cloud native, AI native, enterprise 2.0... and on and on it goes.
1. https://fortune.com/2026/02/13/costco-defies-trump-on-dei-bu...
GitHub is already the main platform for random open-source projects, and that's unlikely to change any time soon. GitLab's selling point is essentially "Github, but not by Github". They would do Just Fine offering a highly-restricted free account for the handful of hobbyists who care enough about leaving GH but don't care enough to go to Forgejo & friends and for the people doing evaluations, offering free credits to the few high-profile FLOSS projects who accidentally end up on GL-the-SaaS instead of self-hosted GL, and for the rest just focusing on paid corporate customers.
Where do you find those, seriously? That might've been the case a couple of years ago, when they gaslighted people and played on their feelings, but now the gloves are off. AI bros are literally posting about lack of sleep, dopamine hits, vibe coding on the toilet/on a walk/while watching TV, FOMO is through the roof everywhere, prophesying the doom of SE, etc.
Employees tasked with doing 10x more work with less help don't even have to feel bad about it happening. It'll also create employment opportunity in disrupting their old employer.
These companies are willingly signing up to become IBM.
Of course, once you have a big incident, then the value of more human review becomes obvious.
I seriously don't know how people are working like this now. I'm on my ass looking for work and in the last month it feels like everyone has completely lost their minds.
At least companies like Coinbase took principled stances against forced DEI and employee activism earlier than everyone else. Doing it now seems weird, because if it does become mandated again, they're going to look so phony.
Mandates? There is this weird revisionist history that DEI was a Biden era invention that all these companies were forced to roll out in January 2021. These programs were simply the latest evolution of prolonged and steady cultural shifts. I remember attending events trying to promote diversity in the computer science department when I was in college 20+ years ago. Killing DEI isn't wiping out four years of progress, it's attempting to wipe out decades.
The obvious decline started around 2010; coincidentally also the era of the rise of SJW-ism and nontechnical derailing drama. Once the diversity quotas started appearing, the inevitable results were obvious.
Whether you lean left or right, the objective truth is that a Democrat added DEI mandates and a Republican removed them. I didn't say anything about whether that is right or wrong, but the fact that companies seemingly embraced DEI and then abandoned it so quickly once a Republican removed it means they really didn't care about DEI at all; it was all phony. It just goes to show that when they start praising themselves for being "moral", it's not because they actually care, it's because they are forced to, and they don't give a shit about anyone.
It doesn't make sense for it to be 40% of their values, especially if they're losing money (or very close to it).
I am not sure if you had implied it, but that would align with my experience as well: places that tout diversity were the worst places to work (as someone who is seen as 'diverse'), while the best were the ones that treated everyone the same and expected everyone to pull their weight.
I absolutely despise people treating me differently because of who/what I am rather than because of the work I do. I will take mildly inappropriate good-natured jokes over head pats every day of the week.
I highly doubt it considering that you can’t even spell it right you incompetent pillar
(Saying this as a strong advocate for diversity and inclusion, lest there's confusion)
That said, some management people say it's important for a large company to write down the values that they actually practice. I can see several reasons why it's good, but I haven't ever seen anybody go and do it, so IDK.
DEI isn't mandatory, so an org heavily invested in DEI training probably had serious issues in the first place (whether they end up on the other side at the end of the trainings being another question)
That's different from putting it as a core value, though. Most companies have some kind of "make more money with less resources" stated value, and I don't think we see it as an issue?
Also, idk why people view quotas as all of "diversity". I've literally never worked at a place that considered this but I see people mention them all the time on the internet.
Of course, it's statistically most likely that any individual would belong to the much larger latter group, but stats like that only apply to other people, right?
Worse, it's a zero-step thinker's solution. Step zero is a merit-based system; step one is for the people with motels on Boardwalk and Park Place to ensure they can never lose again by rigging the system to ignore merit in favor of capital.
I'm not a random variable, I'm a specific human. Predicting future outcomes needs to take my personal traits into account. Otherwise you get absurdities like "statistically speaking, when you join a family reunion, 15% of the people you see there will be Indian, and another 15% Chinese".
Someone I'm close to is going through this right now. They work at a place that officially highly values "inclusion", and their employer's website is dripping with virtue-signaling language about it. But that someone is disabled, and in fact there's nobody at the organization who owns accessibility issues. Disability accommodations are haphazard, and often not timely. Why? Because no one owns them. They just get punted to an internal employee affinity group of disabled people who don't have a real chain of command, a real budget, or even a real prerogative to do accessibility work, let alone meaningful power; many of its members are routinely chastised by their bosses whenever they dedicate any time to solving access problems within the company. "That's not what we pay you for", "that's not your job", "I need you on this other thing", etc.
Meanwhile the organization receives public accolades from meaningless business-press organizations as a "great place to work" or even "great place to work for people with disabilities".
I think it's fine for companies to value diversity, and to value it publicly. A little virtue signaling is fine, as a treat; it may actually repel nasty people, encourage good behavior, or make employees feel more welcome sometimes. That stuff is good.
But there's also a real possibility that a company making diversity an explicit value results in lots of energy going into activities that let that company's executives pat themselves on the back about how good they are without actually doing much for inclusion. I wouldn't take any sizeable company's stated values too seriously, including that one.
Then again I don’t even know what it means for something to be a core value. What is the practical upshot of “collaboration” being a core value of a company? Were people not collaborating before?
Yeah I think they're mostly useless. At least you definitely don't get core values by just declaring that they are your motto. For example Amazon is pretty widely agreed to have customer satisfaction as a core value. They didn't get it by saying "Our core values are customer satisfaction...".
Essentially, what's happening here is that this right wing political media saw an opportunity to latch onto resentment of employees whose companies were just trying to change employee behavior for the better.
Companies are well aware that implementing DEI successfully will financially outperform other companies who don't. McKinsey has found this to be true repeatedly. But of course, people don't really want to hear these kinds of things and a lot of socially conservative people don't like being told that they need to learn how to interact with that queer looking person they'd rather just avoid. When Jim and Bob want to hire a new employee they just want to hire another Jim or another Bob and be left alone.
You know how your company puts meetings on your calendar where they preach about wellness and exercise and stuff like that? Just because they are annoying meetings doesn't mean they're wrong. You should focus on your wellness and exercise. Same deal with DEI: it's obviously beneficial to everyone, but America has a whole lot of people who really don't want it.
We are within the same lifetime as full blown segregation, redlining, of women being disallowed from opening bank accounts without spousal approval. There are people still alive from that era. Your great-great-grandparent may have been alive during legal racial slavery.
[0] https://m.youtube.com/watch?v=bEghu90QJH4
Re-read the thread. They made a joke about acronyms.
Secondary comment - 'suck it fascist'
Third comment - 'fascism and communism would both get rid of dei'
Whereabouts did the room get misread?
Also, in the current environment, I don't see how anyone can look around and argue that merit-based hiring is a norm anywhere. Even at hotspots of anti-DEI, "merit" often means "friend of a friend" or similar.
And that the discussed-to-death diversity hiring quotas are not its entirety, or even necessarily a part, of it.
Merit being not a threshold but a range in actuality probably also plays a role (along with what utter theater the typical job interview really is).
> I wouldn't want to be hired based on something so meaningless.
But that's kinda the point of it all, isn't it? That it's supposed to be empowering the disadvantaged / marginalized. If your background does not put you at a disadvantage, there's nothing to compensate for, then it would indeed be meaningless. But if there is, and you made it, then that is by definition extraordinary. So it is meaningful.
There's definitely a question about whether they'd be stealing your thunder by this, but I'll leave that to an actual aficionado of the topic. Not exactly the expert on all this.
Tough crowd.
[0] and funnily enough, I agree! I just also think that if you believe there's a way out of this that isn't racist, you're a moron.
I don't know, looks like you're quite the natural yourself. You both manage to be ashamed of your ethnicity, and hate another.
What for? You seem to be enough of a victim already.
Or sorry, do you have a preferred slur?
---
It's incredible how far the culture war has rotted the North American mind. I literally just joined in to offer the guy the perspective as I understand it, one I don't even necessarily find right (as I explicitly highlighted), but whose facets I do appreciate.
But oh no, John Convenient-Idiot-Illiberal saw the right trigger words and had to spiral into a tirade with their sob story. You sure showed us dude. Hope that middle class money affords you a therapist. You sure could fucking use one.
> The planning is happening openly, including a voluntary separation window. That creates real uncertainty for our team over the next few weeks, but we believe the outcome will be better for it.
There's no good way to execute lay-offs; my preference would be to rip the band-aid off. What use is it to do it in the open, unless they plan on having gladiatorial matches to keep your job? Otherwise it's just a painful game of Duck Duck Goose.
The mediocre people who dread looking for a new job during a hidden recession aren't going to leave. They can't afford the risk of not being able to find a new place of employment before the severance pay runs out.
It's not that different from making it part of the process in the first place.
Neither of these groups are valuing long term expertise
What Gitlab is announcing here is that employees need to apply for a separation, at a yet-to-be-determined time under still-unknown terms, without a guarantee of acceptance, in the next 7 calendar days. Much different and just so much worse.
Plenty of time to whip up a dead man's switch.
Having to rewrite all my CI will suck but will be worth it.
I wish them the best of luck with that plan. Middle management is where the institutional knowledge sits on how to actually get shit done despite challenges & broken processes/systems.
It's an even worse plan than eliminating juniors.
They don't cause the broken processes. They are the symptom of a broken executive process. A fish rots from the head down, and the people at the top get exactly the kind of company that they ask for.
Really? In my experience it's the rank-and-file employees who have this knowledge of how to get on with it without ceremony and politics. And the broken processes and politics are created BY the middle managers.
- when you see the word substrate in corporate speak, you know where that’s from…
My manager has started speaking like this. He showed a slide recently which had the words AI and Quantum nearby
That's true, but it's interesting how FizzBuzz was said to be the bête noire of the average dimwitted software developer, and how much cutting-edge engineering organizations used to emphasize code in their recruitment processes.
If writing code is being replaced by "engineering judgement" it's going to need a much smaller cohort of developers. Too many opinions spoil the broth, after all.
Could someone explain it?
If you have a lot of new stuff to build, and if you're not currently losing money, why start a new initiative with a layoff?
My guess is they are doing this to prep for an acquisition. Probably by an AI company or Datadog or similar.
I think Staples is a guy you bring in to sell the company, not run the company.
https://handbook.gitlab.com/handbook/values/
Yes, and the people who are all-in on agentic AI are, in practically every example I’ve seen, not that. They’re the jackasses giving Claude root access to their prod DB and then writing a blog post about how much they’ve learned from their mistake.
I can't seem to get past this - all these decisions (and a workforce reduction :() are the result of a few days of pondering? I've had stomach aches that lasted longer...
> Agents open merge requests in parallel, trigger pipelines around the clock, and push commits at a rate no human team ever did. Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. We're doing a generational rebuild of the underlying infrastructure to handle agent-rate work as the default. Git itself is being reengineered for machine scale. The monolith is giving way to modern, API-first, composable services. And agent-specific APIs are being built so agents can act as first-class users of the platform, not as bolted-on consumers of human-shaped interfaces
Is there any broader consensus or information on this? Git doesn't scale? is being rebuilt for agents?! Monoliths are out and services are back? Humans are second class citizens now (human shaped interfaces - bad!!)?
What the hell are they planning to do in there at Gitlab?!
All free software projects should leave GitLab immediately. Why should we support the IP thieves?
> GitLab’s six core values are Collaboration, Results for Customers, Efficiency, Diversity, Inclusion & Belonging, Iteration, and Transparency, and together they spell the CREDIT we give each other by assuming good intent. We react to them with values emoji and they are made actionable below.
Since those terms don't speak for themselves individually, it's worth seeing what they're supposed to mean to get a sense of what GitLab is forsaking now. Each section is actually pretty lengthy, so you should go look and skim for yourself.
Here's the page: https://handbook.gitlab.com/handbook/values/
And here's an archive from yesterday, for when that changes: https://web.archive.org/web/20260510150031/https://handbook....
GitLab's "internal" workings are surprisingly public, so you can just look at the git history yourself: https://gitlab.com/gitlab-com/content-sites/handbook/blob/ma...
GitHub is publicly destroying itself in a desperate attempt to realize Microsoft's AI dreams, and as its main competitor your response is... to do the same?
Rather than going for a "Humans first, robot assistants welcome" approach which promises to deliver things like stability, reliability, trustworthiness, and human connections, they decide to go all-out on firing the humans and letting bots handle things like code review while explicitly shifting the existing human-first company values towards making the remaining humans responsible for the bot's mistakes.
They could've chosen to market themselves as the sane safe haven for the GitHub exodus. Instead they chose to go down in history like Google abolishing "Don't be evil". But hey, I bet chanting "AI! AI! AI!" (albeit quite late to the game) will deliver a very solid lukewarm increase in shareholder value!
Like, I know there are actual reasons and incentives here for the ever-present AI pivot. But I think they're stupid and short-sighted incentives.
"So...you decided to throw away what distinguished you from your faster, more stable competitor?"
I guess someone will be selling enterprises something that lets them say, "We're doing AI too!" Might as well be gitlab?
Email me subject “gitlab” if interested - thomas@ our domain (I am the cofounder)
There's a lot of cool things happening between Gitea/Forgejo, Tangled, and Radical, but I doubt the latter two have any significant usage beyond OSS hobby projects. I'm not sure if the former two do, either.
Gitlab is a terrible company, period.
Source: I'm ex-GitLab
They seem to be mostly reducing headcount of managers and claim (supposedly) to be prioritising engineering.
On top of that, their redesign sounds interesting - they want to adapt the platform itself (and its concept) to deal specifically with how AI "users" will code and submit changes (and the rate and interaction model that implies) vs humans. We'll see how this plays out, but this doesn't sound like a bad idea to me at all (assuming humans, of course, still get priority).
Reduce the workforce by 30%? I don't know, dude, you didn't convince me.
"We're firing a bunch of people because we think we don't need them anymore due to AI and we'll make more money without them."
There are times when businesses must fire people to stay afloat and it's a business that objectively needs to exist. This isn't one of them, so don't waste everyone's time with your BS, please.
Until I got to "One platform, three modes." Then my brain just pattern-matched "AI slop" and the entire post dissolved into meaninglessness for me.
I don't know if I can stop my mind from reaching this conclusion. I'm sure someone at GitLab made some effort to carefully edit the post... but the sense that it wasn't entirely rooted in a human who'd worked out how this stuff goes, and that lots of it was clearly written out by AI, just made my instinct go "this isn't worth paying attention to after all".
"Act 2" for crying out loud, get out of town.
"We over-hired, we're ram-packed full of managers pinging each other on Slack all day and need to cut costs to sustain our operation. We think GitHub's shit and we want to be a nimble org with a fighting chance at eating their lunch. We're also gonna provide 1000 free runner hours/mo to open source projects that move from GitHub to gitlab, and we're gonna make project namespaces on gitlab.com a first class thing like GitHub did"
Gogs https://gogs.io/ (Gitea was originally forked from this, btw)
Forgejo https://forgejo.org/
Self hosted or cloud hosted. Also excluding Github because, please just fracking don't.
Ah, yes, finally GitLab will have the same uptime levels as GitHub.
If I had any inkling of giving GitLab a try, this killed it.
"We did nothing wrong, but ended up in the wrong shape!"
Software stocks won't win longterm if their value proposition is "we vibe code now".
It's Ruby, which is pretty horrific, but still, I think there was probably something not quite right in your setup, because it isn't normally that slow.
>Once approved, our new bonus program will give every team member who isn’t on an incentive compensation plan or bonus plan today, the opportunity to earn a cash bonus based on their individual performance, targeting 10% of salary, awarded at their manager’s discretion.
LOL. So basically buckle up and do what you're told and grind. And hope your manager likes you or you'll get nothing.
Aside, none of these announcements even attempt to make sense.
GitLab's TAM is exploding, demand is through the roof, LLM tooling is making each IC more productive, and to capitalize on this moment GitLab is
... "transparently restructuring" by asking employees to quit so they don't have to lay off as many...
Hmm, does the CEO of — checks notes — “GitLab” know what Git is?
Uh, if this is what I think it means, I wouldn't trust using a product where their company thinks that approvals for reviews can be automated.
Funny enough it’s not the agentic pivot or AI injection that’s sending me running, though, but the dropping of DEI from their values. Queer folk are still out here fighting tooth and nail for basic opportunities to put roofs over our heads, PoC still out here getting harassed and harmed by cops, disabled folk still struggling for basic accommodations so they can contribute rather than languish. DEI isn’t something you pick up when the popular movement swings towards it as a method of convenience, it’s a value you have to live by especially when times are tough and countries harass you for it.
Fuck you, GitLab.
What we are witnessing so far has been just the tech world's reaction. As typical companies catch on to the agentic era, we're going to see more layoffs. Part of it may be due to "unlocked productivity", but more of it will be to make room in their ranks for hiring a more AI-native workforce, which will also be scarce at the beginning.
I think we should get ready to see a very different kind of talent war, and at a scale and pace never seen before.
You can always tell when the title is incredibly vague or bereft of details (e.g. "An update about our product") that it's going to be some flavor of either lay-offs, shutting down, or other enshittification.
I think you need to explain it like it’s a bash script else I don’t think you understand it.
(Ironically, if this article were the prompt, I don't think an agent would code it up the way you are thinking)
Imagine if gcc / clang decided to let agents implement new features without a lot of checking..
What can go wrong.
Now GitLab announces it will have to fire people - the AI slop cuts away at financial gains here.
AI slop is killing everything.
almost like a copy of my post :) https://news.ycombinator.com/item?id=47982975
We've seen these tech waves several times - C and COBOL instead of ASM, CAD/4GL, template generation, Visual Basic and the like (good old Delphi), Java (which allowed a lot of mid-inept people to write compilable, non-immediately-crashing programs), the spread of Python, and now AI. Every time we get an expansion of the industry, and every time there are glorious promises which get delivered on modestly. The point here is that they do get delivered on.
And with AI I suppose it will be similar, though much better than before. In those previous waves the human brain was the limit. This time we throw that limit away from the start - nobody will be able to comprehend the sheer amount of AI-generated code. Yes, that approach will hit some limit down the road too, of course...
... so where's the delivery?
I have no doubt that AI is making some programmers quite a bit more productive. But if it is even 10% as good as all the marketing claims, we should be seeing an explosion of new tech startups, and a huge increase in feature shipping rate and number of bugs closed. Why isn't this obviously happening? Where's the next Dotcom Boom or Cloud SaaS Explosion?
What I am seeing instead is million-line AI slop pet projects whose sole "user" is its developer, and large companies falling over each other to enshittify their products. If there's no genuine user value being delivered, who's going to pay for those thousand-dollar-per-month developer tools?
I see it isn't your first rodeo :) So, in the Dotcom era companies needed huge financing for hardware, and that money was the main limiter; in the Cloud SaaS era small teams with relatively small financing, mostly for salaries, were able to deliver large - AirBnb, Uber, WhatsApp, ... - and the employees, their brain abilities, and their ability to work together were the main limiter. Now with AI we don't have these limiters. I'd say the slopped-up Claude Code and OpenClaw are examples of the new wave that is just starting.
>large companies falling over each other to enshittify their products.
Oh, yes, with each wave the software gets even more sh.tty than before, and this time I think we're really in for a shock to our imagination of how sh.tty it can get. All these datacenters here, and later in space, will need some slop to churn through :)
My bet is that we'd not have a software as a static set of bits existing for more than one execution. I think we'll have Just-In-Time software. An ephemeral one. It will be generated on the fly for specific task and discarded after. That will keep those datacenters busy at least for some time.
Another storyline I expect, with some horror, is the merging of the coming boom of actual physical robots with the boom of AI-slopped software - that should be fascinating :)
It would be irresponsible to treat it as completely ephemeral though; clever tooling would make it easy when you remember "I already solved this issue 3 months ago, let me pull that back and reuse it."
What terrifies me is doing it with the current slopbox user experience. From a UI perspective, it's a clumsy system that discourages developing mastery in favour of guesswork and gacha. (When you said the wrong thing to a classic command line, it at least told you so rather than trying to stagger along with it.) And as an executing tool, it's simply sluggish-- once you've expressed what you want, Claude takes minutes to do what a regex does in milliseconds.
I wonder if the latter is fixable-- pre-configure the bot to generate answers as reusable code instead of slowly pumping the changes themselves.
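A rough sketch of that idea (the task, identifiers, and paths here are hypothetical, not anyone's actual tooling): instead of re-asking the agent to apply the same edit interactively every time, have it emit a small standalone script once, review it, and re-run it in milliseconds whenever the task recurs.

```python
#!/usr/bin/env python3
"""Reusable helper an agent could emit once and a human could keep re-running:
rename a deprecated function across a source tree (hypothetical example)."""
import pathlib
import re
import sys

OLD, NEW = "get_user_data", "fetch_user_profile"  # illustrative identifiers

def rewrite(root: str) -> None:
    pattern = re.compile(rf"\b{re.escape(OLD)}\b")
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(encoding="utf-8")
        updated = pattern.sub(NEW, text)
        if updated != text:
            path.write_text(updated, encoding="utf-8")
            print(f"rewrote {path}")

if __name__ == "__main__":
    rewrite(sys.argv[1] if len(sys.argv) > 1 else ".")
```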
For years I've been telling people that every office worker should be able to do at least some programming, just to avoid ever having them spend several days manually repeating the same handful of steps on a large set of data.
I can 100% see AI taking over this market. Teaching office workers to write half-decent prompts is probably easier than teaching office workers Python. But you don't need a $1000/month subscription to write barely-good-enough-to-run-once one-off scripts, and you can't build a business solely on ad-hoc scripts.
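For a sense of what "barely good enough to run once" means here, a minimal stdlib-only sketch (the column and file names are made up): the sort of thing that replaces days of copy-pasting rows by hand.

```python
import csv

# Keep only rows whose "status" column says "active".
with open("customers.csv", newline="", encoding="utf-8") as src, \
     open("active_customers.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get("status") == "active":
            writer.writerow(row)
```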
> the employees, their brain abilities and their ability to work together were the main limiter. Now with AI we don't have these limiters
Was it? Don't we?
There has never been a shortage of college kids willing to throw together MVPs. Sure, hacking together the bare minimum of business logic with auto-generated Rails code and a $20 Bootstrap template during a hackathon is being replaced by an afternoon talking an AI into generating a Tailwind-styled SPA in whatever Javascript framework is fashionable this week, but what does it really change? Writing MVP-level code was never the hard part.
The hard part is the engineering behind making it scalable, extendable, and durable. That's still staying the same: you're now just giving the prompt to an AI rather than a junior dev. If anything, having to deal with inept managers now sending full-blown AI slop proposals rather than blabbering a handful of buzzwords and leaving the professionals to fill in the rest is going to slow down our ability to work together.
Things like long discussions over formatting that should just be enforced by linters, pushing non-idiomatic patterns despite official docs and tooling recommending otherwise, or turning simple problems into meetings scheduled “for next week”, "in two weeks", "let's have a meeting and invite everyone" instead of just fixing the issue and opening a PR. Which sometimes takes 10 minutes!
At some point it starts to feel like responsiveness and initiative are treated as threats rather than strengths. Autonomy and ownership matter a lot more than people realize. I wonder what that'll look like!
I've done some organizational consulting in the past, often trying to help companies understand why their employees don't trust management. I suspect the powers that be thought that post was decent, and I think the GitLab survivors will likely ignore most of it. And I don't know anything about what's going on there. But if you told me GitLab employees were made MORE nervous by that post than LESS, I would not be surprised.