Oh wow, a 23-page write up about how the author misunderstood AWS Lambda's execution model [1].
> It emits an event, then immediately returns a response — meaning it always reports success (201), regardless of whether the downstream email handler succeeds or fails.
It should be understood that after Lambda returns a response, the MicroVM is suspended, interrupting your background HTTP request. There is zero guarantee that the request will succeed.
1: https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtime-...
It took me a couple reads of the PDF but I think you're right. The author creates an HTTP request Promise, and then immediately returns a response, thereby shutting down the Lambda. They have logging which shows the background HTTP request was in the early stages of being sent to the server, but the server never receives anything. They also have an error handler that is supposed to catch errors during the HTTP request, but it isn't executed either. The reason for both seems quite obvious: it's completely expected that a Lambda being shut down wouldn't finish making the request, and it certainly wouldn't stick around to execute error handling code after the request was cancelled.
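Presumably the handler looked roughly like this. This is my own reconstruction of the pattern, not the author's code (they never published it), and the email endpoint is made up:
```
// Hypothetical reconstruction of the pattern described in the PDF, not the author's actual code.
export const handler = async (event) => {
  // Kick off the "background" email call but don't await it.
  fetch("https://email-service.example.internal/send", {     // placeholder endpoint
    method: "POST",
    body: JSON.stringify({ to: event.email }),
  }).catch((err) => console.error("email failed", err));      // never runs if the sandbox is frozen first

  // Returning here ends the invocation. The MicroVM is frozen and the
  // in-flight request above may never complete, yet the caller still sees 201.
  return { statusCode: 201, body: JSON.stringify({ message: "created" }) };
};
```
Which lines up with what they observed: logging shows the request starting, nothing ever arrives at the server, and the `.catch` never fires, because the environment is frozen before either can happen.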
As an aside I find it strange that the author spent all this time writing this document but would not provide the actual code that demonstrates the issue. They say they wrote "minimal plain NodeJS functions" to reproduce it. What would be the reason to not show proof of concept code? Instead they only show code written by an AWS engineer, with multiple caveats that their code is different in subtle ways.
The author intends for this to be some big exposé of AWS Support dropping the ball but I think it's the opposite. They entertained him through many phone calls and many emails, and after all that work they still offered him a $4000 account credit. For comparison, it's implied that the Lambda usage they were billed for is less than $700 as that figure also includes their monthly AWS Support cost. In other words they offered him a credit for over 5x the financial cost to him for a misunderstanding that was his fault. On the other hand, he sounds like a nightmare customer. He used AWS's offer of a credit as an admission of fault ("If the platform functioned correctly, then why offer credits?") then got angry when AWS reasonably took back the offer.
I don't know about node but a fun abuse of this is that background tasks can still sometimes run on a busy lambda, since the runtime will unsuspend and resuspend the same process. So you can sometimes abuse this for non-essential background tasks and to keep things like caches in process. You just can't rely on it, since the runtime might instead just cycle out the suspended lambda.
Absolutely, I do this at $dayjob to update feature flags and refresh config. Your code just needs to understand that such execution is not guaranteed to happen, and in-flight requests may get interrupted and should be retried.
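Roughly the shape of it, as a sketch (the flags endpoint is made up; the point is that nothing depends on the refresh actually completing):
```
// Sketch of the warm-container trick: module scope survives between invocations
// of the same sandbox, so the cache sometimes carries over.
let flags = null;        // in-process cache, reused on warm invocations
let refreshing = null;   // avoid stacking refreshes

async function refreshFlags() {
  const res = await fetch(process.env.FLAGS_URL);  // FLAGS_URL is a placeholder
  flags = await res.json();
}

export const handler = async (event) => {
  // Fire the refresh without awaiting it. If the sandbox is frozen mid-request,
  // the next warm invocation simply retries; a cold start just sees defaults.
  if (!refreshing) {
    refreshing = refreshFlags()
      .catch(() => {})                          // tolerate interrupted refreshes
      .finally(() => { refreshing = null; });
  }
  return { statusCode: 200, body: JSON.stringify({ flags: flags ?? {} }) };
};
```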
This response seemed illuminating:
https://www.reddit.com/r/aws/comments/1lxfblk/comment/n2qww9...
> Looking at [this section](https://will-o.co/gf4St6hrhY.png), it seems like you're trying to queue up an asyncronous task and then return a response. But when a Lambda handler returns a response, that's the end of execution. You can't return an HTTP response and then do more work after that in the same execution; it's just not a capability of the platform. This is documented behavior: "Your function runs until the handler returns a response, exits, or times out". After you return the object with the message, execution will immediately stop even if other tasks had been queued up.
They did (in a typical non-fault-admitting way). They didn't escalate to the Lambda engineering team; they said that this is a code issue and that they should move to EC2 or Fargate, which is the polite way of saying "you can't do that on lambda. its your issue. no refunds. try fargate."
OP seems to be fixated on wanting MicroVM logs from AWS to help them correlate their "crash", but there are likely no logs support can share with them. The microVM is suspended in a way you can't really replicate or test locally. "Just don't assume you can do background processing". Also to be clear, AWS used to allow the microVM to run for a bit after the response completed to make sure anything small like that got done.
It's a nondeterministic part of the platform. You usually don't run into it until some library starts misbehaving for one reason or another. To be clear, it does break applications and it's a common support topic. The main support response is to "move to EC2 or Fargate" if it doesn't work for you. Trying to debug and diagnose the lambda code with the customer is out of support scope.
It may just be such a basic tenet of the platform that no one thought to spell it out. You stop getting billed after lambda returns a response, so why would you expect computation to continue? This guy expected a free lunch.
I don't see that on the main AWS Lambda pages. It just says that you pay for what you use. It would make sense that the time billed would be until there is no more code to execute.
Fair enough, I guess this just seems like a bold assumption to make, since an explicit handler function is a cornerstone of lambda, rather than being able to run module-level code and having the end self-detected.
Based on the way the document is written, it seems very likely that several people did realize exactly what the misunderstanding was and try to explain it to them.
I remember interviewing a guy who worked at Amazon - he commented that he started trusting AWS much less after getting to know some of the developers who worked on AWS.
Yep that. We are a fairly high spend customer which gives us direct access to engineering managers there. Some of the non-core products are run by what looks like two guys in a trailer protected by a layer of NDAs and finger crossing. Had some pretty crazy problems even with their core stuff over the years which absolutely knocked my confidence.
At the same time, I've had two separate Apple SREs in the last 5 years tell me that I should never trust their cloud services.
If you can't see it and can't control it yourself, then you accept these things silently.
My distrust goes so far that the only things I actually subcontract out are Fastmail, DNS, and domains, and the mail is backed up hourly.
No AWS employee knows “AWS well”. AWS has a huge surface area. If you work on one of the service teams - i.e. the teams that maintain the different AWS services - you are very unlikely to know the entire AWS surface area; you'll be focused on your team and surrounding services.
It used to be the case that if you were interviewing for an SDE position, you were specifically told not to mention specific AWS services in the system design rounds and to speak of generic technologies instead.
There is a "post-invocation phase" for tidying up. It's not long, and it's enough for things like Datadog's metrics plugin to send off some numbers. Yet it will fail, if you have too many metrics. It's fuzzy.
If I'm reading this correctly, then AWS Support dropped the ball here but this isn't a bug in lambda. This is the documented behavior of the lambda runtime.
The document is long, and the examples seem contrived, so anyone is free to correct me, but as I understand it the lambda didn't crash: after you returned the 201, your lambda instance was put to sleep. You aren't guaranteed that any code will remain running after your lambda "ends". I am not sure why AWS Support was unable to communicate this to OP.
If you are using Lambda with a function URL, you aren't guaranteed that anything keeps running after you return your HTTP response. I believe Lambda has some callbacks/signals you can listen to, to ensure your function properly cleans up before the Lambda is frozen, but if you want the lambda to return as fast as possible it seems you are better off having your service publish to an SQS queue instead.
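Something along these lines, as a sketch (the queue env var name is a placeholder); the send is awaited, so the hand-off is durable before the 201 goes out and a separate consumer can do the slow work:
```
// Sketch only: queue the work and return, instead of doing it "in the background".
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

export const handler = async (event) => {
  // Awaited, so the message is durably queued before we respond.
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.EMAIL_QUEUE_URL,     // placeholder env var
    MessageBody: JSON.stringify({ type: "send-email", payload: event.body }),
  }));
  return { statusCode: 201, body: JSON.stringify({ message: "created" }) };
};
```
The consumer Lambda then gets retries and a dead-letter queue for free, which is roughly what the author actually wanted from their fire-and-forget call.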
This document is bizarre. The author is so confidently verbose about something they are clearly misunderstanding, and have been told as much dozens of times. It’s humbling, in a way, to think of times I’ve felt this strongly about something and to consider the possibility I could have been this wrong.
Edit: I see he's the CTO of an AI company.
It's never a slam-dunk with AI-generated content, but some of the signs I notice are:
- Constant bulleted lists with snappy, non-punctuated items.
- Single word sentences to emphasize a point ("It was disciplined. Technical. Forensic.")
- The phrasing. e.g. the part about submitting to Reddit: "This response was disproportionate to the activity involved: a first-time technical post, written with precision, submitted to the relevant forum, and not yet visible to any other user." Who on earth says "written with precision" about their own writing?
To be clear, I don't think it's a fabricated account. I also don't think it was a one-shot. OP probably iterated on it with GPT for quite some time.
This has GPT written all over it: "That’s not just a runtime bug. That’s a reliability risk baked into the ecosystem."
For whatever reason, it constantly uses rhetorical reclassification like “That’s not just X. That’s Y.” when it's trying to make a point.
In GPT's own words:
```
### Why GPT uses it so often:
- It sounds insightful and persuasive with minimal complexity.
- It gives an impression of depth by moving from the obvious to the interpretive.
- It matches common patterns in blog posts, opinion writing, and analyst reports.
```
I think that last point is probably the most important.
https://lyons-den.com/CV/David_Lyon_CTO_CV_2025.pdf
EDIT: he has added three(!) separate mentions of the same incident to his résumé
> I am not sure why AWS Support was unable to communicate this OP.
Maybe we should be asking why the OP was not able to hear what AWS was telling them. I think there is a fairly troublesome cognitive bias that gets flipped on when people are telling you that you're wrong — especially when all your personal branding and identity seems to be about you being right.
> If I'm reading this correctly, then AWS Support dropped the ball here but this isn't a bug in lambda. This is the documented behavior of the lambda runtime.
AWS Support is generally ineffective unless you're stuck on something very simple at a higher level of the platform (e.g. misunderstanding an SDK API).
Even with their higher tier support - where you can summon a subject matter expert via Chime almost instantly - they're often clueless, and will confidently pass you misleading or incorrect information just to get you off the line. I've successfully used them as a very expensive rubber ducky, but that's about it.
Yep. I've run into this using Bugsnag for reporting on unhandled exceptions in Python-based lambda functions. The exception handler would get called, but because the library is async by default the HTTP request wouldn't make it out before the runtime was torn down.
I sympathize with OP because debugging this was painful, but I'm sorry to say this is sort of just a "you're holding it wrong" situation.
"Failure" != "Bug"
`This wasn’t a request for clarification. It was a reset — a request for artefacts that have already been delivered, multiple times, across prior case updates, call notes, and debug summaries. The very same package AWS previously reviewed. The same logs they had already quoted. The same failure they had already confirmed.`
While the author is probably correct in this being a platform-level bug (I have not run the code to confirm myself though), they should stop being so angry at a corporation doing what corporations do, which is run a big bureaucracy and require extraordinary amounts of extra communication. One of the job requirements of a principal engineer, often making hundreds of thousands if not millions of dollars a year, is to communicate across levels, because it is so hard to do this effectively without being angry or losing your cool.
`As a solo engineer, I outpaced AWS’s own diagnostics, rebuilt our deployment stack from scratch, eliminated every confounding factor, and forced AWS to inadvertently reproduce a failure they still refused to acknowledge.`
I do not think that a solo engineer would have the economic clout to get AWS to pay attention to the bug you found. Finding a bug in a system does not imply any ability to fix a bug in a bureaucracy.
Plus, given the tone of the article, I have a funny feeling that the author may have rubbed people in AWS the wrong way, preventing more progress from being made on their bug.
But, a wise fellow once said, "all progress in this world depends upon the unreasonable man".
What a tremendous waste of resources. As a startup engineer and leader your job is to build value for customers and investors as quickly and effectively as possible, knowing the privilege of their participation with you is fleeting. The time spent in those meetings or writing this vitriolic pdf would have gone a long way towards shipping features, even if you had to survive with a suboptimal approach for a while. Time isn’t cheap and this cost a fortune.
The title should have been: AWS Lambda Works as Expected – An Application Architectural Bug [pdf]
If the designer expects a reliable system that guarantees processing of the request after the status has been returned, they cannot rely on a server not crashing the moment the response is sent to the wire. Serverless enforces this well, yet people try to force their view of infinitely reliable infrastructure, which is not based in reality.
Whilst this is a combination of user error and shitty support (not unexpected), I've seen Amazon directly not understand how their own lambda stuff works before. There is a pretty horrible bug in one of their higher-level products that used lambda, because they didn't know that an execution environment can be reused between invocations. Their software assumed, entirely incorrectly, that it was stateless, resulting in us having to, as the first line in the lambda, clean up the previous invocation's data on the filesystem. After 3 years they haven't fixed this as far as I am aware. Getting the actual engineers who wrote this on a call was unproductive as well; they suggested it would take months to sort out. We built our own in the end.
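In Node terms the workaround amounts to something like this (a sketch; the product in question may not even be Node, but the idea is the same):
```
// Sketch of the workaround: assume /tmp may still contain the previous
// invocation's data, because the execution environment can be reused.
import fs from "node:fs";
import path from "node:path";

function wipeTmp() {
  for (const entry of fs.readdirSync("/tmp")) {
    fs.rmSync(path.join("/tmp", entry), { recursive: true, force: true });
  }
}

export const handler = async (event) => {
  wipeTmp();  // first line of the handler, as described above
  // ... normal processing writes its own fresh scratch files to /tmp ...
  return { statusCode: 200 };
};
```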
This post is a good lesson on humility, and why emotionally-heated blog posts might not be a great idea, no matter how satisfying, and you should possibly stick to dry technicalities instead. If OP had just written up their experience without all the sneering and potshots at Amazon support, people would've just gently corrected them. But as it is they look like a tremendous asshole, and certainly not fit for a CTO role, either temperamentally or skill-wise.
The poor guy has obviously wound himself in knots over this for several months. He went through all this, obviously really bothered by the impersonal Corporation, "solved" his problem by moving to Azure, but then wrote up this massive rant.
Sounds like he needs a holiday at the least.
The mere idea that reddit has a special Amazon protection team that is the cause of his account suspension is ludicrous.
Hopefully this exposure will help him out.
I work at Lambda, and my team would be the one to engage with OP; it most likely would have been me, actually, investigating this.
The fact that support didn’t engage us seems odd as we have gotten engaged for far more silly stuff.
However, in this case you should have awaited the response from the event emitter.
The lambda execution model clearly does not allow you to run things past the response as execution is frozen by the sandbox / virtualization software. From there, you’ve most likely caused a timeout when issuing the request.
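i.e. something like this (placeholder endpoint, not OP's actual code); awaiting means the call completes inside the invocation instead of racing the freeze, and lets you return something other than a blanket 201:
```
// Sketch of the fix: await the downstream request before returning.
export const handler = async (event) => {
  const res = await fetch("https://email-service.example.internal/send", {  // placeholder
    method: "POST",
    body: JSON.stringify({ to: event.email }),
  });
  if (!res.ok) {
    // Surface the downstream failure instead of always reporting success.
    return { statusCode: 502, body: JSON.stringify({ message: "email handler failed" }) };
  }
  return { statusCode: 201, body: JSON.stringify({ message: "created" }) };
};
```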
I didn’t bother to read this incredibly long write up past the introduction. But other commenters are asserting that the author expected their Lambda function to continue executing an async task after returning an HTTP response, despite numerous attempts by AWS staff to explain why that wouldn’t work. If true, that is _hilarious_. Easily one of the best stories of misguided anger/hubris/Dunning-Kruger I’ve ever heard. That’s like expecting a local function to continue executing after a `return` statement. It’s such a simple, basic principle of the platform they were trying to use.
But one thing bothers me…wouldn’t the author have encountered the exact same behavior in Azure? I guess I really will have to read this paper to find out.
However, it is still funny when you think that in regular servers you still finish when you respond to the request.
My dog keeps getting angry at the door. It won’t open automatically.
AWS has a clear policy of not providing support for user application code. I’ve never worked at a company that had enterprise level support with a dedicated TAM. It might be different there.
However if it is worth anything I did work at AWS for 3.5 years (AWS ProServe). I knew almost immediately what the issue was as soon as I read the article and knew it was documented and expected behavior.
While I have no empathy or patience for Amazon’s support model, you’re really asking for it when you write backend code in Node.
I get that some big businesses are using JavaScript on the backend, but I’ll bet almost every single one would be better off switching to a type-safe, compiled language.