Hey, some feedback on those 1/24 holes that fill as you scroll: they aren't matched to the steps of the scroll wheel. One wheel step seems to fill about 1.2 holes, so the animation of your demonstration ends up paused between your points. Checked both Chromium and Firefox. Also checked my mouse events, and a single wheel step does match a single event.
It's not great on a smooth-scrolling input device either. You have to scroll carefully to avoid leaving it halfway through a transition, and there are no next/previous buttons to step through properly. The best you can do is click the little bubbles in order.
I also noticed this. Maybe it wasn't intended to "snap" to animation checkpoints, though. Scrolling with touch scroll or mouse middle-click-and-drag is pretty smooth.
Tried this out today and it’s surprisingly smooth for a browser-based tool. The zero-setup part really helps.
Would be great if the visualizer could also show how values flow across function calls over time, especially for recursive logic. That’s where beginners often get stuck.
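For example, a toy recursive function like this one (just an illustration of what I mean, not something from the tool) is exactly where seeing the intermediate values of each call would help:

```python
def factorial(n):
    # Each call waits on the call below it; beginners often lose track of
    # which value each frame is holding while the recursion unwinds.
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(4))  # 24, built up as 4 * (3 * (2 * 1))
```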
Thank you for your feedback. Your suggestion makes a lot of sense, and improving data visualization has always been an area I’m continuously working on.
I really like the examples for sorting algorithms! This would have been amazing to have back when I first took Data Structures & Algorithms in college.
Thanks for building it! It can really help my 10 yo who is just learning coding. How does it handle potentially infinite loops like the one below? Currently I get a parsing error after it takes a while.
```python
exit = ""
while exit != "yes":
    print("*")
    exit = input("Exit?: ")
```
I'm really sorry, but the input() function isn't supported at the moment. Also, just a heads-up: print() won't have any visible effect right now. Maybe I should think about how to better visualize print output.
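One rough idea for that (just a sketch of the general approach, not something implemented yet): capture stdout while the user's code runs and attach it to the corresponding step.

```python
import contextlib
import io

buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    print("*")  # goes into the buffer instead of the (invisible) console

captured = buffer.getvalue()  # "*\n", ready to display next to that step
print(repr(captured))
```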
In the Configuration box, for the Core Language selection, when I switched to c/c++, the example code didn't automatically update to a C/C++ example. At least not for me in Firefox.
this could be truly helpful if I could include it in my (large) existing codebase to help spot performance bottlenecks. That's not something I can simply paste into a self-contained snippet, though. Do other HN readers know of static analysis tools that would be great for this?
Hey, what do you mean by "performance bottlenecks"? Do you mean finding CPU/memory hotspots in your apps? If so, APM tools like New Relic or runtime scanners like AppMap sound like a better fit than static code analysis.
However, if you want to visualize the codebase structure and reason about how coupling and design choices impact performance, static analysis becomes your friend.
If you're on .NET, you might consider joining our early testing campaign at Noesis.vision (https://noesis.vision). There are also a bunch of other tools—some more AI-based (like GitDiagram, DeepWiki), and others less or not AI-based and more language-specific (often IDE plugins). Let me know if you'd like to chat more.
A lot of our code was written by domain experts who aren't trained in algorithms/data structures. We have New Relic and other performance assessment tools to see where we have long-running queries etc., but looking at realized performance will only show you the biggest problem areas, like those long-running queries, and will miss the "death by 1000 papercuts" of functions that work, but in a way that is unnecessary.

It would be nice if there were a tool that looks more holistically at whether certain functions are designed well, both in terms of the space-time complexity of the algorithms and in terms of the overall design of certain features. For example, a feature that sequentially changes a lot of things in a database might not raise any red flags in a profiler, but could be adding a lot of unnecessary time versus an approach that pulls the data into memory, conducts all the operations, then bulk reinserts. Or one that could be refactored to act in parallel rather than sequentially.
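A minimal sketch of the contrast I mean (generic sqlite3 here, not our actual stack; the table and the 10% price bump are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO items (price) VALUES (?)", [(float(p),) for p in range(1000)])

def reprice_row_by_row(conn):
    # "Death by 1000 papercuts": one statement (and, in a web app, one round trip) per row.
    for item_id, price in conn.execute("SELECT id, price FROM items").fetchall():
        conn.execute("UPDATE items SET price = ? WHERE id = ?", (price * 1.1, item_id))

def reprice_in_bulk(conn):
    # Same result: pull the data into memory, compute, write back in one batch.
    rows = conn.execute("SELECT id, price FROM items").fetchall()
    conn.executemany(
        "UPDATE items SET price = ? WHERE id = ?",
        [(price * 1.1, item_id) for item_id, price in rows],
    )

reprice_in_bulk(conn)
```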
This is what is really hard to figure out, because you need to know 1) what business logic you actually need (and what tradeoffs would be acceptable given the product), 2) algorithm design, 3) how web apps scale things horizontally, 4) which things get performed on the CPU / in memory versus in the database, and more.
Instead of hoping for a tool that can do all of that at once, it would be nice if a tool could at least visualize (2) within an existing project, to help a human who can keep all of those things in their head spot problem areas in code design / system architecture that wouldn't necessarily be revealed by simply looking at logging/APM tools.
Oh, that's much clearer now. Hints at such refactorings are certainly within reach of today's AI tools (if you agree to send your code to the LLMs). Have you tried asking Cursor/Windsurf this question with a prompt similar to what you've just written above?
BTW, it might be an interesting feature for Noesis if it can be done during regular scans. Thanks for the tip ;)
Yes, I've tried Cursor. Currently it gives 1) high-level suggestions if I ask about architecture, which may be valid but don't solve the issue of refactoring a large existing codebase to make architectural changes, or 2) some specific improvements on very simple functions, but it majorly falls short on 3) actually implementing improvements, because it doesn't have the context of the product and what "makes sense" as tradeoffs and choices.

There are a lot of times where, for us, "correctness" is a state of data calculations rather than code validity, where unit tests / integration tests don't exist and aren't trivial to generate. It is counterproductive if we make something run faster but return the wrong results. Or a team member could look at a task/function and reason "actually this feature that does X should be doing Y", but that isn't something the AI can reason about in practice. In those cases, it would be ideal to change the function without relying on tests, because you would actually want the behavior to change. Small example: a feature is not performant, and rather than just making that feature perform better, the better solution would be to switch to a different library that we already added elsewhere in the codebase for accomplishing that work.
Also, while Cursor is now able to scan terminal/server logs to see errors, it doesn't come hooked up "out of the box" to app performance profiling tools -- even just running locally. There are probably some MCP servers or something for that, but I haven't set that up. Really you would want the IDE agent to have a feedback loop like "optimize {speed, resource usage} subject to the constraint of {unit/integration tests}" and let it run asynchronously or overnight, etc. (Of course, there are tons of times that LLMs will work themselves into a dead-end loop, and it would be bad to indefinitely generate LLM API calls on a dead end overnight.)
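Something like this loop is what I have in mind (entirely hypothetical; the pytest command, the benchmark script, and the apply/revert steps are placeholders for whatever the agent would actually do):

```python
import subprocess
import time

def tests_pass() -> bool:
    # Hypothetical correctness gate: assumes a pytest suite exists and pytest is on PATH.
    return subprocess.run(["pytest", "-q"]).returncode == 0

def benchmark_seconds() -> float:
    # Hypothetical benchmark entry point; in practice this would be a real workload or profiler run.
    start = time.perf_counter()
    subprocess.run(["python", "benchmark.py"])
    return time.perf_counter() - start

def propose_and_apply_change() -> None:
    # Placeholder for "ask the LLM for one refactor and apply it".
    pass

def revert_change() -> None:
    # Placeholder for rolling that refactor back.
    pass

baseline = benchmark_seconds()
for _ in range(10):  # hard cap so a dead-end loop can't burn API calls all night
    propose_and_apply_change()
    if tests_pass() and benchmark_seconds() < baseline:
        break  # faster and still green: keep the change
    revert_change()
```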
Perhaps the parent means "identify the commands/procedures that would cause workload "5", and, if many of them exist, rank them accordingly"? So a procedure that 'prints a line to the log' 'costs' 1, but a thousand of them would 'cost' 1000, or something similar?
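Something like this toy static cost counter, maybe (the loop weight of 1000 is a made-up assumption):

```python
import ast

SOURCE = """
def report(items):
    for item in items:
        log_line(item)      # one 'papercut' per iteration
    summarize(items)        # one call per run
"""

LOOP_WEIGHT = 1000  # assumed average iteration count, purely illustrative

def estimate_cost(tree: ast.AST) -> int:
    # Very rough static cost model: every call costs 1, multiplied when it sits inside a loop.
    cost = 0

    def walk(node, multiplier):
        nonlocal cost
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.For, ast.While)):
                walk(child, multiplier * LOOP_WEIGHT)
            else:
                if isinstance(child, ast.Call):
                    cost += multiplier
                walk(child, multiplier)

    walk(tree, 1)
    return cost

print(estimate_cost(ast.parse(SOURCE)))  # 1001: a thousand papercuts plus one ordinary call
```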
Totally off topic, but this is the first time I've seen (or noticed) an ICP license link in a footer. I was curious, so I looked it up (https://en.wikipedia.org/wiki/ICP_license) and it's been in effect since 2000. I guess I'm one of the lucky 10,000 today.
Stick your site behind cloudflare, you'll get geographically distributed caching for free. It's currently very slow as if you are serving it from your basement.
The tool actually has an Annotations Config feature that allows you to customize the visualization, although the configuration options are currently quite limited.
Is there a way to run it locally? Maybe with docker?
—
If there were any way to make a small donation or buy you a coffee, I would.
It’s just a maybe, but what a fun maybe that would be.
:D
One small improvement: show the return values, not just the result, and somehow visualize when a function has not yet returned.
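Something like this toy tracer is the kind of thing I mean (my own sketch, nothing to do with how the tool works internally):

```python
import functools

def trace(func):
    # Toy tracer: prints each call when it starts (still "open") and again when it returns.
    @functools.wraps(func)
    def wrapper(*args):
        print(f"-> {func.__name__}{args} ... not yet returned")
        result = func(*args)
        print(f"<- {func.__name__}{args} returned {result!r}")
        return result
    return wrapper

@trace
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(3)  # the interleaved arrows show which calls are still open vs. their return values
```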
Maybe it's just because I'm used to debuggers, but the vertical arrangement of variables and their values seems weird.
This is a really cool tool.
Maybe LSP integration for greater compatibility with languages would make this even more cool and useful!
Imagine visualizing a whole codebase with a tool like this.
Sometimes it can also be a limitation, but then you were probably looking for a different tool and this one isn't aimed at you ;)