A couple of comments here mention using this in VR. Fwiw, years back I played a bit with shallow-3D UIs for software dev. Shallow like within a few cm of a laptop display, to minimize VAC eye strain for all-day use. Think more being able to layer and draw in color, but in 3D, rather than waving arms in a room.
The 3D can be wiggle 3D, or perspective from webcam head/eye tracking, or stereo from shutter glasses, or XR HMDs. Wiggle is easiest - just move the object orientation back and forth. Cute but distracting. Well, cross/parallel-eye gaze is easier, but limited - ok for little UI test swatches. Perspective is more subtle, less intrusive. Can be simple with a head tracker driving a single orientation, or go all in with eye pose (for distance) and window locations, to do an accurate 3D render. App stereo pairs can be "I give you two windows Left/Right-eye", or "alternating L/R view, labeled/synced/polled". Other possibilities. Many of these need window system/manager/desktop support. I found a lot of leverage in using a stack of electron and X.
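The head-tracked perspective case boils down to a little similar-triangles math: in a shallow-3D UI, each depth layer gets a screen-space offset proportional to its depth behind the display and the viewer's lateral head position. A toy sketch of that mapping (names and units are mine, not from any particular tracker API):

```python
def parallax_offset(layer_depth_cm: float,
                    head_offset_cm: float,
                    viewing_distance_cm: float) -> float:
    """Screen-space shift (in cm) of a layer `layer_depth_cm` behind the
    display plane, for an eye displaced `head_offset_cm` sideways at
    `viewing_distance_cm` from the screen. By similar triangles, layers
    behind the screen appear to track the viewer's head, scaled by
    depth / (distance + depth); the on-screen layer never moves."""
    return head_offset_cm * layer_depth_cm / (viewing_distance_cm + layer_depth_cm)
```

A layer 50 cm behind a screen viewed from 50 cm shifts by half the head offset, which is part of why even a coarse webcam tracker can read as convincing depth for shallow stacks.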
It's fun to displace text in 3D. Like colorization, but more so. And if you don't mind a cluttered appearance, you can add secondary information layers segregated by depth. And... etc. Emacs with characters-have-a-depth finally gets you something LispMs didn't have. Fun aside, to explore possibilities with code text - with anything not inherently 3D - it's far easier to prototype UX with fg/bg colors, fonts, unicode, and animation. Or in a browser, with overlaid divs and transparent 2D/3D canvases.
I have a working fully 3D glyph based text rendering system I can't seem to get people to look at.
It's this. Every character is a 3D-placed quad, rendered with instancing, so you get tens of millions and then some. They are individually addressable and mutable like any polygon. I use it to render entire GitHub repos in one go. I have two versions, native Apple and web. The web one has the basics of an IDE setup. Would love insight or thoughts.
Is there a reliable way to pan around? Middle click + drag doesn't pan if my screen is covered by an object, it just moves the object itself. Scrollwheel pans up and down but I can't figure out how to go left and right. The minimap is too coarse - when I'm zoomed in close enough to read any text, very tiny movements of my mouse on the minimap pan around massively, too massively to be useful.
Two things, one is rote, the other is direct:
Thank you, sincerely, an immeasurably appreciative amount for trying something new, sharing your time and opinion, and being honest with it. This is how we become better tool builders and engineers: different perspectives, different ways of thinking, and honesty with others. Again - thank you.
Ok, now for you:
Interaction is absolutely not ideal. I've tried a number of 'defaults' across the years and across platforms, and this isn't the one I want to settle on. At the moment, the 'shortcuts' and bindings page is also a bit out of date, but you can change scroll / zoom directions with keyboard modifiers, and the keyboard itself (if on a device that has one) can be used to navigate around. Agreed that scaled interaction is the trick to this when reading individual files, and that's why I have tried to allow this to be configurable and dynamic while testing. Some people want big picture motions first to get a mental map, some want to focus on small groups of individual files to read. I'll be taking this into account!
I opened and it said to use a repo tab. There are no tabs so I pushed a button. It had a list with Linux so I clicked that. It redirected to a 404 page on github and I gave up.
I don’t know what this is supposed to do, let alone how to use it. But I looked at it for you!
Two things, one is rote, the other is direct:
Thank you, sincerely, an immeasurably appreciative amount for trying something new, sharing your time and opinion, and being honest with it. This is how we become better tool builders and engineers: different perspectives, different ways of thinking, and honesty with others. Again - thank you.
For you:
The 'tab' layout is pretty atrocious, even as a one-shot run-through of fitting most of the control areas to mobile and desktop screens. It's not easy, and a lot of 'feature' bloat makes it worse. Knowing what your first-time drop-in was like and how you found that link is incredibly useful insight, and I'll be updating the layouts more to accommodate first-timers and instructions, like the original source material I'm rebuilding from on the Apple side.
it opens source files in an unreadable small size, presumably to fit the whole file into the window. i can zoom in, but i can't properly scroll around or select text.
and i don't see the benefit of using 3d here. it doesn't seem useful.
Two things, one is rote, the other is direct:
Thank you, sincerely, an immeasurably appreciative amount for trying something new, sharing your time and opinion, and being honest with it. This is how we become better tool builders and engineers: different perspectives, different ways of thinking, and honesty with others. Again - thank you.
For you:
You're probably right about "not seeming useful", but I do wanna gently nudge you toward what this is a proof of concept about again. Most folks look at this like it's a bigger, flatter emacs/vim/Sublime/VSCode or whatever. I do support editing in my current work branch, as well as command-based selection, but most of the work spins out because the tool is half "adopt what tools are useful to an interactive development environment of today" and half "allow the display of the canvas to overlap with the spatial relationships of directories, files, and colocation to help generate mental mappings of a code space".
These things have often been in conflict, and years (decades) of prior art show this. This is my attempt at it, and since it's my 3rd attempt in twice as many years to make it work in a new environment, you're hitting this particular instance's walls. Would love more feedback or questions!
Two things, one is rote, the other is direct:
Thank you, sincerely, an immeasurably appreciative amount for trying something new, sharing your time and opinion, and being honest with it. This is how we become better tool builders and engineers: different perspectives, different ways of thinking, and honesty with others. Again - thank you.
Second up:
I'd love some input as to what didn't work! Did a shader fail to load? Desktop or mobile? Did you load a repository that was too large and OOM'd out in the browser? Did it cause your monitor to spin 360 degrees and speak tongues? Do tell!
Two things, one is rote, the other is direct:
Thank you, sincerely, an immeasurably appreciative amount for trying something new, sharing your time and opinion, and being honest with it. This is how we become better tool builders and engineers: different perspectives, different ways of thinking, and honesty with others. Again - thank you.
Second:
Thanks for the confirmation! If you've got any thoughts or feedback, please be as direct as you'd like - I've already started cleaning up some of the more 'user friendly' notes I've been neglecting in an effort to stick to the internals.
So you're saying that the Xerox workstation didn't have inline 3d graphics rendering capabilities? And in fact this isn't an instance of UNIX trying to catch up to Xerox workstations' REPL from yester-decade?
Later rebranded as Mirai. I remember playing with a pirated copy of Nichimen Mirai somewhere in 2001 (I think), it looked weirdly Ediacaran in the Cambrian explosion of the late 90s.
that's a poorly chosen counter-complaint. before SGI, symbolics owned the market for 3d graphics. this was a world where you could also just do (create-window), see the window, and get back a handle you could use to draw in it. starting with X10 afterwards for me was like drowning in mud.
I've dug around the TempleOS codebase a bit, and while it certainly is impressive for a single guy's work, I think there's been an overcorrection where people act like Terry was some hyper genius instead of "a pretty smart guy".
I kind of got the impression that whenever Terry didn't know how to do something, he would just convince himself that that's not what God wanted anyway and stop doing it.
Most of the people we think of as geniuses are not smarter than the average smart person, but they persevered more. Terry had the ultimate driver of perseverance: severe mental illness.
Given that Terry described the manic episodes as "a revelation from God", I think theopneustos is an accurate description. It just means "God-breathed" or "inspired by God".
The announcement blog post (https://blog.orhun.dev/introducing-ratty/), which would've been a better submission URL, unsurprisingly says that TempleOS was the direct inspiration of the project.
I like this. No reason the terminal should only support text. Data science notebooks show one way the terminal can evolve. Lots of interesting stuff happening in this space, with Kitty probably being the most aggressive innovator here [1]. I'm not sure there is an overall vision, though.
No evolution necessary! With my project, euporie [1], you can use your data science notebooks with graphical image outputs, HTML, LaTeX, etc, all in the terminal.
You mention using this over ssh. Is there any way to get this working in tmux or anything similar by any chance? Or is the idea that euporie itself is acting like a multiplexer?
This is such an amazing project. I find it so awesome that I can bump on such projects (and their creators, Hi!) on hackernews.
I wish to ask a question if I may (and as such pardon my ignorance on jupyter kernel, I don't know much about it and I hope you can tell me more about it :-D)
but my question is, is there a way to swap the jupyter kernel within euporie to something else more minimalist?
And when you run a project with ssh, there are ways to give access to other users with user:password if I may ask?
I didn't know that there were ways to run jupyter kernels in terminal, I don't know when I might need it but I am prepared with this information now, this feels so nice to me, thanks for making it!!
This is like a checklist of a thing I didn't know that I needed/existed but the second I know that it has existed, it feels like my mind has checked it off and just a satisfaction from knowing projects like these existing.
(I think in some sense this is a bit of same reaction to me on Ratty too), Its just so good seeing projects in these spaces :-D
Edit: just remembered the one time I think I was using some websites which gave me jupyter and then I tried to use browsh to run jupyter to run jupyter in terminal so that it can be controlled by terminal but it had some issues and I wasn't able to run it.
I also wish to ask if there is a way to sign in to jupyter instance like that itself perhaps? (IIRC it was a jupyterhub instance)
> is there a way to swap the jupyter kernel within euporie to something else more minimalist?
You can use euporie-console for a REPL-like terminal experience (still with rich outputs) if you don't want the full notebook experience.
You can also select the `local-python` kernel in euporie to run code using the local Python interpreter which runs euporie, instead of connecting to a Jupyter kernel.
> And when you run a project with ssh, there are ways to give access to other users with user:password if I may ask?
> I also wish to ask if there is a way to sign in to jupyter instance like that itself perhaps?
euporie-hub supports spawning notebook instances for connected users, but I haven't implemented collaborative editing like JupyterLab supports (yet). I believe that jpterm [1] might support this.
Or for that matter, the magic that was a Tektronix storage scope terminal (and compatibles. At school there was a vt10x that had been modified to act like a Tek 4014 by some third party).
I managed to get `pyvista` to render arbitrary 3D shapes directly to the terminal using kitty graphics.
It's a giant hack; the only way to make it performant is using shm.
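For the non-shm path, the escape-code side of this is small enough to sketch. Here is a minimal emitter for the kitty graphics protocol's inline-PNG transfer (a=T transmits and displays, f=100 marks PNG data, m flags chunk continuation; the 4096-byte payload cap follows the kitty spec — this is an illustrative sketch, not pyvista's actual code):

```python
import base64
import sys

def kitty_show_png(png_bytes: bytes, out=sys.stdout) -> None:
    """Display a PNG inline via the kitty graphics protocol.

    The image is base64-encoded and sent as APC escape sequences:
    the first chunk carries the control keys (a=T, f=100), and
    every chunk sets m=1 while more data follows, m=0 on the last.
    """
    payload = base64.standard_b64encode(png_bytes).decode("ascii")
    chunks = [payload[i:i + 4096] for i in range(0, len(payload), 4096)]
    for i, chunk in enumerate(chunks):
        keys = "a=T,f=100," if i == 0 else ""
        more = 1 if i < len(chunks) - 1 else 0
        out.write(f"\033_G{keys}m={more};{chunk}\033\\")
    out.flush()
```

Point an off-screen renderer's screenshot bytes at this and any kitty-compatible terminal will paint the frame; the shm transfer mode just replaces the base64 payload with a shared-memory object name.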
Joking apart, the whole thing was both an exercise in madness and genius. Sometimes I wonder what he would have done if he had not gone crazy. We will never know...
He'd probably be writing poison pill generators for AI, obfuscation tools (in the vein of public key crypto, but using entirely plaintext, in a style similar to Cockney rhyming slang) for social media posting.
He was pretty anarchistic and antiestablishment. I'm sure we'd still see that coming through.
> Sometimes I wonder what he would have done if he had not gone crazy.
At what point do you consider he had "gone crazy" relative to the development of TempleOS? Only when he committed suicide? Shortly before then? Last ____ years of his life?
Without trying to sound insensitive, I'd personally argue the entire OS was the byproduct of a "crazy" individual.
The inspiration may have been all "crazy" but the implementation was still really neat, and it takes a lot of effort and skill to get to the point he did before his death. The thing about people who lose touch with reality is that their efforts to create or express something often make no sense to the rest of us. TempleOS, however, works. Terry created an OS from scratch, an entire new language (or variant of a language) in the form of HolyC, and not only does it all work together in a way that requires no disconnect from reality, it works well for his goals and philosophy.
The entire thing may be the result of a person suffering from schizoaffective disorder, but that person still had a great deal of skill to implement that idea and enough of a grasp of the reality of computer hardware to make it happen.
I wonder if something like this could work for thumbnails in the terminal; I prefer to browse my filesystem from a terminal rather than the point and click file manager typically, and it would be really useful if I could have a grid-style `ls` with terminal based renders of the 3d models (thinking STL/STEP, 3D printing) in that directory. Bonus points if I could preview/rotate the model to inspect it.
as a compromise i started using nemo/n̶a̶u̶t̶i̶l̶u̶s̶ with a plugin that puts a terminal at the bottom of each tab. so i have a graphical view of the files but a commandline in the same folder right next to it. the two don't interact other than being able to drag and drop filenames from the file manager into the terminal, so it is far from what we really want, but it's a small start.
Do you mind sharing a little more about the plugin you use? A quick online search wasn't very helpful to me but I've also been hoping for something like this.
fedora has a package for it. just installing it will make the plugin available so it can be activated within nemo preferences.
one problem is that common terminal shortcuts are captured by the file manager. ctrl-c for example will copy a file from the file manager instead of killing a process in the terminal if you have something selected (there is no shortcut to unselect everything, though you can do ctrl-a then shift-ctrl-i, i.e. select all then invert the selection).
if any shortcuts bother you, these keys can be changed in ~/.gnome2/accels/nemo
i wish the shortcuts would work based on where your focus is.
dolphin also supports builtin terminal, but it shares the same terminal between all tabs which is a bit less convenient. it handles control keys a bit better though.
despite its shortcoming this integration has changed the way i work and got me interested in exploring better solutions.
now when i want to run a command i go to the right tab, the visual presentation of the contents tell me that i am in the right directory, and i can run the command in the right context.
i do a lot of stuff in the terminal, but i prefer a visual orientation. i normally use tmux everywhere, and i have a tmux window open for each directory that i operate in. but ls or terminal file managers are not visual/interactive enough. sorting for example depends on the use case. in a file manager i can have different tabs sorted as i like, in tmux i would have to remember the right ls command and then still don't see everything i need, especially selecting multiple files for opening at once in the terminal is a lot of typing, whereas in the file manager it is a few clicks. a separate terminal and file manager window would make it difficult to keep the two connected. (although a window manager feature that allows me to connect windows would be cool)
Mix this 3d graphics, with data science notebooks, with local LLMs, and perhaps an integrated coding harness, with visibility over your personal data and you’d have something absurdly good.
This might overtake “a haiku+macOS mashup” as my idealised computing future.
Greenspun’s Tenth Rule of Programming states that any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
well, almost. if emacs offers a graphical file manager i'll consider using it. this seems to be a start: https://github.com/emacs-eaf/eaf-file-manager. the file manager needs to also integrate with a terminal though so i can run unix commands in the same directory. and it needs to support mouse-based operations too. finally, and that's the real kicker, i'd like a better integration of the terminal output and the graphical display by supporting the passing of structured data that the display knows how to handle without terminal escape codes. those need to go away. (which is why sixels are not a solution either)
What's overlooked here are the insane political and economic forces that were required to get anywhere close to the (sort of!) consistent implementation of plain text we have today. These projects try to piggyback off that success yet only contribute back harm. We have standards for a reason.
I'm not saying people can't have fun, but don't try to start a cyberpunk-inspired revolution and then blame the side effects of groupthink and software rot on everyone else when it goes sideways.
Exactly this. They are slowly turning the terminal into a web browser, just for attention. We already have web browsers. If you want something at the midpoint, make it, but please don’t call it a terminal & destroy one of the few non-trojan-horse standards that we have left.
- the rendering capabilities of this seem like they should also be able to handle 2D well, or am I mistaken? every solution I see for getting high quality 2D images or rasterization in the terminal is pretty bad. Could this do better than other solutions, or is there a fundamental limit being hit somewhere?
- What happens with ssh given that this is gpu accelerated?
Always has been meme incoming. Also, more seriously, the purpose of a tool is to do a job. The question becomes whether this tool can be made useful. I... honestly don't know, but I will be finding out soon :D
I remember those good old days. I had VirtualBox running Win XP on its dedicated face of the Compiz cube. It felt magical to switch between Windows and Ubuntu, especially with all the Compiz animation goodies.
It's very interesting to learn about the newly proposed glyph protocol [1] in the linked blog post.
I was bemoaning the lack of exactly this here about 6 months ago [2]!
The idea that every application should ship its own glyphs, because some proprietary systems do not have normal fonts, is not great. Fix the terminal instead.
Also, if you want advanced GUI with icons, maybe you should just write a GUI app.
As for shipping custom icons, this is not a very bright idea either. If you switch between several applications on one terminal, one application can redefine glyphs from another application. Also, when an application terminates, nobody cleans up its glyphs. And this increases the attack surface, because font standards are pretty complicated and one would be able to attack the system just by providing a glyph. We already have programs that can break the terminal, which should never happen.
Also, as for icons, I find emoji characters too distracting (and too large). They stand out too much and take away user's attention, breaking any visual hierarchy. The icons in terminal should be monochrome, and with thin lines, so that they do not distract you from the text and its structure.
I'm specifically interested in querying for support of particular glyphs (e.g. the symbols for the legacy computing block), so applications can use a different fallback if it is known that a particular glyph cannot be rendered and would break the interface.
I agree that the addition of sending custom glyphs to the terminal is potentially problematic.
Oh hey, that's a nice idea! Unlike some of the terminal projects I've seen recently, it addresses a problem without entirely reinventing the idea of what terminals can do.
This is kinda possible already today with the Kitty graphics protocol, I made a demo here of rendering 3D graphics[1] with kitty. The actual important missing thing (and which ratty seems to also not include) is vsync.
If rendering is not aligned then it's possible for the terminal emulator to read the framebuffer while the application is writing to it, causing visual artifacts.
This is pure Hollywood OS - hackers feverishly entering obscure incantations like “upload virus”…but now with the terminal twisted into a Moebius strip!
The terminal is keystroke-driven. It's character-selectable. It's reliable in a way that the GUI is not. When I drop frames, I can still enter the commands to rescue myself with some assurance they'll be interpreted, eventually.
I agree, a REPL isn't Unixy in the streams of text kind of way... or is it?
It's a bit more abstract and useful than "character-selectable" when viewed at the byte-level abstraction.
The ability to chain together utilities with no complicated data structures is extremely flexible. One of my favorite current use-cases is using FFmpeg to process RTSP streams that send output (e.g. high quality stream for recording, low quality low FPS for processing, max quality low FPS for stills, etc) to separate file descriptors. FFmpeg doesn't care whats on the other end (e.g. redirect to file, read via Python, etc) due to these lovely abstractions.
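The ffmpeg specifics aside, the mechanism underneath is plain file-descriptor inheritance, which is easy to demonstrate without ffmpeg at all. A small Python sketch of the same pattern (the child writes to an inherited pipe fd without knowing what's on the other end, just as ffmpeg can write a stream to an extra descriptor):

```python
import os
import subprocess
import sys

# The parent creates a pipe and hands the write end to a child process.
# The child just writes to a numbered fd; it neither knows nor cares
# whether the other end is a file, a socket, or this Python reader.
r, w = os.pipe()
child = subprocess.Popen(
    [sys.executable, "-c",
     "import os, sys; os.write(int(sys.argv[1]), b'frame-data')",
     str(w)],
    pass_fds=(w,),  # keep the write end open (and inheritable) in the child
)
os.close(w)              # drop the parent's copy so read() can see EOF
data = os.read(r, 1024)  # consume whatever the child produced
child.wait()
os.close(r)
```

Swap the child's command line for ffmpeg with pipe:N-style outputs and the parent side stays identical — that indifference to the other end is the abstraction being praised.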
Reliability translates directly to scriptability. Yes, you can create monsters, but through the use of sub-shells and pipes I think it's the fastest, cheapest, most concise way to pull off some really cool multiprocessing tricks.
> The question is - why do we still need the terminal abstraction at all?
Because nobody is willing to put in the work to create a GUI toolkit that doesn't suck ass.
It's not that people want the "terminal abstraction". What people want is "Put <thing> on screen without me needing a PhD in graphics programming." That's why the dominant desktop interface paradigms have become TUIs and a Browser-In-A-Trenchcoat.
Lack of imagination doesn't mean this isn't innovation.
It's the ability to convey more information in less space.
Top-of-my-head notion: The cursor spins (or changes in another way) to reflect CPU use, or bandwidth use, instead of taking up space elsewhere on the screen.
The same was said about Compiz, but it turned out to be a passing gimmick that looked flashy but didn’t really add anything. Sure you could always make up reasons why it’s useful, I remember the same about Compiz, but… is it really? I could be proven wrong, of course, but it hasn’t been demonstrated yet.
It’s a solution in search of a problem. OP should have presented it with a real use case or benefit, not just flashy graphics, if it’s meant to be anything other than a fun oddity (which, to be fair, is perfectly fine).
I built DeepSteve (https://github.com/deepsteve/deepsteve) with a similar itch but went the other way. Instead of adding graphics to the terminal, I put the terminal in a place that already has graphics.
I kept trying to optimize my terminal layout and realized I could just run my terminals inside of the browser, and let Claude Code write JavaScript in the same browser tab to customize the experience however I want. It's kind of a terrible idea, but it's my terrible idea, and I love it.
I haven't seen any performance issues for Claude Code, even when I'm running like 20 in one browser tab and looking at them all at the same time (rendered with xterm.js), but Gemini and OpenCode flicker a lot even if you have one open.
I have been thinking about this for a while. It's not as crazy as it may sound, especially in light of the other comments making a parallel between terminals and notebooks.
A few thoughts:
1. Linux VTs kind of have this feature already: there is the normal buffer, the alternate buffer (that something like htop would draw on), and an ioctl can change them to/from graphics mode.
2. It makes sense for interactivity. Kitty's graphics protocol is quite useful for static shapes, can be abused for animations, but doesn't really cut it for interactivity (say, pan a graph around). Wayland is designed for this.
3. Wayland would be a good fit: isolate each command from another, let them request buffers, but keep control of where to display them, do not update them when off screen, etc.
4. One downside is that terminals excel for one-shot tasks. What's the purpose of the display when you are done with it? Should you kill the process driving it? Due to this, it may make more sense to delegate more features to the terminal emulator (displaying the 3D model, etc). Or maybe just allow the app to temporarily take over the window.
5. Once you have it up and running, have it talk directly to the direct rendering manager. Your "kmscon" is now your compositor / desktop environment. That's a fun thought! Add some basic terminal features like tabs and tiling, and you've inverted the usual setup.
6. One downside is accessibility. I really like that I can copy-paste any part of the interface for reference, "screenshots", etc. It's good for screen readers, too. You lose these advantages by going to Wayland.
7. Another current terminal limitation is fonts. Powerline, yazi & others make use of custom fonts for drawing parts of the interface, logos, etc. AFAIK there is no good way to query their availability (which is also an issue for color emoji). Custom fonts or a new protocol could be useful, but client apps could draw it themselves if given a surface (they can already do that with the kitty graphics protocol, mind you).
Obviously I am not seriously considering making such a terminal emulator, but it would be an interesting experiment (heck, maybe something I should try this "vibe coding" with, since I wouldn't want to spend too much time on it).
I have wobbly windows on whenever I use KDE. I like how it gives the movements more momentum, though I have it turned down by a lot so it isn't distracting
I’m not sure why I’d use it, but I enjoyed the visual and loved the brutalist design of the website -- it brought back some fond memories of the good old days.
there was a project that rendered firefox to the terminal through box drawing characters. When libweb is more complete I kinda want to do something similar
I was going to comment how it reminded me of TempleOS and the author should look into that, but the accompanying blog post explains how it was inspired by it https://blog.orhun.dev/introducing-ratty/
So just to be clear: a human being who can't think straight anymore, who has trouble forming a coherent worldview, who regularly had manic phases where he would drive a hundred miles, dismantle his car, and throw away his keys - and you don't accept any of this as a reasonable explanation for why that particular human couldn't break out of, or even manifested, stereotypical thoughts?
The mental baseline you are born into - a community of Christians, parents forming your mind, and so on - is something you have to break out of to formulate your own independent worldview. A lot of people can't do that today. All religious people, in fact.
Plenty of women can't break out of abusive relationships; families protect someone inside the family even if they are rapists, because "family is family, what would others think of us" - and that's where you draw the line for that human being?
> Plenty of women can't break out of abusive relationships; families protect someone inside the family even if they are rapists, because "family is family, what would others think of us" - and that's where you draw the line for that human being?
I think you're being fair overall, but I would also say that OP in this thread reply is highlighting something worthwhile. If Terry were a misogynist, I don't think this thread would have taken as long to recall his abnormal behavior. But that's just, like, my opinion.
Really fun project! Dude, I spent the last week implementing the Kitty graphics and clipboard protocols in ghostty-web's Canvas renderer.
Then I added WebGL and WebGPU renderers [1], including support for Kitty.
Then I see this project on a Monday morning... so now I have to implement the Ratty Graphics Protocol?!?! [2].
ETA: I looked into this; Ghostty would need to be patched to support Ratty, since Ghostty-Web now defers APC handling there. It would also require pulling in a 3D engine like three.js, or otherwise implementing file parsing, lighting, etc. Finally, since local filenames are part of the protocol, a browser would need some file resolver helper, either to get the data over the APC channel or via a URL.
Glyph rendering in three.js - fully instanced, with individually addressable and positionable instances. Handles tens of millions. The sample app loads full GitHub repositories in the web in a few seconds.
That's pretty handy, thanks for the links. IDE is slick!
Given the structure, I think one could make a three.js backend for ghostty-web. Makes sense if one will pull in more of three.js anyway. I'm adding it to my backlog to explore.
Second: I would love to offer any assistance during your perusal. Happy to share ideas, what I tried, point out parts of the code that are rough and tumble, whatever helps. I'm in a place where any outside feedback and prodding is precious, so thanks very much for taking a look and keeping it in mind!
> When I first got introduced to [TempleOS], I was shocked and impressed by the flashy colors, graphical sprites and uncomprehensible UI. There are so many things that makes it so unique, weird and fascinating at the same time, somehow.... Basically, the command line becomes the direct interface for everything. You can write code, interact with the system and render graphics all in the same place, which is why TempleOS feels so unusual compared to conventional operating systems.
I think this could be a really cool approach. I enjoy tools like Chafa, imgcat, etc but something always feels a little clunky about the separation between text and images. Paradoxically having text and non-text all jumbled up like this feels better somehow.
Seriously, though, when are we going to see the convergence of terminals and GUI remoting protocols? People have already departed far from Unix pipeline utilities. "TUI" programs are already GUIs in disguise. Why keep pretending that the terminal (as used by TUI programs) is a different kind of thing?
Has anyone tried to create 3D fonts? It sounds like a ton of work but might look cool if done correctly.
You could also do really cool text highlights by working with light sources and shader effects
Another feature I'm looking for is smooth scrolling when you hit enter. I've had debates before where people claim it's not possible, that the text must jump one line. But I think it's possible, by shifting the frame buffer up.
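For what it's worth, the framebuffer-shift idea is easy to sketch: treat the screen as a stack of pixel rows and emit intermediate positions between one text row and the next. A toy illustration (rows as plain lists; not tied to any real terminal's internals):

```python
def smooth_scroll_frames(framebuffer, line_height_px, steps):
    """Yield intermediate framebuffers that slide the image up by one
    text line (`line_height_px` pixels) over `steps` frames, instead of
    jumping a whole row at once. `framebuffer` is a list of pixel rows;
    vacated rows at the bottom are filled with blank rows."""
    blank = [0] * len(framebuffer[0])
    for i in range(1, steps + 1):
        shift = round(line_height_px * i / steps)
        yield framebuffer[shift:] + [blank] * shift
```

A renderer would present each yielded frame on a vsync tick, so a one-line scroll becomes `steps` small slides rather than a single jump — exactly the "shift the frame buffer up" argument.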
last night I was pondering if there was a ghostty plugin that can make my terminal like the opening scroll from a Star Wars movie. Can we make that happen?
This is a great idea. I always wanted KDE Konsole to e.g. show images inline as-is. This is possible via "magick six:-", but I wanted it to be native. I want the terminal to be able to work with any data and display it in any way. No need to simulate the 1980s era anymore (except for backwards/legacy support). So, great idea here, really.
- Compile tut, it just requires Go, it will run on any modern OS.
- Login with tut
- Create a 'tview' shell (sh) script:
#!/bin/sh
chafa -f sixel --fit-width "$@" | less -r
reset
- Configure tut, set program=tview in the [media.image] section.
- Then launch XTerm as 'xterm -ti 340'. Edit ~/.Xresources so you have nice fonts:
xterm*background: black
xterm*foreground: white
xterm*loginShell: true
xterm*faceName: Monospace
xterm*faceSize: 10
xterm*geometry: 100x32
xterm*metaSendsEscape: true
xterm*decTerminalID: vt340
xterm*numColorRegisters: 256
xterm*sixelScrolling: 1
xterm*sixelScrollsRight: 1
Done. Edit the faceSize value to a bigger font if you have a big resolution. Run "xrdb ~/.Xresources" to apply the changes.
Also, you can run chafa locally with images, e.g. "chafa -f sixel --fit-width foo.img"; no need to log into a VPS, of course. It was just a proof of concept that you could see images over SSH. This can be really useful, for instance, to read graphs/plots from Gnuplot or similar tools.
If anything, subscribe to T3X's newsletter and get some books, as these small tools will pay off a lot in the near future. No AI crap, and small enough to run on modest hardware, covering statistics to semi-advanced math (even Zenlisp, crap as it is, can do complex numbers, and you can adapt the code, for instance, to S9 so that that Scheme interpreter understands complex numbers at much greater speed).
Yeah, Python+SageMath, CUDA with number crunching and the like. How much are GPUs, CPUs and SSDs going for nowadays in dollars?
> inserted 3D objects in the demo above are actually from the TempleOS codebase itself
Brilliant. The dream lives on! This is the best form of paying respects.
It's walking a fine line between madness and genius, and who knows if it'll ever be practical, but more important is the sense of wonder and "fuck yeah" as King Terry expressed so eloquently.
Excited to see others equally inspired by TempleOS’ 3D feature :)
I tried something similar a few months ago that acts more as a library to ratatui than a separate terminal emulator [0].
Was surprised how far one can get using some off the shelf characters like half-block when rasterizing.
The Glyph protocol mentioned in the blog post is interesting … perhaps custom glyphs could help smooth some of the (literal) rough edges from the low effective resolution of a terminal's character grid.
https://ivanlugo.dev/ide
Ok, now for you: Interaction is absolutely not ideal. I've tried a number of 'defaults' across the years and across platforms, and this isn't the one I want to settle on. At the moment, the 'shortcuts' and bindings page is also a bit out of date, but you can change scroll / zoom directions with keyboard modifiers, and the keyboard itself (if on a device that has one) can be used to navigate around. Agreed that scaled interaction is the trick to this when reading individual files, and that's why I have tried to allow this to be configurable and dynamic while testing. Some people want big picture motions first to get a mental map, some want to focus on small groups of individual files to read. I'll be taking this into account!
I don’t know what this is supposed to do, let alone how to use it. But I looked at it for you!
For you: The 'tab' layout is pretty atrocious, even as a one-shot run-through of fitting most of the control areas to mobile and desktop screens. It's not easy, and a lot of 'feature' bloat makes it worse. Knowing what your first-time drop-in was like and how you found that link is incredibly useful insight, and I'll be updating the layouts to better accommodate first-timers, with instructions like the original source material I'm rebuilding from on the Apple side.
firefox 150 here.
For you: You're probably right about "not seeming useful", but I do wanna gently nudge you toward what this is a proof of concept about again. Most folks look at this like it's a bigger, flatter emacs/vim/Sublime/VSCode or whatever. I do support editing in my current work branch, as well as command-based selection, but most of the work spins out because the tool is half "adopt what tools are useful to an interactive development environment of today" and half "allow the display of the canvas to overlap with the spatial relationships of directories, files, and colocation to help generate mental mappings of a code space".
These things have often been in conflict, and years (decades) of prior art show this. This is my attempt at it, and since it's my 3rd attempt in twice as many years to make it work in a new environment, you're hitting this particular instance's walls. Would love more feedback or questions!
Second up: I'd love some input as to what didn't work! Did a shader fail to load? Desktop or mobile? Did you load a repository that was too large and OOM'd out in the browser? Did it cause your monitor to spin 360 degrees and speak tongues? Do tell!
Second: Thanks for the confirmation! If you've got any thoughts or feedback, please be as direct as you'd like - I've already started cleaning up some of the more 'user friendly' notes I've been neglecting in an effort to stick to the internals.
Inline graphics from 1981,
https://youtu.be/o4-YnLpLgtk?t=376
For those who haven't watched it: https://www.youtube.com/watch?v=yJDv-zdhzMY
Here is another video, this time with S-PACKAGE, used to develop for the Nintendo 64.
https://www.youtube.com/watch?v=gV5obrYaogU
Which, given the REPL capabilities, you can easily embed in it, just like in the other video.
"Here's this new thing that can Ⓧ!" "Pfft, Y could do X years ago."
Well, Ⓧ ≠ X. Come on now, we're programmers here.
I kind of got the impression that whenever Terry didn't know how to do something, he would just convince himself that that's not what God wanted anyway and stop doing it.
also smalltalk
we used oberon in one class in university. i don't remember much unfortunately.
more like theopneustos
https://www.youtube.com/watch?v=4K8IEzXnMYk
[1]: https://sw.kovidgoyal.net/kitty/protocol-extensions/
[1] https://github.com/joouha/euporie
You mention using this over ssh. Is there any way to get this working in tmux or anything similar by any chance? Or is the idea that euporie itself is acting like a multiplexer?
I wish to ask a question if I may (and as such, pardon my ignorance on the Jupyter kernel; I don't know much about it and I hope you can tell me more about it :-D).
My question is: is there a way to swap the Jupyter kernel within euporie to something else more minimalist?
And when you run a project over SSH, are there ways to give access to other users with user:password, if I may ask?
I didn't know that there were ways to run jupyter kernels in terminal, I don't know when I might need it but I am prepared with this information now, this feels so nice to me, thanks for making it!!
This is like a checklist of things I didn't know I needed or that existed, but the second I know they exist, it feels like my mind has checked them off; there's just a satisfaction from knowing projects like these exist.
(I think in some sense this is a bit of the same reaction I had to Ratty too.) It's just so good seeing projects in these spaces :-D
Edit: just remembered the one time I was using some website which gave me Jupyter, and I tried to use browsh to run Jupyter in the terminal so that it could be controlled from the terminal, but it had some issues and I wasn't able to run it.
I also wish to ask if there is a way to sign in to jupyter instance like that itself perhaps? (IIRC it was a jupyterhub instance)
You can use euporie-console for a REPL-like terminal experience (still with rich outputs) if you don't want the full notebook experience.
You can also select the `local-python` kernel in euporie to run code using the local Python interpreter which runs euporie, instead of connecting to a Jupyter kernel.
> And when you run a project with ssh, there are ways to give access to other users with user:password if I may ask?
> I also wish to ask if there is a way to sign in to jupyter instance like that itself perhaps?
euporie-hub supports spawning notebook instances for connected users, but I haven't implemented collaborative editing like JupyterLab supports (yet). I believe that jpterm [1] might support this.
[1] https://github.com/davidbrochart/jpterm
Terminals on other operating systems that grew up with a framebuffer don't have this limitation.
https://git.theresno.cloud/panki/kglobe
https://www.youtube.com/watch?v=o48KzPa42_o
Joking apart, the whole thing was both an exercise in madness and genius. Sometimes I wonder what he would have done if he had not gone crazy. We will never know...
At what point do you consider he had "gone crazy" relative to the development of TempleOS? Only when he committed suicide? Shortly before then? Last ____ years of his life?
Without trying to sound insensitive, I'd personally argue the entire OS was the byproduct of a "crazy" individual.
The entire thing may be the result of a person suffering from schizoaffective disorder, but that person still held a great deal of skill to implement that idea and enough of a touch with the reality of computer hardware to make it happen.
fedora has a package for it. just installing it will make the plugin available so it can be activated within nemo preferences.
one problem is that common terminal shortcuts are captured by the file manager. ctrl-c, for example, will copy a file from the file manager instead of killing a process in the terminal if you have something selected. (there is no shortcut to unselect everything, though you can do ctrl-a then shift-ctrl-i, i.e. select all / invert selection.)
if any shortcuts bother you, these keys can be changed in ~/.gnome2/accels/nemo
i wish the shortcuts would work based on where your focus is.
as for nautilus it appears that it no longer supports the APIs needed for the terminal: https://github.com/flozz/nautilus-terminal
dolphin also supports builtin terminal, but it shares the same terminal between all tabs which is a bit less convenient. it handles control keys a bit better though.
despite its shortcomings this integration has changed the way i work and got me interested in exploring better solutions.
now when i want to run a command i go to the right tab, the visual presentation of the contents tell me that i am in the right directory, and i can run the command in the right context.
i do a lot of stuff in the terminal, but i prefer a visual orientation. i normally use tmux everywhere, and i have a tmux window open for each directory that i operate in. but ls or terminal file managers are not visual/interactive enough. sorting for example depends on the use case. in a file manager i can have different tabs sorted as i like, in tmux i would have to remember the right ls command and then still don't see everything i need, especially selecting multiple files for opening at once in the terminal is a lot of typing, whereas in the file manager it is a few clicks. a separate terminal and file manager window would make it difficult to keep the two connected. (although a window manager feature that allows me to connect windows would be cool)
[1]: github.com/eza-community/eza
[0]: https://github.com/sxyazi/yazi
This might overtake “a haiku+macOS mashup” as my idealised computing future.
https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
That, or eshell and emacs-ipython-notebook
What's overlooked here are the insane political and economic forces that were required to get anywhere close to the (sort of!) consistent implementation of plain text we have today. These projects try to piggyback off that success yet only contribute back harm. We have standards for a reason.
I'm not saying people can't have fun, but don't try to start a cyberpunk-inspired revolution and then blame the side effects of groupthink and software rot on everyone else when it goes sideways.
Questions:
- The rendering capabilities of this seem like they should also handle 2D well, or am I mistaken? Every solution I see for getting high-quality 2D images or rasterization in the terminal is pretty bad. Could this do better than other solutions, or is there a fundamental limit being hit somewhere?
- What happens with ssh given that this is gpu accelerated?
Is that what you're looking for?
https://github.com/fathyb/carbonyl
https://hyper.is/
Super slow, but I guess that's what web devs want.
https://github.com/vadimdemedes/ink
Which is what Claude Code CLI uses (or was using?) and it caused many issues such as flickering, thrashing, and latency.
Rendered as instanced quads in 3d space. Tens of millions at >60fps, and an entire class of "grid based text rendering" evaporates.
https://github.com/tikimcfee/glyph3d-js
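A rough sketch of what instanced glyph rendering looks like on the CPU side (all names and the atlas layout here are hypothetical, not taken from the linked project): one shared quad plus a per-character instance record, so the GPU draws every glyph in a single instanced call while each one stays individually addressable.

```python
from dataclasses import dataclass

@dataclass
class GlyphInstance:
    # world-space placement of the shared quad
    x: float
    y: float
    z: float
    # top-left corner of this glyph in the font atlas texture
    u: float
    v: float

ATLAS_COLS = 16  # assumed 16-glyphs-per-row atlas layout

def layout(text, origin=(0.0, 0.0, 0.0), advance=0.6):
    """Build the instance buffer for one line of text; each character
    becomes one addressable, individually mutable instance."""
    ox, oy, oz = origin
    out = []
    for i, ch in enumerate(text):
        code = ord(ch)
        out.append(GlyphInstance(
            x=ox + i * advance, y=oy, z=oz,
            u=(code % ATLAS_COLS) / ATLAS_COLS,
            v=(code // ATLAS_COLS) / ATLAS_COLS,
        ))
    return out

instances = layout("hi")
# A renderer would upload `instances` once and issue one instanced draw
# of the shared unit quad with instance count len(instances).
```

Mutating a single record (its position, depth, or atlas slot) and re-uploading is what makes every character "addressable like any polygon".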
edit: But your spirit lives on ( based on the project:D )
So anyway, being that guy, I immediately installed it.
[1] https://rapha.land/introducing-glyph-protocol-for-terminals/
[2] https://news.ycombinator.com/item?id=45805072
Also, if you want an advanced GUI with icons, maybe you should just write a GUI app.
As for shipping custom icons, this is not a very bright idea either. If you switch between several applications in one terminal, one application can redefine glyphs from another. Also, when an application terminates, nobody cleans up its glyphs. And it increases the attack surface, because font formats are pretty complicated and one could attack the system just by providing a glyph. We already have programs that can break the terminal, which should never happen.
Also, as for icons, I find emoji characters too distracting (and too large). They stand out too much and take away the user's attention, breaking any visual hierarchy. Icons in a terminal should be monochrome, with thin lines, so that they do not distract you from the text and its structure.
I agree that the addition of sending custom glyphs to the terminal is potentially problematic.
https://github.com/tikimcfee/glyph3d-js
That had me in stitches.
If rendering is not aligned then it's possible for the terminal emulator to read the framebuffer while the application is writing to it, causing visual artifacts.
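The usual fix is double buffering with an atomic swap, so the reader can never observe a half-written frame; a toy illustration (not how any particular terminal implements it):

```python
import threading

class DoubleBuffer:
    """The writer mutates the back buffer; a locked swap publishes it,
    so the reader never sees a partially written frame."""
    def __init__(self, size):
        self._front = bytearray(size)
        self._back = bytearray(size)
        self._lock = threading.Lock()

    def write_frame(self, data: bytes):
        self._back[:] = data          # tearing is confined to the back buffer
        with self._lock:
            self._front, self._back = self._back, self._front

    def read_frame(self) -> bytes:
        with self._lock:              # always returns a complete frame
            return bytes(self._front)
```

The same idea applies whether the swap is a mutex-guarded pointer exchange, a vsync flip, or a shared-memory generation counter.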
[1] https://x.com/zack_overflow/status/2035921425341763756?s=20
Still giving me goosebumps
I agree, a REPL isn't Unixy in the streams of text kind of way... or is it?
It's a bit more abstract and useful than "character-selectable" when viewed at the byte-level abstraction.
The ability to chain together utilities with no complicated data structures is extremely flexible. One of my favorite current use-cases is using FFmpeg to process RTSP streams that send output (e.g. high quality stream for recording, low quality low FPS for processing, max quality low FPS for stills, etc) to separate file descriptors. FFmpeg doesn't care whats on the other end (e.g. redirect to file, read via Python, etc) due to these lovely abstractions.
Reliability translates directly to scriptability. Yes, you can create monsters, but through the use of sub-shells and pipes I think it's the fastest, cheapest, most concise way to pull off some really cool multiprocessing tricks.
Because nobody is willing to put in the work to create a GUI toolkit that doesn't suck ass.
It's not that people want the "terminal abstraction". What people want is "Put <thing> on screen without me needing a PhD in graphics programming." That's why the dominant desktop interface paradigms have become TUIs and a Browser-In-A-Trenchcoat.
Compiz 3d effects were ultimately a useless gimmick and I predict this is too.
It's the ability to convey more information in less space.
Top-of-my-head notion: The cursor spins (or changes in another way) to reflect CPU use, or bandwidth use, instead of taking up space elsewhere on the screen.
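That notion boils down to a tiny mapping from load to animation speed; a throwaway sketch (the spinner glyphs and the scaling are made up):

```python
# Spin the cursor faster as CPU load rises, instead of devoting
# screen space to a load meter.
SPINNER = "|/-\\"

def cursor_frame(tick, load, max_load=4.0):
    """Pick the cursor glyph for frame `tick`, stepping through
    SPINNER at a rate proportional to `load` (clamped at max_load)."""
    speed = max(1, round(len(SPINNER) * min(load, max_load) / max_load))
    return SPINNER[(tick * speed) % len(SPINNER)]

frame = cursor_frame(tick=3, load=2.0)  # glyph to draw this frame
```

A real implementation would feed `load` from something like /proc/loadavg and redraw the cursor cell each tick.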
It’s a solution in search of a problem. OP should have presented it with a real use case or benefit, not just flashy graphics, if it’s meant to be anything other than a fun oddity (which, to be fair, is perfectly fine).
So were iPods.
I kept trying to optimize my terminal layout and realized I could just run my terminals inside of the browser, and let Claude Code write JavaScript in the same browser tab to customize the experience however I want. It's kind of a terrible idea, but it's my terrible idea, and I love it.
And have you run into any other issues, maybe like performance?
I feel like web-ified terminals get nerfed pretty hard and I'm not sure if/how people overcome that.
I like the idea of customizing multiplexed terminals with on-the-fly JavaScript, tho.
[0] https://news.ycombinator.com/item?id=45341683
A few thoughts:
1. Linux VTs kind of have this feature already: there is the normal buffer, the alternate buffer (that something like htop would draw on), and an ioctl can switch them to/from graphics mode.
2. It makes sense for interactivity. Kitty's graphics protocol is quite useful for static shapes, can be abused for animations, but doesn't really cut it for interactivity (say, pan a graph around). Wayland is designed for this.
3. Wayland would be a good fit: isolate each command from another, let them request buffers, but keep control of where to display them, do not update them when off screen, etc.
4. One downside is that terminals excel for one-shot tasks. What's the purpose of the display when you are done with it? Should you kill the process driving it? Due to this, it may make more sense to delegate more features to the terminal emulator (displaying the 3D model, etc). Or maybe just allow the app to temporarily take over the window.
5. Once you have it up and running, have it talk directly to the direct rendering manager. Your "kmscon" is now your compositor / desktop environment. That's a fun thought! Add some basic terminal features like tabs and tiling, and you've inverted the usual setup.
6. One downside is accessibility. I really like that I can copy-paste any part of the interface for reference, "screenshots", etc. It's good for screen readers, too. You lose these advantages by going to Wayland.
7. Another current terminal limitation is fonts. Powerline, yazi & others make use of custom fonts for drawing parts of the interface, logos, etc. AFAIK there is no good way to query their availability (which is also an issue for color emoji). Custom fonts or a new protocol could be useful, but client apps could draw these themselves if given a surface (they can already do that with the kitty graphics protocol, mind you).
Obviously I am not seriously considering making such a terminal emulator, but it would be an interesting experiment (heck, maybe something I should try this "vibe coding" with, since I wouldn't want to spend too much time on it).
- Atari ST GEM OS
- AmigaDOS
- Smalltalk
- Interlisp-D
- Genera
- Oberon
- MS-DOS (mode 13h and VESA)
- TempleOS
- Mac OS classic
It is just another graphical application window on the OS.
https://youtu.be/dFUlAQZB9Ng?si=3fE-vE8xF5rSVhRR
Any technical reason for such a strong opinion?
Why are you so invested in TempleOS?
The mental base mode you are born into is a community of Christians, parents forming your mind, etc., and you have to break out of this and formulate your own independent worldview. A lot of people can't do that today. All religious people, in fact.
Plenty of women can't break out of abusive relationships; families protect someone inside the family even if they are rapists, due to "family is family; what would others think of us", etc. And that's where you draw the line for that human being?
I think you're being fair overall, but I would also say that OP in this thread reply is highlighting something worthwhile. If Terry were a misogynist, I don't think this thread would have taken as long to recall his abnormal behavior. But that's just, like, my opinion.
I, Beldar, approve.
Then spend their tokens on abominations like this
Make it make sense
It's not hypocrisy when different people do different things.
Then I added WebGL and WebGPU renderers [1], including support for Kitty.
Then I see this project on a Monday morning... so now I have to implement Ratty Graphics Protocol?!?! [2].
ETA: I looked into this; Ghostty would need to be patched to support Ratty, since Ghostty-Web now defers APC handling there. It would also require pulling in a 3D engine like three.js, or otherwise implementing file parsing, lighting, etc. Finally, since local filenames are part of the protocol, a browser would need some file resolver helper, either to get the data over the APC channel or via a URL.
[1] https://github.com/NimbleMarkets/ghostty-web/tree/nm-webgpu
[2] https://github.com/orhun/ratty/blob/main/protocols/graphics....
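For anyone unfamiliar, the APC channel mentioned above is just an escape-sequence frame. The ESC _ ... ESC \ framing below is standard; the example payload grammar is invented:

```python
# Frame a payload as an Application Program Command (APC) sequence:
# ESC _ <payload> ESC \  (ESC \ is the String Terminator, ST).
APC_START = "\x1b_"
ST = "\x1b\\"

def apc(payload: str) -> str:
    """Wrap `payload` in APC framing; terminals that don't understand
    the payload are supposed to ignore the whole sequence."""
    return f"{APC_START}{payload}{ST}"

seq = apc("G;model=teapot.obj")  # hypothetical graphics command payload
```

Kitty's graphics protocol and similar extensions ride this same channel, which is why unsupporting terminals can safely skip over it.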
Glyph rendering in three.js, fully instanced and addressable and positionable instances. Handles tens of millions. Sample app loads up full GitHub repositories in the web in a few seconds.
https://github.com/tikimcfee/glyph3d-js https://ivanlugo.dev/ide
This is a game engine.
https://github.com/tikimcfee/glyph3d-js
[0] https://en.wikipedia.org/wiki/File_System_Visualizer
If you render your text like a polygon, you get full 3d animations for free.
That's how I read images under a remote pubnix with tut using a Mastodon account over plain SSH.
Chafa and XTerm. It works.
You may soon be able to implement overlapping graphics windows in a TUI within a GUI.
This is stupid af.
My second reaction: "Oh wait is that TempleOS being cited? This is either awesome or terrible."
[0] https://github.com/limlabs/ratatui-3d
2. 3D rat: +100 points.
3. Outdated 80s UI paradigm: +100 points.
4. Uses Rust: +100 points.