I do like the way your UI strips out some of the ambiguity in how an LLM interprets your initial prompt, and allows for editing, parameterisation, and IFTTT-style outputs. Especially since exactly how LLMs parse complex instructions is hard to test. There's definitely a sweet spot between "type an instruction and hope for magical understanding" and "write the program yourself".
(though as it still involves LLMs I'd probably want to use it for stuff that sounds less mission critical than some of your examples!)
Completely agree -- LLMs are super powerful, but code is predictable. We found the best loops are the ones that are almost entirely code (built with the LLM's help, of course!) and only use LLM blocks for very specific, repeatable tasks, if at all.
We see a future where the code can be "self-healing" as well, with user approval of course.
+1, I love how the blocks provide transparency. I have been mulling over the UX for a somewhat similar app that I have been working on, and I may borrow some of how this works.