- live-coding is great, but somehow hard to apply to "production systems" - why?
- anecdotally, when working on "production systems" I rarely play with constants live by changing single numbers or colors - maybe only when polishing some UI, and even then it often feels like working at the wrong abstraction level
- most of the time, I try different code paths, which often means toggling multiple lines on/off across multiple files; "wiggling" the structure, not a single value
- this sometimes feels close to trying to have multiple `git` branches visible and editable at the same time
- if I live-code a generative system and see only the final result, I still have to play computer in my head (Simulator) - the difference is that I get live feedback on my actions, which is great, but for more complex things I want more visibility into the sub-parts of the system
- which raises the question: how do we get that visibility? some ideas:
  - explicitly, through some form of tracing/probing - as explored, for example, in the Live Coding Livebook research and many other places
  - by designing the programming system so that "chunking" is part of what you do in that system, and chunking, besides being a means of composition, also gives you visibility into the system, for example:
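a minimal sketch of what both ideas could look like, in Python - all names here (`probe`, `pipeline`) are hypothetical, invented for illustration, and not taken from the Live Coding Livebook work or any existing system:

```python
from collections import defaultdict

# idea 1: explicit probing - probe() records a named intermediate
# value and returns it unchanged, so it can wrap any sub-expression
# of a running system without changing its behavior
probes = defaultdict(list)  # name -> history of observed values

def probe(name, value):
    probes[name].append(value)
    return value

# idea 2: "chunking" as a form of composition that also grants
# visibility - composing named chunks automatically exposes every
# chunk's intermediate output, keyed by its name
def pipeline(*chunks):
    def run(x):
        trace = {}
        for name, fn in chunks:
            x = fn(x)
            trace[name] = x
        return x, trace
    return run

run = pipeline(
    ("scale", lambda x: probe("scaled", x * 2)),
    ("shift", lambda x: x + 1),
)
result, trace = run(10)
# result == 21
# trace == {"scale": 20, "shift": 21}
# probes["scaled"] == [20]
```

the point of the sketch: with probing, visibility is something you opt into at specific expressions; with chunking, the composition structure itself is the source of visibility, so every named part is observable for free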