Dacein ↗ is an experimental creative coding IDE combining a few different ideas that I've been thinking about.
It was created during January and February 2019 as part of a bimonthly projects challenge that I'm running this year.
Creating graphics in Dacein is done using a single sketch function which accepts a few parameters:
- size - size of the canvas
- initialState - initial state of the sketch
- update - update function which produces new state
- draw - drawing function which consumes current state

state mutations are only allowed within the update function, and even though the state itself is immutable, it can be modified with a familiar imperative JS API thanks to the immer ↗ library.
In addition to providing a simple mental model - all changes are done in the update function - this technique also allows for an easy implementation of time travel.
This fixes one of the issues I have with languages like Processing ↗, where drawing and updating are often intertwined, and in complex programs it's usually hard to follow where the values come from, and where they are changed.
Drawing happens through a small declarative API, borrowing ideas from React ↗ - instead of imperatively accessing the canvas API, the user returns an object representing the current state of the drawing.
This is actually much simpler to implement for canvas vs DOM, because we can just keep redrawing the screen - there's no reconciliation ↗ step that needs to happen.
The object returned from the draw function is an array of arrays, an idea borrowed from hiccup ↗.
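Because there's no virtual DOM to diff against, a renderer for this format can be as simple as clearing the canvas and replaying the commands every frame. Here's a rough sketch of that idea, assuming a COMMANDS lookup table of drawing functions (similar to the one that appears later in the inspector code) - an illustration, not Dacein's actual renderer:

const render = (ctx, drawCalls) => {
  // no reconciliation: just clear and repaint everything
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

  for (const [command, args] of drawCalls) {
    if (COMMANDS[command]) {
      COMMANDS[command](ctx, args);
    }
  }
};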
So, combining all of these ideas together, a simple sketch of a pulsating circle looks like this:
sketch({
  size: [600, 600],
  initialState: {
    x: 0,
  },
  update: (state) => {
    state.x++;
    return state;
  },
  draw: (state) => {
    const r = (Math.sin(state.x) + 1) * 100;
    return [
      ["background", { fill: "#ffffff" }],
      ["ellipse", { pos: [300, 300], size: [r, r] }],
    ];
  },
});
Compare this to a similar sketch implemented in p5 ↗, which, even though it's shorter, intertwines (global) state manipulation with producing output:
let x = 0;

function setup() {
  createCanvas(600, 600);
}

function draw() {
  background(255);
  x++;
  const r = (Math.sin(x) + 1) * 100;
  ellipse(300, 300, r, r);
}
To deal with user input, the update function accepts a second argument, which is an array of events that happened between the last and the current frame.
This array contains objects with a source key (borrowed from DOM event sources: mousedown, mousemove, keydown, etc.), and additional event metadata (mouse position, pressed key, etc.).
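For illustration, an update function reacting to mouse clicks could look something like this - the exact keys on the event objects (pos here) are my assumption, not necessarily Dacein's event shape:

update: (state, events) => {
  for (const event of events) {
    // assuming mousedown events carry the cursor position under a pos key
    if (event.source === "mousedown") {
      state.pos = event.pos;
    }
  }

  return state;
},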
This allows us to create predictable, declarative sketches, where the output drawing is a function of state, and events that happened in the system.
It's easy to imagine that reasoning about and testing such sketches is much simpler than with their imperative counterparts.
This part of the Dacein project could potentially live on its own, without the rest of the IDE. One of the ideas that I didn't get around to was allowing users to export standalone sketches that don't require Dacein to run.
If you're interested in forking Dacein and ripping this part out, let me know.
You should also check out hdom-canvas ↗ for a much more developed take on a similar idea, created by Karsten Schmidt ↗.
Because the draw and update parts of sketch work on immutable state, we can record the changes made to the state before each draw call, store them, and then allow users to pause, replay, and scrub through time:
The current state value is also visualised, so it's easier to debug and understand what's happening.
The actual implementation of that feature was pretty straightforward.
On each frame, we produce a new currentState using the immer library:

currentState = immer(currentState, (draft) => sketch.update(draft, events));

We keep history arrays of state and events (mouse, keyboard, etc.), as well as the current index inside of those arrays, which we use for scrubbing:

setHistory((draft) => {
  draft.stateHistory.push(currentState);
  draft.eventsHistory.push(events);

  // we make sure we only keep MAX_HISTORY_LEN elements in the arrays
  while (draft.stateHistory.length > MAX_HISTORY_LEN + 1) {
    draft.stateHistory.shift();
    draft.eventsHistory.shift();
  }

  draft.idx = Math.min(MAX_HISTORY_LEN, draft.idx + 1);
});

When scrubbing through time, we simply pick the stored state at the given index and pass it to the draw function:

currentState = stateHistory[index];
The actual implementation is a bit more complex because of how the sketch container is set up; you can check the relevant parts here ↗ and here ↗.
Livecoding can mean different things in different contexts. For Dacein, by livecoding I mean the ability to immediately see code changes on the screen, as well as custom pickers to scrub through numbers and colors without typing.
The hot reload of code again piggybacks on immutability.
When the user changes the code, we can evaluate it again and swap the current draw and update functions to see the changes on the screen immediately (well, by the next frame).
This approach becomes problematic when the user makes a change to initialState.
I tried to automatically patch the new initialState on top of the current state, but it was finicky and kept breaking the history scrubbing, so in the end I decided that any change to the shape of the state object automatically resets the sketch (starting again from the initial values).
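In rough outline, that swap could look something like this - onCodeChange, runningSketch, and the surrounding variables are illustrative stand-ins, not Dacein's actual internals:

const onCodeChange = (newSketch) => {
  // runningSketch, currentState and the history arrays are assumed to live
  // in the surrounding sketch container
  if (JSON.stringify(newSketch.initialState) !== JSON.stringify(runningSketch.initialState)) {
    // the shape of the state changed - start over from the new initial values
    currentState = newSketch.initialState;
    stateHistory = [];
    eventsHistory = [];
  }

  // in both cases swap update and draw; the next frame picks them up
  runningSketch.update = newSketch.update;
  runningSketch.draw = newSketch.draw;
};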
Here's how it looks when modifying code:
And here's how the user can play with constants using just the mouse:
I'm doing a bit of (not always correct) detection of what kind of thing is under the cursor, and if I detect a number or a string that looks like a color, I display a custom picker on top of the editor. That picker returns a new value, which is replaced in the code ↗, which in turn is re-evaluated, leading to a new value on the screen. This could of course be optimised with some additional engineering time, but I hope it shows how easy this can be to implement in a text editor.
The last interesting thing implemented in the livecoding space is a custom highlight from the result to the code that produced it:
Implementing this was a bit harder.
I'm traversing the sketch AST, looking for everything that looks like a drawing command - that is, an array with the first item being a string naming one of the draw commands.
So ["circle", {}]
is a draw command, but [10, 20]
is not.
When I find an array like that, I augment the second argument with a __meta field which holds lineStart and lineEnd values grabbed from the AST.
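So, assuming a draw command written on lines 12-14 of the sketch, after the transform it would carry something along these lines (the exact shape of __meta is my guess based on the description above):

["ellipse", { pos: [300, 300], size: [100, 100], __meta: { lineStart: 12, lineEnd: 14 } }]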
The relevant code is here ↗.
Knowing the origin of a given drawing command is only half of the battle.
I needed a way to know what the user is hovering over, and since the library uses canvas, this wasn't as easy as adding custom onMouseOver handlers.
What is possible, though, is encoding each item's identity in a color, picking the color currently under the mouse, and working back to get to the __meta field.
We can encode and decode a limited (but huge) number of values in colors using these two functions:
const encodeInColor = (num) => {
  const hex = num.toString(16).substr(0, 6);
  return `#${leftPad(hex, 6, "0")}`;
};

const decodeFromColor = (hex) => {
  return parseInt(`0x${hex}`);
};
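For example, index 3 round-trips like this:

encodeInColor(3);          // "#000003"
decodeFromColor("000003"); // 3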
We then create another, offscreen canvas, and draw on it using this encoded color instead of the one selected by the user:
const drawInspector = (state) => {
  let i = 0;

  const operations = sketch.draw(state);

  for (const operation of operations) {
    const [command, args] = operation;

    // replace fill and stroke with index encoded in color
    const inspectorArgs = Object.assign(args, {
      fill: args.fill ? encodeInColor(i) : undefined,
      stroke: args.stroke ? encodeInColor(i) : undefined,
    });

    if (COMMANDS[command]) {
      COMMANDS[command](ctx, inspectorArgs);
    }

    i++;
  }
};
We can then pick the color from where the mouse is:
const onHover = (x, y) => {
  const data = ctx.getImageData(x, y, 1, 1).data.slice(0, 3);

  const hex = Array.from(data)
    .map((n) => leftPad(n.toString(16), 2, "0"))
    .join("");

  return decodeFromColor(hex);
};
And then, once we have the index back, we can grab the relevant item from what the sketch draw function returned, and pick out its __meta value to highlight the code.
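Put together, that lookup is roughly the following (highlightCode is a hypothetical helper standing in for whatever the editor integration does):

// x and y are the coordinates the user is hovering over
const index = onHover(x, y);
const [, args] = sketch.draw(currentState)[index];
highlightCode(args.__meta.lineStart, args.__meta.lineEnd);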
It's a bit of a long-winded process, but worth it, as it removes the need to keep track of which piece of the graphic comes from where, something that in my opinion is an unnecessary mental burden while writing code.
Direct manipulation is how most non-developer software works. If you want to move something in Photoshop, you just drag it directly on the canvas; if you want to change the volume in Ableton, you just drag a slider in the mixer. Same when changing fonts in Word, making slides in PowerPoint, etc.
Unfortunately, with programming we are stuck with indirect manipulation, the infamous cycle of change code, reload, test, change code...
With Dacein I tried to bring a bit of direct manipulation to working with sketches; mainly, it shouldn't be required to go back to the code just to move something on the screen.
When the sketch is paused, anything that has a pos attribute can be selected and dragged around (with some caveats).
Dacein tries to come up with new code constants within the draw function that fit where the dragged object should be.
This is achieved using unconstrained optimisation (thanks to uncmin from numeric.js ↗).
The optimiser is run on the draw function and tries to minimise the distance between the mouse and the currently dragged object.
For uncmin to work, we need a few things: a function to minimise, and a set of starting parameters for it to adjust.
In this case, the function we're optimising is the draw part of the sketch, and the things we're adjusting are the numeric constants within it.
Usually, premature optimisation is not a good idea, but here I knew that replacing the constants in the text and re-evaluating the modified code to get the results would slow down the process a lot.
I wrote another AST transform that walks through all the numbers in draw and pulls them out into an array, transforming:
const draw = (state) => {
  const a = 20;
  return [["rect", { pos: [a, 10] }]];
};

Into:

const draw = (state, constants) => {
  const a = constants[0];
  return [["rect", { pos: [a, constants[1]] }]];
};
Basically, it parametrises the code constants. This is invisible to the user, but allows me to run uncmin more efficiently.
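For the example above, the extracted starting values would simply be the constants in source order, and that array is what later gets handed to uncmin as currentConstants:

const currentConstants = [20, 10];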
The relevant AST transform code is here ↗.
The actual magic itself is a fairly straightforward, short piece of code:
const minimised = uncmin((newConstants) => {
  // "draws" the sketch with current state, and constants provided by uncmin
  const drawCalls = sketch.draw(state, newConstants);

  // simple wrapper to get the position of a draw call with given index
  const position = getPosition(drawCalls, id);

  return dist(position, mousePosition);
}, currentConstants);

// minimised.solution is a new set of constants,
// where the distance between position and mousePosition is closest to zero
With some optimisations this works surprisingly well, and makes me wonder why most of our software is not made this way.
If you're interested in working with constraint optimisation, a good place to start would be cassowary ↗, and if you want to not only come up with parameters but actually generate new code, miniKanren ↗ is worth taking a look at.
Finally, some smaller technical notes from making Dacein.
eval was obviously a huge part of the project, and working with it safely was important so that the IDE doesn't crash.
There are three try {} catch() blocks happening before the running code is actually replaced:
- parsing is wrapped in try, as the code might be malformed and the parser might fail
- the eval call is also placed in a try, to make sure nothing goes wrong there
- when the sketch() command is executed, it first tries to run draw(update(initialState)) in hopes of finding some runtime errors

The relevant code is here ↗.
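In rough outline, the three guards look something like this - parse(), run() and showError() are stand-ins for whatever Dacein actually uses:

const evaluateSafely = (code) => {
  // parse(), run() and showError() are illustrative stand-ins

  let ast;
  try {
    ast = parse(code); // the code might be malformed and the parser might fail
  } catch (e) {
    return showError(e);
  }

  let definition;
  try {
    definition = run(ast); // eval itself might throw
  } catch (e) {
    return showError(e);
  }

  try {
    // dry-run one frame to surface runtime errors before swapping anything
    definition.draw(definition.update(definition.initialState));
  } catch (e) {
    return showError(e);
  }

  return definition;
};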
To make the system a bit more open, I implemented a way to use require, thanks to d3-require ↗ and bundle.run ↗.
I wanted to allow users to write familiar const _ = require("lodash") calls, but d3-require returns a Promise instead of resolving synchronously.
I wrote yet another small AST transform that turns this code:
const _ = require("lodash");
const vec2 = require("gl-vec2");

sketch({
  // ...
});

Into:

require("lodash").then((_) => {
  require("gl-vec2").then((vec2) => {
    sketch({
      // ...
    });
  });
});
And that was all that was needed to make this work; everything happens on the client side, and there's no need for a special server or a bundling step.
You can test it out with the particle system example, which uses gl-vec2 for some calculations.
Least importantly, the whole app was written using the still-fresh React hooks, excluding the actual sketch container, where combining internal state, function calls to the parent, and orchestrating requestAnimationFrame was easier for me to do using React.Component.
You can browse this part of the code here ↗; it uses a lot of discouraged techniques (like componentWillReceiveProps or shouldComponentUpdate() { return false }), but sometimes workarounds are required to get the proper user experience.
Dacein is yet another project that is nowhere near being production ready, but it hopefully shows some interesting ideas, and also that it's possible to play with these things in relatively short timeframes. It took about 35 hours to build, working after hours and on weekends.
Dacein is also the first thing made in my 2019 meta-project of bimonthly projects. Unsurprisingly, it feels much more relaxed than my monthly projects in 2017, and also more focused than last year, when I was just trying to do something every day.
I had a lot of fun putting a few different ideas together, and seeing them work well.
There are some leftover lower-hanging fruits, like feature-parity of the sketch library with what the 2d canvas exposes, and squashing some bugs.
For now, I'm moving on to the next project, leaving this one open-sourced: szymonkaliski/dacein ↗, and available online: szymonkaliski.github.io/dacein ↗.