current computing happens, at best, around files and, at worst, around apps
a csv file, for example, can be opened or imported in a couple of different apps

ideally, users could bring their own tools for a given job - for example, editing a vector image inside a text document would open an inline "vector editing tool", instead of the current workflow of editing in another app and then exporting
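A minimal sketch of what "bring your own tool" could look like, assuming a host document that holds embedded objects and asks a registry for a user-installed editor to mount inline. All names here (EmbeddedObject, Tool, ToolRegistry, enterObject) are illustrative assumptions, not a real API.

```typescript
// Hypothetical sketch: the document owns the object's data; editing
// happens in place, with no export/import round-trip through another app.

interface EmbeddedObject {
  kind: string;   // e.g. "vector-image", "table", "audio-clip"
  data: unknown;  // the object's own content, owned by the document
}

interface Tool {
  canEdit(kind: string): boolean;
  // Mounts an inline editor in a frame around the object and
  // resolves with the edited data when the user is done.
  editInline(frame: HTMLElement, data: unknown): Promise<unknown>;
}

class ToolRegistry {
  private tools: Tool[] = [];

  register(tool: Tool): void {
    this.tools.push(tool);
  }

  // The user's preferred tool for this kind of object, if one is installed.
  toolFor(kind: string): Tool | undefined {
    return this.tools.find((t) => t.canEdit(kind));
  }
}

// "Going into" an object asks the registry for a tool and edits in place.
async function enterObject(
  registry: ToolRegistry,
  frame: HTMLElement,
  object: EmbeddedObject
): Promise<void> {
  const tool = registry.toolFor(object.kind);
  if (!tool) return; // fall back to a read-only view of the object
  object.data = await tool.editInline(frame, object.data);
}
```

The point of the registry is that the document never hard-codes which editor handles a given kind of object; the user supplies the tool, the document only supplies the frame.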
The idea in the UI is that it will sprout in a frame around the object when you go into it
— Alan Kay, How did drawing work on the Alto? ↗
Look carefully at those clips, and notice that even when users are interacting with an overt "computer," rarely are they focusing on "applications" (i.e., "programs"). Watch those clips again, and notice which "things" the users are seeing and manipulating. Those things most often are objects in the users' domains of work or play, rather than the tools (computer applications) that the users use to deal with those domain objects. This is most apparent in direct manipulation gestural interfaces. The users are grabbing, rotating, and flinging people, diagrams, buildings, and spacecraft.
— Object-Oriented GUIs are the Future ↗
Similarly, users simply flip views of the same manifestation of an object, from graphical to alphanumeric to pictorial to audio, rather than searching again for that same object in each of several different applications, each devoted to only one such type of view. The heroine is looking at a live satellite image of the building, then flips that exact same portion of the screen to show a schematic view of the building, as if putting on x-ray glasses. She does not fire up the separate "Schematic Viewer" application and, once there, hunt down that building. She does not even drag that building's image from the "Satellite Viewer" application to the "Schematic Viewer" application.
— Object-Oriented GUIs are the Future ↗
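The same idea in miniature: several views rendered from one and the same object instance, flipped in place rather than hunted down again inside separate viewer applications. This is only a sketch under assumed names (Building, ViewKind, render), not code from the quoted article.

```typescript
// Hypothetical sketch: one object, many views; flipping the view
// never changes which object is on screen.

interface Building {
  id: string;
  satelliteImageUrl: string;
  schematicSvg: string;
}

type ViewKind = "satellite" | "schematic";

// Each view is just a different rendering of the same object.
function render(building: Building, view: ViewKind): string {
  switch (view) {
    case "satellite":
      return `<img src="${building.satelliteImageUrl}" alt="satellite view of ${building.id}">`;
    case "schematic":
      return building.schematicSvg;
  }
}

// Usage: the same `building` flips between views in place.
const building: Building = {
  id: "hq",
  satelliteImageUrl: "https://example.com/hq-satellite.png",
  schematicSvg: "<svg><!-- floor plan --></svg>",
};

let current: ViewKind = "satellite";
console.log(render(building, current));
current = "schematic"; // "putting on x-ray glasses"
console.log(render(building, current));
```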