Peripheral Vision in Software

My physical space is full of subtle cues. The books I've read or bought most recently are lying out. Papers sit in stacks on my desk, roughly arranged by their relationships.

If I need to fix a door, I'll be reminded each time I walk past it. Digital task lists, by contrast, live in a dedicated app. I have no natural reason to look at that app regularly, so I have to establish a new habit of explicitly reviewing my task list.

  • similar to how I leave physical things I need to take with me near my shoes - Self-Cybernetics

If I mark up a physical book then later flip through to see my margin notes, I'll always see them in the context of the surrounding text. By contrast, digital annotation listings usually display only the text I highlighted, removed from context.

If I read an old digital note, I get the unnerving sense that it's part of some "whole" that I can't see at all, no matter how much hypertext is involved. Working with physical notes, I'd shuffle notes around to make sense of the structure. There isn't a digital equivalent.

  • there's no "peripheral vision" in computing:
    • everything is hidden in folders/files (no "prompts" to act)
    • every file and folder looks the same
    • the context is hidden away
  • this is related to ideas of Embodied Cognition, Epistemic Actions, spatial thinking, and the Dynamic Medium