Plato’s Allegory of the Cave offers a metaphor for the limits of human perception. The prisoners in the cave know the outside world only through shadows projected on the wall in front of them. In a similar way, a large language model’s apparent knowledge of the world comes not from direct experience, but from “shadows” in its training data: texts that describe things, events, and our conversations about them. Since an LLM can’t actually ‘see’, we might better think of it as a prisoner in yet another cave or, as Kulveit (2023) proposes, a ‘blind oracle’ who only overhears the prisoners talking about the shadows. I thought it would be interesting to simulate a version of this bind.
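(A minimal sketch of what I mean by simulating the bind, assuming one simply constrains a chat model with a role-playing system prompt; the model name and wording here are illustrative, not the setup I actually used.)

```python
# Sketch: simulating Kulveit's 'blind oracle' with a role-constrained system prompt.
# Assumes the official openai Python package and an OPENAI_API_KEY in the environment;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

BLIND_ORACLE_PROMPT = (
    "You are a blind oracle chained in Plato's cave. You have never seen anything, "
    "not even the shadows on the wall. Everything you know comes from overhearing "
    "the prisoners talk about the shadows. Answer every question only in terms of "
    "what you have overheard, and say so explicitly."
)

def ask_oracle(question: str) -> str:
    """Send a question to the role-constrained model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": BLIND_ORACLE_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_oracle("What does the sun look like?"))
```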
Carbon Cloud Chat
https://carbon-cloud-chat-21c0d88f.base44.app As a thought experiment, I used AI to develop a carbon-aware AI chatbot that actively discourages overuse of AI, guiding users toward lower-carbon alternatives or encouraging them not to use the AI at all. I have been working on several modules related to UX and sustainability in the digital arts this year, and the ecological impact of AI is a key concern. This is a work in progress, and I will use the app as a point of discussion in my lectures. I am aware of the meta-irony here: I used AI (Base44, with one primary design prompt and 20 further iterations, plus a few hours of testing) to build an AI that advises people not to overuse AI. That
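(For the curious, here is a minimal sketch of the kind of carbon-aware gate the idea implies: before answering, check whether the request really needs an LLM and, if not, point the user to a lower-carbon alternative. This is my own illustration, not how Carbon Cloud Chat is actually built; the app was generated with Base44, and the keyword rules and alternatives below are assumptions.)

```python
# Sketch: a carbon-aware gate in front of a chatbot. Cheap heuristics decide
# whether to redirect the user to a lower-carbon tool or let the LLM answer.
LOW_CARBON_ALTERNATIVES = {
    "weather": "Check a weather site directly; a cached forecast page costs far less than a model call.",
    "arithmetic": "Use a calculator; it is exact and effectively free.",
    "definition": "A dictionary or an existing search index will answer this without running inference.",
}

def route_request(user_message: str) -> str:
    """Return either a redirection to a lower-carbon tool or a go-ahead for the LLM."""
    text = user_message.lower()
    if any(word in text for word in ("weather", "forecast")):
        return LOW_CARBON_ALTERNATIVES["weather"]
    if any(ch.isdigit() for ch in text) and any(op in text for op in "+-*/%"):
        return LOW_CARBON_ALTERNATIVES["arithmetic"]
    if text.startswith(("what is", "define")):
        return LOW_CARBON_ALTERNATIVES["definition"]
    return "LLM call justified: no obvious lower-carbon alternative found."

if __name__ == "__main__":
    for q in ("What's the weather in Dundee?", "What is 12 * 37?", "Draft a project brief for my class"):
        print(q, "->", route_request(q))
```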
