
The Probable Beauty of LLMs
There are many interesting differences between a traditional computer program and a large language model. One of the most powerful is the non-deterministic nature of a model's output; that is, how a single prompt can produce a different response each time it is run.
To see why this is important, it is helpful to understand where models come from in the first place. LLMs like ChatGPT, Claude, and Gemini are pre-trained on terabytes of text sourced from the internet. Pre-training may take weeks or months to complete, and the result is a base model.
Base models do not behave like assistants. Imagine autocomplete on steroids - to a base model, a "chat" looks like one long document, and the model simply continues that document. Base models typically produce widely varied responses to the same prompt. This makes sense - the training dataset is so large that for most prompts, many tokens have a high probability of coming next. With each output token, the range of plausible following tokens shrinks, slowly pushing the model towards a deterministic output.
Usually, the base model is then fine-tuned on a specific task, typically learning to assume the role of a 'helpful, harmless, honest assistant'. This is known as instruction tuning. Instruction tuning inherently introduces a level of determinism to the model's output - now, tokens like "Sure, I'd be happy to help you with..." have a much higher probability of following a given prompt than most other tokens in the training dataset.
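The interplay between a concentrated next-token distribution and sampling temperature can be illustrated with a toy loop. Everything below - the candidate tokens, their probabilities, and the temperature values - is invented for illustration and does not come from any real model:

```python
import random

# Toy next-token distribution after instruction tuning: "Sure" has been
# pushed far above the alternatives. Purely illustrative numbers.
dist = {"Sure": 0.55, "I": 0.20, "The": 0.15, "As": 0.10}

def sample(dist, temperature=1.0):
    # Re-weight probabilities by temperature: low values sharpen the
    # distribution (more deterministic), high values flatten it.
    weights = {t: p ** (1.0 / temperature) for t, p in dist.items()}
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] for t in tokens])[0]

# Near-zero temperature: the highest-probability token dominates, so
# repeated runs collapse onto the same continuation.
low_t = {sample(dist, temperature=0.05) for _ in range(100)}

# Temperature 1.0: the full distribution is sampled, so repeated runs
# stay varied - the behavior base models exhibit for most prompts.
high_t = {sample(dist, temperature=1.0) for _ in range(100)}
```

The sharper the distribution a fine-tuned model assigns, the less temperature matters - which is why assistant models feel repeatable in a way base models do not.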
Intrinsic Labs is invested in facilitating widespread, deep understanding of AI behavior. Latent Spaces is our first big step in that direction.
Project Overview
Latent Spaces is the first mobile application specifically designed around the concept of a language model loom. The app can generate N continuations from any point in an exchange, and lets users traverse and curate every generated branch at will. OpenRouter and Anthropic are currently implemented as model providers. The iOS beta is in the works, with an Android beta next in line on the priority list.
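The branching structure a loom manages can be sketched as a simple tree: each node holds one continuation, and every root-to-leaf path is a distinct reading of the exchange. The class and method names below are hypothetical, not the app's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class LoomNode:
    # Hypothetical loom tree node: one span of generated (or typed) text
    # plus the alternative continuations branching off it.
    text: str
    children: list["LoomNode"] = field(default_factory=list)

    def branch(self, continuations):
        # Attach N alternative continuations as sibling branches.
        for c in continuations:
            self.children.append(LoomNode(c))
        return self.children

    def paths(self):
        # Enumerate every root-to-leaf path - each is one complete
        # version of the exchange that a user can traverse and curate.
        if not self.children:
            return [[self.text]]
        return [[self.text] + p for child in self.children for p in child.paths()]

root = LoomNode("Once upon a time")
root.branch(["a model dreamed,", "a user typed,"])
root.children[0].branch(["and the loom unfurled.", "and branches grew."])
```

Branching twice from the first child yields three full paths through the tree - curation then amounts to choosing which paths to keep exploring.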
This project proposal aims to get the iOS app ready for a public beta release.
Alongside the mobile app, Intrinsic Labs is developing a protocol called OpenLoom that other loom interfaces may adopt to import/export trees in a standardized lossless format. Latent Spaces supports tree sharing via the OpenLoom format out of the box.
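As a rough intuition for what standardized, lossless tree sharing means, here is a toy export/import round trip. The real OpenLoom V1.0 schema is not yet published, so the field names and content-hash ids below are assumptions, not the spec:

```python
import hashlib
import json

def node(text, children=()):
    # Illustrative node format: text, child list, and a content-derived
    # id. A stable, content-based id lets two interfaces agree on node
    # identity across export and import. Field names are assumptions.
    payload = {"text": text, "children": [dict(c) for c in children]}
    payload["id"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()[:16]
    return payload

tree = node("root prompt", [node("branch A"), node("branch B")])

# Lossless round trip: serializing and re-parsing reproduces the exact
# structure, including the derived ids.
restored = json.loads(json.dumps(tree))
```

The point of a lossless format is exactly this property: any conforming loom interface can import a shared tree and reconstruct it bit-for-bit, branches and all.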
This project proposal also aims to get OpenLoom V1.0 ready for publication.
Beta Fundraiser
Monthly Support
Become a regular supporter for continuous development.
Feature Map
- Address SwiftData-related performance issues
- Upgrade node caching system
- Add support for saving reusable system prompts
- Add pinned/bookmarked trees
- Add support for editing tree nodes
- Add full markdown display support
- Add image upload support (for applicable models)
- Add document upload support (for applicable models)
- Parse reasoning tokens for relevant models (e.g. DeepSeek R1)
- Add support for user-defined models that comply with the OpenAI API schema
- Add on-device audio transcription for hands-free beta voice mode
- Implement functional MVP of LoomDisplay (text-to-ASCII animation system)
- Upgrade OpenLoom protocol from graph structure to hypergraph structure
- Upgrade node signing requirements to ensure accurate author attribution
- Add support for non-text node types (e.g. images, documents)
- Latent Spaces project overview
- Loom interface introduction for new users
- High level roadmap
- Beta program information and signup
- Social links
FAQ
Join Our Community
Be part of the conversation, get early access to beta releases, and help shape the future of Latent Spaces.