The Ravensfield Collection
The Ravensfield Collection is a generative museum of the weird and wonderful, powered by an AI storytelling engine that produces unique narratives and artifacts.
What It Is
The Ravensfield Collection is an AI museum that builds itself. Every day, a fully automated pipeline generates original weird fiction and artwork, complete with backstory, metadata, and expert commentary. Every article is part of a cohesive narrative universe.
It’s a live experiment in what happens when you give AI the freedom to create within a tight prompting strategy and well-organized system rules.
Why I Built This
I’m fascinated by what happens when AI meets creativity, and I wanted to see how far I could push generative models within a single, cohesive project. Also, I never get tired of weird, surreal stories. Building a machine that generates them endlessly? That’s just my kind of fun.
My Goals
The goal was never just to generate content, but to orchestrate an intentional generative narrative universe where every piece of AI output was both unique and part of a larger story.
I also wanted to challenge my understanding of AI by pushing my prompt engineering skills to the limit. That meant building complex, layered prompting strategies, strict data validation to counter hallucinations, and a relentless focus on originality, coherence, and consistency.
Technical Decisions
Claude: Not all LLMs are equal when it comes to creative writing. Claude consistently produces richer prose, more nuanced character voices, and better narrative structure than the alternatives I tested. For a project where the writing is the product, that difference is everything.
Leonardo.AI: Leonardo gives you granular control over dimensions, style presets, and generation parameters. The artwork needs to feel visually consistent across wildly different subjects, and that level of specificity matters.
JSON structured responses with Claude: Asking Claude to return a structured JSON object with defined fields means every piece of the pipeline gets exactly what it needs, in a predictable shape.
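A minimal sketch of what "a predictable shape" buys you, using hypothetical field names rather than the project's actual schema: the raw model text is parsed once and checked before anything downstream touches it.

```typescript
// Hypothetical shape of one structured Claude response in this pipeline.
// Field names are illustrative, not the project's real schema.
interface ArtifactResponse {
  title: string;
  artworkDescription: string;
  imagePrompt: string;
}

// Parse the model's raw text and confirm it has the expected shape,
// failing early instead of letting a malformed response flow downstream.
function parseArtifactResponse(raw: string): ArtifactResponse {
  const data = JSON.parse(raw) as Partial<ArtifactResponse>;
  for (const field of ["title", "artworkDescription", "imagePrompt"] as const) {
    if (typeof data[field] !== "string") {
      throw new Error(`Missing or invalid field: ${field}`);
    }
  }
  return data as ArtifactResponse;
}
```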
Expo Router with SSR: I wanted a multiplatform app without sacrificing the fast first-load that SSR gives you. Expo Router's server output mode made that possible without reaching for a separate framework.
Tamagui: It handles cross-platform styling and theming in a single package, which keeps the codebase lean and tight across web and mobile.
Turso + Drizzle: Turso is serverless and fast-to-start, which matters when the app scales down to zero between requests. Drizzle keeps the queries type-safe without the overhead of a heavier ORM.
Workflow Overview
1. MadlibsGenerator → artwork + story prompts
2. Claude API [1] → artwork description + image prompt → Valibot validation
3. Claude API [2] → story, metadata, quotes → Valibot validation
4. Leonardo.AI → image generation (returns image URL directly)
5. Atomic DB transaction → article + artwork + quotes
6. Claude API [3] → vision consistency check: reads image + story from DB, returns refined text → updates DB
7. Frontend rendering → SSR → React Query → Tamagui
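The flow above can be sketched as plain control flow with each stage stubbed out. Function names and return shapes here are assumptions for illustration, not the project's real code:

```typescript
type Seed = { material: string; object: string };

// Hedged sketch of the daily pipeline: each dependency stands in for a
// real stage (Claude calls, Leonardo, the atomic DB transaction).
async function runPipeline(deps: {
  madlibs: () => Seed;
  describeArtwork: (seed: Seed) => Promise<string>;        // Claude call [1]
  writeStory: (description: string) => Promise<string>;    // Claude call [2]
  generateImage: (description: string) => Promise<string>; // Leonardo, returns URL
  saveAll: (story: string, imageUrl: string) => Promise<void>; // atomic DB tx
  refine: (story: string, imageUrl: string) => Promise<string>; // Claude call [3]
}): Promise<{ story: string; imageUrl: string }> {
  const seed = deps.madlibs();
  // Story and image both derive from the same artwork description,
  // which keeps text and visuals pointed at the same object.
  const description = await deps.describeArtwork(seed);
  const story = await deps.writeStory(description);
  const imageUrl = await deps.generateImage(description);
  await deps.saveAll(story, imageUrl);
  const refined = await deps.refine(story, imageUrl);
  return { story: refined, imageUrl };
}
```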
Biggest Challenges
Controlled chaos. LLMs excel at creating the variety of outcomes a living museum of the weird requires. However, left to their own devices they tend to settle into patterns: the same narrative beats, the same tone, the same kinds of objects.
The MadlibsGenerator breaks that inertia by assembling a randomized starting point from curated word lists before anything reaches Claude, while the surrounding prompt structure keeps the output from going fully off the rails.
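A minimal sketch of the idea, with tiny placeholder word lists standing in for the project's much larger curated ones:

```typescript
// Placeholder word lists; the real curated lists are far larger.
const materials = ["jade", "brass", "whalebone", "obsidian"];
const objects = ["ceremonial vessel", "music box", "astrolabe", "reliquary"];
const oddities = ["that hums at dusk", "missing its shadow", "older than its maker"];

// Pick one entry; rng is injectable so tests can be deterministic.
function pick<T>(list: T[], rng: () => number = Math.random): T {
  return list[Math.floor(rng() * list.length)];
}

// Assemble a randomized seed phrase before anything reaches the LLM.
function generateSeed(rng: () => number = Math.random): string {
  return `A ${pick(materials, rng)} ${pick(objects, rng)} ${pick(oddities, rng)}`;
}
```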
Inconsistencies between image and text. When you generate text and its accompanying image independently, they tend to drift apart. Claude writes a story about a jade ceremonial vessel from the Han dynasty, and Leonardo produces something that looks like a Victorian jewellery box. Both are fine individually, but together they break the illusion.
The fix came in three layers:
- The image prompt is generated in the same structured JSON call as the story, so both share the same creative seed.
- The artwork description is passed as a parameter into both the story and image generation steps. The text generations always work from the same source of truth.
- A final vision checker has Claude look at the produced image alongside its own story and reconcile any remaining gaps.
By the end, the text and image aren’t just loosely related. They’re describing the same object.
Keeping the AI-generated data honest. LLMs are inconsistent: Claude might skip fields, return malformed JSON, or hallucinate details wholesale.
I solved this by adding strict data validation to the Claude responses via Valibot schemas, but it required careful strategizing about what "good enough" looks like at each stage of the pipeline.
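The real pipeline validates with Valibot schemas; this is a hedged, dependency-free sketch of the surrounding pattern, where a stage's output is re-requested until it passes validation or a retry budget runs out:

```typescript
// Generic retry-until-valid wrapper. `generate` stands in for a Claude call,
// `validate` for a schema safeParse; both are illustrative names.
async function generateValidated<T>(
  generate: () => Promise<unknown>,
  validate: (raw: unknown) => T | null, // null signals failed validation
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const candidate = validate(await generate());
    if (candidate !== null) return candidate;
  }
  throw new Error(`No valid output after ${maxAttempts} attempts`);
}
```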
An unusually complex architecture. Building a server-side rendered web app with Expo and Expo Router is not a well-trodden path. Expo is primarily a native app framework, and bending it into a production SSR setup (with an Express server, a custom adapter, and a multi-stage Docker build) required piecing together documentation that doesn’t always assume this use case.
Adding Tamagui to the mix raised the difficulty further. Its SSR mode is unforgiving: theme state, hydration order, and the timing of client-side JavaScript all have to line up perfectly or things break in ways that are hard to trace.
As an example, getting dark mode, server rendering, and dynamic theming to coexist without cascading re-renders or hydration mismatches was a technically demanding part of the project.
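One common way to dodge that class of mismatch is to resolve the theme deterministically on the server, so the HTML and the first client render agree. This is a hypothetical helper under assumed conventions (a `theme` cookie), not the project's actual code:

```typescript
type Theme = "light" | "dark";

// Resolve the theme from the request's cookie header so SSR output and the
// first client render hydrate to the same value. Cookie name is an assumption.
function themeFromCookie(cookieHeader: string | undefined): Theme {
  const match = /(?:^|;\s*)theme=(dark|light)/.exec(cookieHeader ?? "");
  return (match?.[1] as Theme) ?? "light";
}
```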
What’s Next
- Native mobile app — the codebase already targets React Native, so publishing to iOS and Android is a natural next step.
- Search and filtering — let visitors dig through the collection by medium, era, or keyword.
- Audio narration — an AI voice reading each story aloud, proper museum audio guide style.
- Visitor interactions — guestbook notes, favourites, social sharing.
- Expanded categories — more obscure and strange corners of the art world to explore.