Join DATALAND for a one-day-only pop-up exhibition, offering a first glimpse into its universe of AI Arts before it opens its doors to the public.
Where human imagination meets the creative potential of machines.
This exhibition offers a rare glimpse into the making of Refik Anadol Studio’s DATALAND, the world’s first Museum of AI Arts, opening in Spring 2026 at The Grand LA. Conceived as a living system rather than a static museum, DATALAND explores how human and machine intelligence can co-create new forms of beauty, knowledge, and experience.
Here, visitors encounter the building blocks of this vision: large-scale projections, multisensory transparent screens, and experimental AI environments drawn from the Studio’s ongoing research. Each element represents a stage in the creative and technological process behind DATALAND’s design—from early prototypes of the Living Encyclopedia to data-driven experiments in scent, landscape, and sound.
The exhibition introduces AI as a form of intelligence capable of perceiving, translating, and reimagining the natural world through data. The three-screen scent experiences—Flora, Landscape, and Ocean—invite a synesthetic encounter between sensory perception and computation. The Living Encyclopedia demos and teasers offer a first look at how DATALAND will function as an evolving archive, transforming information into immersive experiences that connect art, science, and public imagination.
This preview reveals the process—the experiments, systems, and architectures that will animate DATALAND's five galleries. It invites audiences to witness a new paradigm for cultural institutions: one where art is generated in real time, where data becomes a creative material, and where the museum itself learns, adapts, and dreams.
This installation invites viewers into the vast, living landscape of the Large Nature Model, an AI system trained on millions of ethically sourced images gathered in collaboration with the world's leading institutions, such as the Smithsonian. Presented as a real-time UMAP browser, the experience allows users to fly through the model’s latent universe—an abstract space where visual, auditory, and ecological data merge. Each point represents a fragment of the natural world, from coral reefs to forest canopies, organized by the machine’s own sense of relation. Running on a high-performance GPU pipeline, Data Universe: LNM transforms data into a luminous field of cognition, a visualization of how artificial intelligence learns from, reimagines, and dreams about the planet.
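The idea of "flying through" a latent universe can be sketched in a few lines: high-dimensional embeddings are projected down to a navigable 2D map of points. This minimal sketch uses random stand-in features (the Large Nature Model's real embeddings are not public) and a PCA projection via SVD as a lightweight stand-in for UMAP, which serves the same role of mapping latent space to browsable coordinates.

```python
import numpy as np

# Hypothetical stand-in features; each row plays the role of one
# image embedding from the model's latent space.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))

# PCA via SVD as a simple stand-in for UMAP: center the data, then
# project every point onto the top two principal directions.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # (1000, 2) map to fly through

print(coords.shape)  # (1000, 2)
```

In the actual installation this 2D (or 3D) point cloud would be rendered and navigated in real time on the GPU; the projection step above is the conceptual core.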
Begin your journey into the DATALAND experience: $350 per year
Deepen your access and expand your experience: $750 per year
Experience DATALAND at its most complete level: $1,500 per year
$10 per month
$100
The Living Encyclopedia reimagines the museum archive as a dynamic, ever-learning ecosystem. In Research Mode, users explore datasets that span nature, culture, and creativity, walking through AI-curated connections rather than static categories. It also functions as a self-navigating stream of discovery, sequentially generating images from species labels in the LNM to offer an ambient, purely observational mode of exploration. This mode demonstrates how large-scale data—text, image, sound—can be clustered, visualized, and translated into new forms of knowledge. It is an archive that studies itself, inviting audiences to discover how machines can trace patterns of meaning across the vast, living memory of the world.
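The clustering described above, where large-scale data is grouped into AI-curated connections rather than static categories, can be illustrated with a toy k-means pass over placeholder embeddings. The features and cluster count here are invented for illustration; the Studio's actual pipeline is not public.

```python
import numpy as np

# Random stand-ins for mixed-media (text/image/sound) embeddings.
rng = np.random.default_rng(1)
features = rng.normal(size=(300, 32))

# Plain Lloyd's algorithm k-means: assign each point to its nearest
# center, then recompute centers, repeating a fixed number of rounds.
k = 5
centers = features[rng.choice(len(features), size=k, replace=False)]
for _ in range(20):
    dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([
        features[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
        for j in range(k)
    ])
```

Each resulting cluster plays the role of one "AI-curated connection" a visitor might walk through, in place of a hand-made category.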
Create Mode transforms the Living Encyclopedia into a generative studio. Here, visitors can prompt the system to compose visuals and sounds derived from the nature-focused dataset. Each output is unique—an emergent artwork shaped by human input and algorithmic imagination. Within the generative studio, users can revisit their outputs and generate new prompts for inspiration. This mode exemplifies the museum’s ethos of co-creation: every interaction expands the model’s sensory vocabulary, blurring the line between observer and collaborator.
Three transparent screens—Flora, Landscape, and Ocean—translate environmental datasets into multisensory experiences that merge sight, sound, and scent. Each sequence visualizes the rhythms of natural systems—plant life, terrain, and marine environments—while a custom scent composition converts their data patterns into olfactory form. The result is a synesthetic dialogue between computation and the senses, where climate data becomes something that can be seen, heard, and inhaled.
Derived from Refik Anadol Studio’s Large Nature Model: Coral, these AI-generated sculptures reimagine the intelligence of coral reefs through data. Each form emerges from high-resolution environmental datasets documenting the biodiversity and fragility of reef ecosystems. The resulting structures exist between the organic and the synthetic, echoing coral’s own logic of growth, pattern, and interdependence. In translating oceanic data into sculptural form, the work transforms invisible ecological processes into tangible, glowing architectures of life.