In 2015, the Strong National Museum of Play in Rochester, New York, opened an exhibit called "World Video Game Hall of Fame." But this wasn't a traditional museum exhibit—dusty consoles behind glass with "Do Not Touch" signs. Visitors could play the inducted games: Pong, Pac-Man, Tetris, Doom.
The museum faced a curatorial question that would be absurd in traditional museums: Should we let people touch the artifacts? For video games, the answer had to be yes. A game you can't play is like a book you can't read—the medium requires interaction. But interaction means degradation (controllers wear out, CDs get scratched). Museums typically preserve by preventing use. Here, use was preservation—keeping the experience alive.
This is the paradox of digital memory institutions: the artifacts are not just objects to be stored, but experiences to be resurrected. A GeoCities homepage isn't just HTML files—it's the experience of navigating a web ring, seeing an under-construction GIF, signing a guestbook.
The Haunted Forest is our metaphor for digital memory institutions—places where murdered platforms and their artifacts exist in liminal space between dead and alive. Not quite functional (the original platform is gone), but not quite inert (the artifacts still haunt us with meaning). Memory institutions curate this haunting—they don't just store, they interpret, contextualize, and make accessible.
This chapter explores how to build memory institutions for digital culture—museums, archives, libraries, memorials, and research collections that preserve not just bits, but meaning.
Traditional memory institutions (museums, archives, libraries) each have distinct missions. Digital memory institutions inherit these missions but must adapt them:
The Library
Traditional Mission:
Collect materials (books, journals, media)
Catalog and organize
Lend for temporary use
Provide public access
Digital Adaptation: The Web Library
Example: Internet Archive's Wayback Machine
What it does:
Crawls and stores snapshots of websites over time
Makes them publicly browsable (800+ billion pages)
Free access, no login required
Preserves web as if it were a lending library ("borrow" access to past versions)
How it embodies Library mission:
Comprehensive collection: Aims to archive "everything" (like Library of Congress)
Public access: Anyone can browse, no restrictions (unlike research archives)
Findability: URL-based access (like call numbers)
Circulation: Multiple users can "use" the same archived page simultaneously
Challenges:
Scale is overwhelming (800B pages—impossible to curate comprehensively)
Context is minimal (sites preserved but not explained)
Robots.txt compliance (respects site owners' wishes not to be archived—some historically important sites excluded)
When to use Library model:
Comprehensive preservation is goal
Public access is priority
Resources allow for massive scale
The Archive
Traditional Mission:
Preserve unique/rare materials
Maintain original order and provenance
Restrict access to protect fragile items
Serve researchers, not general public
Digital Adaptation: The Restricted Research Archive
Example: Library of Congress Twitter Archive (2006-2017)
What it does:
Preserved all public tweets (billions) from 2006-2017
Metadata-only access (can search, but can't read full tweet text without special permission)
Researcher access requires application and justification
Not publicly browsable
How it embodies Archive mission:
Provenance: Preserves complete record (all tweets, in order, with timestamps)
Restriction: Protects privacy (can't mass-surveil via archive)
Research focus: Designed for scholars, not casual browsing
Permanence: Committed to preserving forever (unlike platforms)
Challenges:
Restrictions limit utility (researchers frustrated by access barriers)
Metadata-only access means context is hard to reconstruct
2017 cutoff (stopped collecting—now only selective acquisition)
When to use Archive model:
Privacy concerns require restricted access
Materials are sensitive or contested
Focus is on research, not public engagement
The Museum
Traditional Mission:
Collect objects of cultural/historical significance
Curate exhibitions (select and interpret)
Educate public through display
Create narrative and meaning
Digital Adaptation: The Curated Digital Museum
Example: Cameron's World (GeoCities Archive as Art)
What it does:
Selects GIFs, backgrounds, and UI elements from archived GeoCities sites
Arranges them into a sprawling, interactive web collage
Provides essays explaining GeoCities aesthetics and culture
Makes 1990s web design comprehensible and beautiful
How it embodies Museum mission:
Curation: Selects from vast archive (not comprehensive, but meaningful)
Interpretation: Explains why GeoCities mattered aesthetically and culturally
Exhibition: Public display, visually engaging
Education: Teaches people who never experienced GeoCities what it felt like
Another Example: The Strong Museum's Video Game Hall of Fame
What it does:
Inducts significant games into "Hall of Fame" (selective canon)
Makes games playable on museum floor (interactive exhibits)
Provides historical context (when released, why important, cultural impact)
Preserves hardware and software together (full experience)
Challenges:
Curation is subjective (who decides what's significant?)
Resources limit scope (can't exhibit everything)
Interpretation can impose narrative (risk of revisionism)
When to use Museum model:
Scale is manageable (curated collections, not comprehensive dumps)
Public engagement is goal (exhibitions, education)
Narrative and interpretation are central
The Memorial
Traditional Mission:
Honor the dead or lost
Create space for grief and remembrance
Preserve memory of trauma or tragedy
Offer emotional, not just intellectual, engagement
Digital Adaptation: The Platform Memorial
Example: The September 11 Digital Archive
What it does:
Collected personal stories, emails, photos, websites created in response to 9/11
Community submissions (people contributed their own materials)
Public access (browsable, searchable)
Emotional framing (archive as act of collective mourning)
How it embodies Memorial mission:
Commemoration: Preserves tragedy and response
Personal stories: Not just official record, but individual experiences
Emotional resonance: Designed to evoke feeling, not just document facts
Community ownership: People participated in creating the archive
Another Example: Hypothetical "GeoCities Memorial"
What it could do:
Frame GeoCities shutdown as cultural loss (murder, not natural death)
Invite former GeoCities users to submit memories ("Where were you when GeoCities died?")
Create virtual memorial wall (names of lost sites, like the Vietnam Veterans Memorial wall)
Offer space to grieve the loss of early web's optimism
Challenges:
Emotional framing can seem melodramatic (is a platform shutdown worth mourning?)
Risk of nostalgia (romanticizing past at expense of present)
Who is being memorialized? (platform? users? era?)
When to use Memorial model:
Cultural loss is central (not just preservation, but acknowledging grief)
Community needs space to mourn
Emotional engagement is goal
The Research Collection
Traditional Mission:
Provide raw materials for scholars
Emphasis on completeness and accuracy
Minimal interpretation (let researchers draw conclusions)
Standardized formats for analysis
Digital Adaptation: The Research Dataset
Example: Pushshift Reddit Archive
What it does:
Archived every Reddit post and comment (billions) in machine-readable format
Made available to researchers (JSON files, searchable API)
Minimal curation (raw data dumps)
Used for: sociology research, hate speech studies, meme diffusion analysis
How it embodies Research Collection mission:
Completeness: Everything archived, not curated sample
Machine-readable: JSON, CSV, SQL—formats for computational analysis
Researcher-focused: Not public-friendly (requires technical skill)
Neutral: Doesn't interpret data, just provides it
Challenges:
Reddit demanded takedown (2023)—Pushshift stopped providing public access
Ethical issues: Contains hate speech, harassment, doxxing (should researchers have access?)
No interpretation: Requires expertise to use (not accessible to public)
When to use Research Collection model:
Scale is massive (too large for manual curation)
Goal is to enable research (not public exhibition)
Materials are best understood through computational analysis
When designing a memory institution for murdered digital artifacts, choose your model based on:
Scale:
Comprehensive (Library/Research Collection)
Archive everything or nearly everything
Minimal selectivity
Example: Internet Archive's Wayback Machine
Curated (Museum/Memorial)
Select significant subset
Intensive interpretation
Example: Strong Museum's Video Game Hall of Fame
Access:
Open (Library/Museum)
Public can browse freely
No restrictions (or minimal)
Example: Cameron's World, Internet Archive
Restricted (Archive/Research Collection)
Requires application, credentials, or justification
Protects privacy or sensitivity
Example: LOC Twitter Archive
Interpretation:
High Interpretation (Museum/Memorial)
Curators provide context, narrative, meaning
Exhibitions tell stories
Example: 9/11 Digital Archive with framing essays
Low Interpretation (Archive/Research Collection)
Minimal curation, let materials speak for themselves
Provenance and metadata, but not narrative
Example: Pushshift raw data dumps
Experience:
Experiential (Museum)
Artifacts are interactive (playable games, browsable sites)
Focus on recreating original experience
Example: Strong Museum playable games
Documentary (Archive/Library)
Artifacts viewed as historical record
Screenshots, descriptions, metadata
Original experience not replicable
Example: Static screenshots of Flash games (not playable)
| Institution Type | Scale | Access | Interpretation | Experience |
|---|---|---|---|---|
| Library | Comprehensive | Open | Low | Documentary |
| Archive | Comprehensive | Restricted | Low | Documentary |
| Museum | Curated | Open | High | Experiential |
| Memorial | Curated | Open | High | Emotional |
| Research Collection | Comprehensive | Restricted | Minimal | Data-focused |
Hybrid Models Are Common:
Internet Archive = Library + Archive (comprehensive + some restrictions)
Strong Museum = Museum + Research Collection (curated exhibits + comprehensive archives in back)
9/11 Archive = Memorial + Library (emotional framing + open access)
Museums don't display everything they own. The vast majority of the Smithsonian's collections sit in storage; only a small fraction is ever on exhibit. Digital memory institutions face the same question: What do we make visible?
The Comprehensive Warehouse
Philosophy: Archive everything, make it all accessible, let users find what they want.
Example: Internet Archive's Wayback Machine
Strengths:
No gatekeeping (curators don't impose their taste)
Serendipity (users discover unexpected things)
Completeness (future researchers have maximum material)
Weaknesses:
Overwhelming (800B pages—where do you start?)
No hierarchy (spam and Shakespeare equally visible)
Context is absent (sites preserved without explanation)
Best for: Platforms with structured URLs (websites) where users know what they're looking for
The Curated Canon
Philosophy: Select the "most important" artifacts, create a canon.
Example: Strong Museum's Video Game Hall of Fame (inducts a handful of games each year)
Strengths:
Manageable (visitors can engage deeply with 50 games, not 50,000)
Narrative coherence (tells story of video game history)
Educational (curators explain why these games matter)
Weaknesses:
Elitism (who decides what's "important"?)
Exclusion (marginalizes non-canonical work)
Revisionism (canon reflects curator bias)
Best for: Platforms where a small subset represents the whole (pioneering games, influential creators)
Thematic Collections
Philosophy: Organize by themes, movements, or communities.
Example: Hypothetical "Tumblr Fanfiction Archive" organized by fandom, pairing, rating, era
Strengths:
Findability (users navigate by interest, not chronology)
Contextual (themes provide interpretive frame)
Inclusive (multiple themes accommodate diverse interests)
Weaknesses:
Subjective (who decides themes?)
Overlapping (artifacts fit multiple themes—where do they go?)
Incomplete (not everything fits a theme)
Best for: Platforms with identifiable communities or genres (fanfiction, meme culture, activist organizing)
Chronological Preservation
Philosophy: Preserve everything in temporal order, like a timeline.
Example: Internet Archive's snapshots (sites preserved as they changed over time)
Strengths:
Objectivity (chronology is neutral)
Change visible (see how platforms evolved)
Completeness (nothing excluded for thematic reasons)
Weaknesses:
No hierarchy (early posts equal to late posts)
Narrative absent (time alone doesn't explain meaning)
Scale issues (decades of daily posts = overwhelming)
Best for: Platforms where temporal evolution is key (Twitter's changing culture, YouTube's algorithm shifts)
Community Curation
Philosophy: Let users/creators curate their own materials.
Example: 9/11 Digital Archive (community submissions), Fanlore (fan-created wiki)
Strengths:
Authenticity (communities define their own history)
Consent (creators choose what's shared)
Diversity (avoids institutional bias)
Weaknesses:
Unevenness (some creators participate, others don't)
Coordination challenges (requires infrastructure for submissions)
Quality varies (no editorial oversight)
Best for: Platforms where community identity is strong (fandoms, activist movements, hobbyist communities)
Algorithmic Curation
Philosophy: Use algorithms to select representative samples or identify significant patterns.
Example: Using view counts, shares, replies to identify "most influential" tweets
Strengths:
Scalability (algorithms process massive datasets)
Consistency (uniform criteria applied at scale, though algorithms carry their own biases)
Discovery (finds patterns humans miss)
Weaknesses:
Black box (users don't know why things were selected)
Bias (algorithms reflect creator bias and training data)
Misses margins (algorithms favor mainstream)
Best for: Platforms with clear metrics (views, likes, shares) and massive scale
Digital artifacts exist in layers. How much of each layer do you preserve?
Level 1: Documentation
What's preserved: Screenshots, descriptions, metadata
What's lost: Interactivity, experience, technical details
Example: Wikipedia article about Vine (describes it, but can't show it)
Pros: Cheap, easy, lightweight
Cons: Least faithful to original
When to use: Platform is already dead, no way to preserve fully; documentation better than nothing
Level 2: Static Capture
What's preserved: HTML, CSS, images (rendered as static files)
What's lost: JavaScript interactivity, dynamic content, databases
Example: Archived GeoCities sites (HTML works, but embedded widgets/scripts don't)
Pros: Relatively easy, preserves visual appearance
Cons: Non-interactive sites feel "dead"
When to use: Static websites, blogs, simple HTML pages
Level 3: Emulation
What's preserved: Full functionality via emulator (browser, OS, hardware)
What's lost: Original hardware experience (speed, bugs, quirks)
Example: Flash games playable via Ruffle emulator, DOS games via DOSBox
Pros: Fully interactive, close to original experience
Cons: Requires maintaining emulators (which can themselves become obsolete)
When to use: Complex platforms requiring specific environments (Flash, Java, old browsers)
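In practice, emulation-based exhibits are scripted: a kiosk or launcher just wraps the emulator's command line. A minimal sketch for DOSBox, where the `dosbox_command` helper and file paths are hypothetical but `-exit` and `-fullscreen` are real DOSBox flags:

```python
import shlex

def dosbox_command(game_path: str, fullscreen: bool = False) -> list[str]:
    """Build a DOSBox invocation for one preserved DOS game.

    Assumes the `dosbox` binary is on PATH; `-exit` quits the
    emulator when the program ends, which suits kiosk use.
    """
    cmd = ["dosbox", game_path, "-exit"]
    if fullscreen:
        cmd.append("-fullscreen")
    return cmd

# What a museum kiosk launcher would hand to subprocess.run(cmd):
print(shlex.join(dosbox_command("archive/games/KEEN1.EXE")))
# dosbox archive/games/KEEN1.EXE -exit
```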
Level 4: Source Code
What's preserved: Actual code, databases, server configurations
What's lost: Nothing (in theory)—but requires technical expertise to run
Example: GitHub archives of open-source projects
Pros: Most faithful; can be recompiled, forked, modified
Cons: Requires developer skills; dependencies may be obsolete
When to use: Open-source platforms, when preserving for future developers (not just users)
Level 5: Live System
What's preserved: Original infrastructure still running
What's lost: Nothing (it's still alive)
Example: Old arcade games kept running on original hardware by collectors
Pros: Perfect fidelity
Cons: Expensive, fragile (hardware fails), not scalable
When to use: High-value artifacts where experience depends on specific hardware (rare)
Level 6: Resurrection
What's preserved: Platform rebuilt from scratch for modern environments
What's lost: Bugs, quirks, historical authenticity (new code ≠ old code)
Example: Homestar Runner rebuilt in HTML5 (originally Flash)
Pros: Accessible on modern devices, no emulation needed
Cons: Not "authentic" (it's a recreation, not a preservation)
When to use: Cultural value is high, original platform can't run anymore, resurrection is only option
Higher fidelity = higher cost (time, storage, maintenance, expertise)
Strategy: Tiered preservation
Level 1-2 (documentation/static): Archive everything
Level 3-4 (emulation/source): Archive high-value subset
Level 5-6 (live/resurrection): Only most culturally significant artifacts
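The tiered strategy can be expressed as a small policy function. A sketch in which the 1-5 significance score and the thresholds are illustrative assumptions, not part of the original scheme:

```python
def preservation_level(significance: int) -> str:
    """Assign a preservation tier from a curatorial significance score.

    `significance` (1-5) is a hypothetical rating; the tier names
    mirror the six fidelity levels described above.
    """
    if significance >= 5:
        return "live system / resurrection"    # levels 5-6
    if significance >= 3:
        return "emulation / source code"       # levels 3-4
    return "documentation / static capture"    # levels 1-2

for score, item in [(5, "canonical game"), (3, "popular site"), (1, "spam page")]:
    print(f"{item}: {preservation_level(score)}")
```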
Preserving artifacts is half the battle. Making them findable and usable is the other half.
URL-Based Access
How it works: Every artifact has a permanent URL; users navigate directly or via search engines
Example: Internet Archive's Wayback Machine (web.archive.org/web/TIMESTAMP/URL)
Pros:
Simple, intuitive
Integrates with web (can link from anywhere)
Decentralized (no need for central index)
Cons:
Requires knowing URL (hard if you don't remember the site)
No thematic browsing (can't explore by topic)
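The URL pattern above is predictable enough to generate links programmatically. A minimal sketch; the `wayback_url` helper is hypothetical, but the `/web/TIMESTAMP/URL` pattern is the one shown above:

```python
WAYBACK = "https://web.archive.org/web"

def wayback_url(url: str, timestamp: str) -> str:
    """Build a Wayback Machine link for one archived page.

    `timestamp` is YYYYMMDDhhmmss; the Wayback Machine also accepts
    shorter prefixes (e.g. "2009") and serves the nearest capture.
    """
    return f"{WAYBACK}/{timestamp}/{url}"

print(wayback_url("http://www.geocities.com/", "20091026"))
# https://web.archive.org/web/20091026/http://www.geocities.com/
```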
Keyword Search
How it works: Full-text search across all preserved content
Example: Archive.org's search bar, Google Books
Pros:
Powerful discovery (find anything containing keyword)
Don't need to know exact URL
Cons:
Overwhelming (thousands of results)
Poor for browsing (good for finding specific thing, bad for exploration)
Curated Exhibits
How it works: Curators create thematic collections or virtual exhibitions
Example: Strong Museum's Hall of Fame induction pages, Cameron's World
Pros:
Guided experience (learn through narrative)
Manageable scope (100 items, not 100,000)
Contextual (exhibits explain significance)
Cons:
Limited (most collection not exhibited)
Curator bias (what's not exhibited is invisible)
Community Wiki
How it works: Community members add metadata, tags, context
Example: Fanlore (fan-created wiki about fandom history), Wikipedia's coverage of internet culture
Pros:
Distributed labor (community shares work)
Insider knowledge (fans know context outsiders miss)
Self-updating (as community learns, wiki improves)
Cons:
Uneven coverage (popular fandoms well-documented, niche ones ignored)
Quality varies (no editorial oversight)
Requires active community (if community dies, wiki stagnates)
Research API
How it works: Machine-readable access (JSON, CSV, SQL) for computational analysis
Example: Pushshift API, Twitter Academic API
Pros:
Enables large-scale research (computational humanities, data science)
Flexible (researchers query exactly what they need)
Cons:
Not user-friendly (requires programming skill)
Not browsable (can't casually explore)
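In practice, API-style access often means newline-delimited JSON, one record per line, which researchers stream and filter. A sketch; the `filter_dump` helper and sample records are hypothetical, with field names following Reddit's export convention:

```python
import json

def filter_dump(lines, subreddit):
    """Filter a Pushshift-style newline-delimited JSON dump.

    Field names (`subreddit`, `author`, `body`) follow the Reddit
    export convention; adjust them for other datasets.
    """
    for line in lines:
        record = json.loads(line)
        if record.get("subreddit") == subreddit:
            yield record

dump = [
    '{"author": "alice", "subreddit": "AskHistorians", "body": "..."}',
    '{"author": "bob", "subreddit": "gaming", "body": "..."}',
]
print([r["author"] for r in filter_dump(dump, "gaming")])  # ['bob']
```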
Most memory institutions use multiple access methods:
URLs for direct access (if you know what you want)
Search for discovery (find specific content)
Exhibits for education (learn about the platform/era)
Wiki for context (community-generated metadata)
API for research (scholars analyze at scale)
Example: Internet Archive
Wayback Machine: URL-based
Search bar: keyword search
Collections: curated thematic groups (e.g., "Grateful Dead Live Concerts")
API: developers can query programmatically
Memory institutions must navigate thorny legal and ethical issues:
Problem: Most preserved content is copyrighted. Does archiving violate copyright law?
Legal Frameworks:
Fair Use (US)
Preservation may qualify as fair use (transformative, educational, minimal market harm)
Case law: Authors Guild v. Google (Google Books scanning ruled fair use)
But: Fair use is a defense, not a right—you could still be sued
Section 108 (US Copyright Law)
Libraries and archives can preserve copyrighted works under certain conditions:
Non-commercial purpose
Closed systems (access in library only, or limited digital access)
But: Doesn't cover web scraping or mass digitization clearly
DMCA Safe Harbor
Platforms not liable for user-uploaded content if they respond to takedowns
Memory institutions use this (Internet Archive responds to DMCA requests)
International Variations:
EU: Orphan Works Directive allows preservation of works with unknown copyright holders
Canada: Fair Dealing (narrower than US fair use)
Practical Strategy:
Preserve first, respond to takedowns if challenged (Internet Archive's approach)
Or: Seek permissions (time-consuming, often impossible)
Or: Restrict access (preserve but don't make public)
Problem: Archived content may contain personal information people no longer want public.
Ethical Questions:
Should you preserve someone's teenage LiveJournal (they might be embarrassed now)?
Should you archive doxxing or harassment (evidence of harm, but re-publicizes victim info)?
Should you preserve medical/financial/intimate details shared on forums?
Frameworks:
Right to Be Forgotten (GDPR, EU)
Individuals can request deletion of personal data
Applies to archives? Unclear (exemptions for journalism/research/public interest)
Contextual Integrity (Helen Nissenbaum)
Privacy violated when information flows across contexts inappropriately
Example: Forum post meant for small community, now archived and Google-indexed = context collapse
Practical Approaches:
Takedown Policies:
Allow individuals to request removal (Internet Archive honors requests)
Review case-by-case (balance individual privacy vs. historical value)
Restricted Access:
Preserve but don't make publicly searchable
Researcher-only access (requires IRB approval)
Anonymization:
Remove or redact names, usernames, identifying details
But: Can harm historical accuracy
Problem: Did creators consent to their work being preserved?
Arguments:
Implied Consent:
By posting publicly, you consented to archiving (like publishing a book)
Counterargument: Expectation of ephemerality (users assumed their posts would vanish with the platform, not live on in an archive)
Explicit Consent:
Only archive if creator explicitly agrees
Counterargument: Impractical (can't contact millions of users)
Posthumous:
If creator is dead, do we need consent from estate?
Historical materials often preserved without consent (diaries, letters found after death)
Practical Strategy:
Default to preserving (implied consent for public posts)
Honor explicit deletion (if someone deleted content, don't resurrect without reason)
Provide opt-out (let creators request removal)
Problem: Some content causes harm if preserved (hate speech, doxxing, revenge porn).
Ethical Framework:
Do No Harm Principle:
If preserving causes direct, ongoing harm (reveals someone's address, enables harassment), don't do it
Historical Value vs. Harm:
Hate forums: preserve for research (understanding extremism), but restrict access (don't make recruitment tool)
Revenge porn: don't preserve (no historical value justifies harm)
Contextual Judgment:
Evaluate case-by-case
Consult affected communities when possible
The Strong National Museum of Play
What they do:
Curate exhibitions of video games, toys, and play
Make games playable (interactive exhibits)
Preserve hardware and software together
Host researchers (extensive archives beyond exhibits)
Why it works:
Curation: Selective canon (Hall of Fame inductees)
Experience: Games are played, not just viewed
Interpretation: Context provided (essays, talks, labels)
Institutional stability: Endowed museum (not dependent on platform survival)
Challenges:
Limited scope (only games, not all digital culture)
Geography-bound (must visit Rochester to play games)
The Internet Archive
What they do:
Crawl and preserve websites (Wayback Machine)
Archive books, music, video, software
Open access (free, no login)
Advocate for digital rights (lawsuits for fair use)
Why it works:
Scale: 800+ billion web pages
Longevity: 29 years and counting
Public good: Non-profit, funded by donations and services
Legal courage: Willing to defend fair use in court
Challenges:
Scale makes curation impossible (overwhelming)
Robots.txt compliance excludes important sites
Funding precarity (dependent on donations)
Fanlore
What they do:
Wiki documenting fandom history (ships, tropes, controversies, communities)
Created and maintained by fans
Covers all fandoms (TV, books, games, RPF, etc.)
Why it works:
Insider knowledge: Fans document nuances outsiders miss
Community ownership: Fans preserve their own history
Distributed labor: Thousands of contributors
Challenges:
Uneven coverage (big fandoms well-documented, small ones sparse)
Vandalism/edit wars (controversial topics fought over)
Succession (if volunteer community dwindles, wiki could die)
Flashpoint Archive
What they do:
Preserve 500,000+ Flash games and animations
Provide custom launcher with embedded emulator
Community-curated (volunteers add games, metadata)
Why it works:
Rescue mission: Saved massive amount of content before Flash died
Playability: Games fully functional (not just archived)
Community-driven: Distributed effort (volunteers curate, test, tag)
Challenges:
Maintenance burden (emulators need updates as OSes change)
Copyright gray area (hosting games without explicit permission)
Curation slow (500K games, but millions more exist—can't save everything)
Define Your Mission
Questions:
What are you preserving? (specific platform, genre, community, era)
Why does it matter? (cultural significance, underrepresentation, endangerment)
Who is your audience? (general public, researchers, community members)
What type of institution? (library, archive, museum, memorial, research collection)
Example Mission: "The Tumblr Fandom Archive preserves fanworks (fanfiction, fan art, meta) from Tumblr's golden age (2010-2016), focusing on marginalized fandoms and LGBTQ+ creators. Our audience is fans, scholars, and future generations interested in transformative works. We are a community-driven digital museum with curated exhibits and open archives."
Plan Your Acquisition
How will you acquire content?
Option A: Scrape
Use tools (wget, ArchiveBox, Heritrix)
Pros: Comprehensive
Cons: Legal gray area, may violate ToS
Option B: Community Submissions
Invite creators to submit their work
Pros: Consent-based, community-driven
Cons: Incomplete (only those who participate)
Option C: Partnerships
Work with platform for data dump
Pros: Legal, comprehensive
Cons: Requires cooperation (rare)
Option D: Hybrid
Scrape public content + invite submissions for private/deleted content
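The scraping option can be sketched with Python's standard library alone. The `archive/<slug>/<timestamp>.html` layout and the `snapshot_path`/`capture` helpers are assumptions of this sketch, not a standard; wget, ArchiveBox, and Heritrix each define their own storage layouts:

```python
import re
import time
import urllib.request
from pathlib import Path

def snapshot_path(url: str, when: float, root: str = "archive") -> Path:
    """Derive a filesystem path for one capture: root/<slug>/<timestamp>.html."""
    slug = re.sub(r"[^a-zA-Z0-9.-]+", "_", url.split("://", 1)[-1]).strip("_")
    stamp = time.strftime("%Y%m%d%H%M%S", time.gmtime(when))
    return Path(root) / slug / f"{stamp}.html"

def capture(url: str) -> Path:
    """Fetch one page and store it under a timestamped path."""
    dest = snapshot_path(url, time.time())
    dest.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp:
        dest.write_bytes(resp.read())
    return dest

# capture("https://example.com/")  # run on a live network connection
```

Note what this sketch omits: link discovery, rate limiting, and robots.txt handling, all of which the dedicated tools provide.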
Build Your Infrastructure
Technical Needs:
Storage: Servers or cloud (how much data?)
Redundancy: Multiple backups (LOCKSS principle)
Access platform: Website for browsing (static site, database-driven, CMS)
Options:
Self-hosted: Full control, but maintenance burden
Cloud: Scalable, but ongoing costs
Partner with institution: University library hosts (stability, but less autonomy)
Budget:
Small project: $100-500/year (domain, shared hosting, cloud storage)
Medium project: $5K-50K/year (dedicated servers, staff time)
Large project: $100K+/year (institutional scale, like Internet Archive)
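The redundancy requirement above (the LOCKSS principle: lots of copies, checked against each other) reduces to a fixity comparison. A minimal sketch; `verify_replicas` is a hypothetical helper:

```python
import hashlib

def fixity(data: bytes) -> str:
    """SHA-256 digest used as the fixity value for one stored file."""
    return hashlib.sha256(data).hexdigest()

def verify_replicas(replicas: list[bytes]) -> bool:
    """True when every copy of an artifact has the same digest.

    Run periodically across backups so silent corruption in any one
    copy is detected while the other copies can still repair it.
    """
    digests = {fixity(copy) for copy in replicas}
    return len(digests) == 1

good = [b"<html>geocities</html>"] * 3
bad = [b"<html>geocities</html>", b"<html>gecities</html>"]
print(verify_replicas(good), verify_replicas(bad))  # True False
```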
Design Your Curation
How will you organize content?
Metadata Schema:
Dublin Core (standard for libraries)
Custom schema (specific to your domain)
Essential Fields:
Title, Creator, Date, Description, Tags/Categories, Source URL, Archive Date
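The essential fields above can be captured as a simple record type. A sketch in which the `CatalogRecord` class and the example values are hypothetical; the first four fields loosely follow Dublin Core element names:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class CatalogRecord:
    """One catalog entry holding the essential metadata fields.

    title/creator/date/description map onto Dublin Core elements;
    tags, source_url, and archive_date are project-specific.
    """
    title: str
    creator: str
    date: str
    description: str
    tags: list = field(default_factory=list)
    source_url: str = ""
    archive_date: str = ""

rec = CatalogRecord(
    title="Untitled fan page",            # hypothetical example values
    creator="anonymous GeoCities user",
    date="1998",
    description="Fan shrine with a guestbook and autoplaying MIDI.",
    tags=["geocities", "fandom"],
)
print(json.dumps(asdict(rec), indent=2))  # serializes cleanly for storage
```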
Curation Approach:
Comprehensive warehouse (minimal curation)
Thematic collections (curated exhibits)
Algorithmic (automated tagging, recommendations)
Community-driven (user-submitted metadata)
Design Access and Discovery
How will users find content?
Build:
Search functionality (full-text or metadata)
Browse by category, date, creator
Featured/curated collections (homepage highlights)
API (for researchers)
Tools:
Static site generator (Jekyll, Hugo) for simple projects
Database + CMS (WordPress, Omeka) for complex projects
Custom web app (Flask, Django, Rails) for maximum control
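At its simplest, the search item above is substring matching over metadata. A toy sketch; the `search` helper and sample catalog are hypothetical, and a real project would use a database's full-text index instead:

```python
def search(records: list[dict], query: str) -> list[dict]:
    """Naive case-insensitive search over every metadata value."""
    q = query.lower()
    return [
        r for r in records
        if any(q in str(v).lower() for v in r.values())
    ]

catalog = [
    {"title": "Pong exhibit notes", "tags": "arcade"},
    {"title": "GeoCities glitter GIFs", "tags": "web"},
]
print([r["title"] for r in search(catalog, "geocities")])
# ['GeoCities glitter GIFs']
```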
Set Legal and Ethical Policies
Document:
Copyright stance (fair use, takedown policy)
Privacy policy (what personal info do you collect/preserve?)
Consent framework (do you allow opt-out?)
Access restrictions (public, researcher-only, embargoed)
Get advice:
Consult copyright lawyer
Follow models (Internet Archive's policies, university IRB guidelines)
Launch and Sustain
Launch:
Soft launch (invite community, gather feedback)
Public announcement (blog post, social media, press)
Ongoing:
Add content regularly (don't let it stagnate)
Respond to takedown requests
Update software/emulators (prevent bit rot)
Fundraise (donations, grants, sponsorships)
Succession Planning:
What happens if you can't maintain it? (partner institution, hand off to community, deposit in Internet Archive)
Memory institutions for digital culture are not just storage facilities—they're acts of interpretation. Every curatorial choice (what to preserve, how to display, who gets access) shapes how future generations understand our present.
The Haunted Forest is haunted precisely because these artifacts are liminal—neither alive nor fully dead. They exist in the gap between platform death and historical canonization. Memory institutions curate this gap, transforming murdered platforms into ghosts that can teach, inspire, and warn.
When you build a memory institution, you're not just saving bits. You're:
Resurrecting experience (making dead platforms playable, browsable, meaningful)
Creating context (explaining why this mattered, what it meant, who it served)
Enabling research (providing materials for scholars)
Honoring loss (memorializing what platforms murdered)
Building canon (deciding what future remembers)
The question isn't just "Can we preserve this?" but "How do we make this meaningful for people who never experienced it?"
In the next chapter, we move from memory institutions to political economy—examining the Sovereignty Stack and how to redesign the infrastructure that platforms control.
But first, go build a memory institution. Even a small one. Preserve something meaningful to you. Curate it. Interpret it. Make it accessible.
The Haunted Forest needs its curators.
Institutional Identity: If you were building a memory institution for a murdered platform, which model (library, archive, museum, memorial, research collection) would you choose? Why?
Curation vs. Comprehensiveness: Should memory institutions try to preserve everything, or curate selectively? What are the ethical stakes of each approach?
Fidelity Trade-offs: How much technical fidelity is "enough"? When is a screenshot sufficient vs. needing full emulation?
Access Politics: Who should have access to preserved materials? Public? Researchers only? Community members only? How do you balance openness with privacy?
Canon Formation: Who decides what's "historically significant"? How do we avoid reproducing bias in digital preservation?
Your Own Archive: What digital artifact from your life would you want preserved in a memory institution? How would you want it curated and displayed?
Scenario: Choose a platform that has died or is dying (MySpace, Vine, GeoCities, Google+, Tumblr's NSFW content, etc.). Design a memory institution to preserve and present it.
Part 1: Mission and Model (500 words)
What's your institution called?
What type (library, archive, museum, memorial, research collection, hybrid)?
What's your mission statement?
Who is your audience?
Part 2: Collection Strategy (500 words)
What will you preserve? (everything, curated subset, specific communities)
How will you acquire it? (scraping, submissions, partnership)
What's the scale? (how much content, storage needed)
Part 3: Curatorial Approach (500 words)
How will you organize content? (comprehensive warehouse, thematic collections, chronological, community-driven)
What metadata will you capture?
What level of technical fidelity? (documentation, static, emulation, source code, live)
Part 4: Access and Discovery (500 words)
How will users find content? (URL-based, search, exhibits, wiki, API)
What's publicly accessible vs. restricted?
How will you handle privacy/consent/copyright?
Part 5: Implementation Plan (500 words)
What infrastructure do you need? (servers, storage, software)
Budget estimate (startup + annual maintenance)
Staffing (who does what? volunteers or paid?)
Sustainability plan (how do you keep it running for 50 years?)
Part 6: Reflection (300 words)
What's the biggest challenge?
What compromises did you make (fidelity vs. budget, comprehensiveness vs. curation, access vs. privacy)?
Would you actually want to build this? Why or why not?
Kirshenblatt-Gimblett, Barbara. Destination Culture: Tourism, Museums, and Heritage. University of California Press, 1998.
Young, James. The Texture of Memory: Holocaust Memorials and Meaning. Yale University Press, 1993.
Crane, Susan. "Memory, Distortion, and History in the Museum." History and Theory 36, no. 4 (1997): 44-63.
Manoff, Marlene. "Archive and Database as Metaphor: Theorizing the Historical Record." Portal: Libraries and the Academy 10, no. 4 (2010): 385-398.
Brügger, Niels. "Website History and the Website as an Object of Study." New Media & Society 11, no. 1-2 (2009): 115-132.
Owens, Trevor. The Theory and Craft of Digital Preservation. Johns Hopkins University Press, 2018.
Hooper-Greenhill, Eilean. Museums and the Interpretation of Visual Culture. Routledge, 2000.
Macdonald, Sharon, ed. A Companion to Museum Studies. Wiley-Blackwell, 2011.
Pearce, Susan. Museums, Objects and Collections. Leicester University Press, 1992.
Newman, James. Best Before: Videogames, Supersession and Obsolescence. Routledge, 2012.
Guttenbrunner, Mark, et al. "Keeping the Game Alive: Evaluating Strategies for the Preservation of Console Video Games." International Journal of Digital Curation 5, no. 1 (2010): 64-90.
Internet Archive. "About the Internet Archive." https://archive.org/about/
The Strong Museum. "About the Strong." https://www.museumofplay.org/about/
Fanlore. "About Fanlore." https://fanlore.org/wiki/Fanlore:About
Flashpoint Project. https://flashpointarchive.org/
End of Chapter 14
Next: Part IV — Systems & Movements Chapter 15 — The Political Economy of Digital Ground