This Is Strange — Immersive Creative Entertainment Platform
- 2M+ page views
- 4.2 min average session
- 50K+ social shares
Overview
This Is Strange is an immersive creative entertainment platform that blends generative art, interactive storytelling, and dynamic narrative experiences. Users don't just read or watch — they interact with stories that respond to their choices, generating unique visual and narrative outcomes that are shareable as personalized artifacts.
The platform drove 2M+ page views and 50K+ social shares organically — not through paid acquisition, but because every user's experience was genuinely unique and share-worthy. I was the full-stack engineer responsible for the WebGL art engine, the GSAP animation orchestration, and the backend story-state management.
The Challenge
The creative brief was ambitious: experiences that felt alive and personal, rendered entirely in the browser, with no two users seeing exactly the same thing. The technical challenge was making generative complexity work across the full spectrum of user devices — from a MacBook Pro with a dedicated GPU to a 4-year-old Android phone on 3G.
Performance was existential. A creative experience that stutters or takes 10 seconds to load doesn't create wonder — it creates frustration. The 4.2-minute average session time we achieved was only possible because we obsessed over first paint and kept frame rates locked at 60fps on mid-range hardware.
Architecture & Technical Decisions
Generative Art Engine with Three.js
The visual layer used Three.js with a custom generative system built on seeded noise functions. Each user's session ID seeded a Simplex noise function that controlled: particle field behavior, color palette selection, geometry morphing speed, and light source positioning. The same seed always produced the same visual output — making experiences reproducible when shared — but different seeds produced visually distinct experiences.
- Simplex noise seeded from session ID for deterministic personalization
- WebGL particle systems with up to 50K particles on high-end devices
- Adaptive quality system: device capability detected at load, particle count and shader complexity scaled accordingly
- Low-end fallback: CSS-only animations for devices without WebGL support
- GPU memory budget enforced: textures unloaded when scene transitioned
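The seeding idea above can be sketched as follows. This is a minimal illustration, not the production engine: the helper names (`hashSeed`, `mulberry32`, `visualParams`) and the specific parameter ranges are hypothetical, and a real build would feed the seed into a proper Simplex noise implementation rather than a bare PRNG.

```javascript
// Hypothetical sketch: deterministic per-session visual parameters.
// Same session ID -> same parameters, so shared URLs reproduce the look.

// Hash an arbitrary session ID string into a 32-bit integer seed (FNV-1a).
function hashSeed(sessionId) {
  let h = 2166136261;
  for (let i = 0; i < sessionId.length; i++) {
    h ^= sessionId.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

// mulberry32: tiny deterministic PRNG; same seed yields the same sequence.
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Derive the knobs listed above: palette, morph speed, light position.
// Ranges here are illustrative, not the production values.
function visualParams(sessionId) {
  const rand = mulberry32(hashSeed(sessionId));
  return {
    paletteIndex: Math.floor(rand() * 8), // pick one of 8 palettes
    morphSpeed: 0.2 + rand() * 0.8,       // geometry morphing rate
    lightAngle: rand() * Math.PI * 2,     // light source orbit angle
  };
}
```

Because every derived value flows from the seed, opening a shared URL with the same session ID regenerates an identical scene, while a fresh session ID yields a visually distinct one.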
GSAP Animation Orchestration
The narrative layer used GSAP timelines to synchronize text reveals, particle behavior changes, and scene transitions. Timelines were data-driven — each story chapter defined its animation sequence in a JSON config, letting the creative team iterate on pacing without touching code. GSAP's ScrollTrigger plugin handled parallax depth effects without hand-written scroll handlers, batching position updates through a single optimized listener rather than running work on every scroll event.
- Data-driven GSAP timelines from JSON story configs
- ScrollTrigger for parallax without scroll-jank
- Preload next chapter assets during current chapter playback
- FLIP animation technique for layout transitions (no layout thrashing)
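A data-driven timeline can be sketched like this. The JSON shape (`steps`, `target`, `vars`, `position`) is hypothetical, not the production schema; the builder takes the timeline as an argument so it can receive a real `gsap.timeline()` in the browser or a stub in tests.

```javascript
// Illustrative sketch: build a chapter's animation from JSON config.
// `timeline` is any object exposing GSAP's chaining .to(target, vars,
// position) API — e.g. gsap.timeline() in production.
function buildChapterTimeline(timeline, chapterConfig) {
  for (const step of chapterConfig.steps) {
    // GSAP position parameter: "<" starts the tween alongside the
    // previous one; omitted means play sequentially.
    if (step.position !== undefined) {
      timeline.to(step.target, step.vars, step.position);
    } else {
      timeline.to(step.target, step.vars);
    }
  }
  return timeline;
}

// Example config the creative team could edit without touching code
// (selectors and values are made up for illustration):
const chapterOne = {
  steps: [
    { target: ".title",     vars: { opacity: 1, duration: 0.8 } },
    { target: ".particles", vars: { density: 0.5, duration: 2 }, position: "<" },
    { target: ".scene",     vars: { x: -200, duration: 1.2 } },
  ],
};
```

In the browser this would be invoked as `buildChapterTimeline(gsap.timeline(), chapterOne)`, so pacing changes stay in the config file.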
Story State & Share Infrastructure
Each user's story choices and the resulting generative parameters were stored as a compact state object (< 1KB) in the URL's hash fragment. Sharing the URL reproduced the exact visual and narrative experience for the recipient. For longer state objects, we stored them server-side with a short ID. AWS CloudFront served all static assets from edge locations for sub-100ms TTFB globally.
- Story state encoded in URL hash for instant, server-free sharing
- Server-side state storage for complex states (> 1KB) with short-code URLs
- CloudFront edge delivery: TTFB <80ms in 95% of tested locations
- Open Graph image generated server-side from story state for rich social previews
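The hash-fragment scheme above can be sketched as follows. This is a simplified illustration: the field names are hypothetical, and the production encoding was more compact than the plain JSON + percent-encoding shown here.

```javascript
// Illustrative sketch: encode story state into a URL hash fragment so
// sharing the link reproduces the exact experience, with a fallback
// path for states over the 1KB budget.

function encodeState(state) {
  return encodeURIComponent(JSON.stringify(state));
}

function decodeState(fragment) {
  return JSON.parse(decodeURIComponent(fragment));
}

// Choose between server-free hash sharing and server-side short codes.
function shareUrl(baseUrl, state) {
  const encoded = encodeState(state);
  if (encoded.length <= 1024) {
    return baseUrl + "#" + encoded; // fits in the hash: no server needed
  }
  // Over budget: in production the state would be POSTed to the backend,
  // which returns a short ID for a compact URL (hypothetical endpoint).
  return null;
}
```

On the receiving end, the app would read `location.hash.slice(1)`, pass it through `decodeState`, and re-seed the generative engine from the recovered state.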
Results
- 2M+ page views over the platform's first 6 months post-launch
- 4.2-minute average session duration (vs. 1.1 minutes for typical content sites)
- 50K+ organic social shares — zero paid promotion during the measurement period
- 60fps sustained on mid-range hardware (tested on devices 3+ years old)
- Core Web Vitals: LCP 1.2s, CLS 0, FID 8ms — passing all thresholds
- Zero WebGL crashes in production (adaptive fallback system caught all unsupported devices)
What I Learned
This project taught me that performance and artistry aren't opposites — they're collaborators. The generative art was only possible because we made hard technical choices: an adaptive quality system that traded visual fidelity for frame rate on slower devices, aggressive asset preloading, and a GPU memory budget that felt constraining but forced creative solutions. The 4.2-minute session time wasn't despite the technical constraints — it was partly because of them. Constraints forced elegance, and elegance created wonder.