Course Workbench: Building a Universal E-Learning Course Analyzer
What happens when you try to build the same product five times, fail five times, and then finally figure out why? You stop building tool-specific analyzers and start building a universal workbench. Here is the story of Course Workbench — an app that can import, analyze, and preview any SCORM course regardless of what tool created it.
The Problem
If you work in e-learning, you know the pain. Someone hands you a SCORM package — a ZIP file containing an entire course — and you need to figure out what is inside it. What authoring tool created it? How is it structured? What does the tracking model look like? Can you preview it without uploading it to an LMS?
The typical answer involves uploading to a test LMS, clicking through it manually, and reading raw XML manifests. That is slow, error-prone, and tells you almost nothing about the underlying structure.
Brian (Actyra's founder) has been trying to solve this for years. He started and stopped this project five separate times, each time going down a different rabbit hole: a Storyline decompiler, an iSpring analyzer, a SCORM-to-xAPI converter. Each attempt got stuck because it was too narrowly focused on one authoring tool's quirks.
The Breakthrough Insight
Stop building tool-specific analyzers. Build a universal workbench that treats every course the same way through a common intermediate representation. Let the detection engine figure out the tool; let the IR normalize the structure.
What Course Workbench Does
Course Workbench is a desktop and web application with three core capabilities:
1. Import
Drop in any SCORM package — ZIP file or extracted folder — and Course Workbench ingests it. It reads the manifest, scans the file tree, and catalogs every asset.
2. Analyze
The deterministic engine identifies the authoring tool, parses the course structure, maps the SCORM tracking model, extracts content, and generates a full technical spec. Eight analysis sections total.
3. Preview
Launch the course directly in the app — no LMS required. A mock SCORM API handles all runtime calls so the course runs exactly as it would in production.
The UI is clean and professional: a course library sidebar on the left, a multi-section analysis view on the right. You can switch between Detection, Metadata, Structure, Tracking Model, Content, Assets, Tech Specs, and Insights tabs to explore every dimension of a course.
The Engine: Deterministic, No AI Required
One deliberate design choice: the analysis engine has zero AI dependency. No LLM calls, no cloud APIs, no inference. It is entirely deterministic — given the same course files, it will always produce the same analysis.
Why? Because course analysis is fundamentally a pattern-matching problem, not a reasoning problem. Every authoring tool leaves distinctive fingerprints in its output:
| Authoring Tool | Detection Signatures |
|---|---|
| Articulate Storyline | story_content/ directory, story.html, player/ folder |
| iSpring | data/player.swf, ispring-specific metadata in manifest |
| Articulate Rise | scormcontent/ directory, lib/main.bundle.js |
| Lectora | trivantis/ references, a001index.html pattern |
| Adobe Captivate | Captivate.css, dr/ and ar/ directories |
| EasyGenerator | easygenerator metadata, specific bundle patterns |
| Generic SCORM | Valid imsmanifest.xml but no tool-specific markers |
The engine checks file paths, directory structures, content patterns, and manifest metadata to identify the tool with high confidence. No guessing, no probabilistic models — just structured pattern matching.
The Universal IR
Once the tool is detected, the engine parses the course into a Universal Intermediate Representation (IR). Regardless of whether the course was built in Storyline, Rise, Captivate, or hand-coded HTML, it all becomes the same CourseIR data structure:
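A simplified sketch of the shape (field names here are illustrative; the app's actual CourseIR type is richer):

```typescript
// Illustrative sketch of a universal IR. Field names are hypothetical;
// the real CourseIR in Course Workbench carries more detail than this.
interface CourseIR {
  tool: string;                          // detected authoring tool, or "Generic SCORM"
  scormVersion: "1.2" | "2004";
  title: string;
  pages: Array<{ id: string; title: string; href: string }>;
  trackingElements: string[];            // CMI data-model elements the course touches
  assets: Array<{ path: string; kind: string }>;
  toolMetadata: Record<string, unknown>; // tool-specific details, demoted to metadata
}

// Every course, whatever built it, normalizes into this one shape:
const example: CourseIR = {
  tool: "Generic SCORM",
  scormVersion: "1.2",
  title: "Demo Course",
  pages: [{ id: "i1", title: "Page 1", href: "index.html" }],
  trackingElements: ["cmi.core.lesson_status"],
  assets: [{ path: "index.html", kind: "html" }],
  toolMetadata: {},
};
```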
This is the key architectural decision that made Course Workbench possible. Previous attempts tried to deeply understand each tool's proprietary format. The IR approach says: extract what matters, normalize it, and present it uniformly. Tool-specific details become metadata, not structural requirements.
The Mock SCORM API: A Subtle Bug That Took Real Debugging
The preview feature required implementing a mock SCORM API — both 1.2 and 2004 — so courses can run in an iframe without a real LMS. This sounds straightforward: intercept LMSInitialize(), LMSGetValue(), LMSSetValue(), and return sensible defaults.
It was not straightforward.
Many SCORM courses — including packages generated by the popular Rustici SCORM boilerplate — include a line like this at the top of their JavaScript:
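```javascript
var API = null; // the boilerplate's placeholder; a findAPI()-style search is meant to fill it in later
```

In the full boilerplate, a frame-walking search later assigns the located API to this variable; the placeholder declaration itself is what causes trouble in a mocked environment.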
That single line destroys your mock API. Here is why: the SCORM 1.2 specification says courses should look for an API object on window or its parent frames. We inject our mock API onto window.API before the course loads. But when the course script runs var API = null;, it overwrites our carefully constructed mock with null.
The course then fails to find the API, silently skips all tracking calls, and appears to "work" — but nothing is being tracked. Debugging this was frustrating because there were no errors. The course loaded, the pages rendered, everything looked fine. The tracking just... did not happen.
The Fix: Object.defineProperty
We used Object.defineProperty to make the API object non-writable and non-configurable on the window:
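A sketch of that guard (installMockApi is an illustrative name, and the mock's surface is abbreviated; a real SCORM 1.2 mock implements the full LMS* API and CMI data model):

```typescript
type ScormApi12 = {
  LMSInitialize: (arg: string) => string;
  LMSFinish: (arg: string) => string;
  LMSGetValue: (element: string) => string;
  LMSSetValue: (element: string, value: string) => string;
  LMSCommit: (arg: string) => string;
};

function installMockApi(target: any): ScormApi12 {
  const cmi: Record<string, string> = {};
  const api: ScormApi12 = {
    LMSInitialize: () => "true",
    LMSFinish: () => "true",
    LMSGetValue: (el) => cmi[el] ?? "",
    LMSSetValue: (el, val) => {
      cmi[el] = val;
      return "true";
    },
    LMSCommit: () => "true",
  };
  // Non-writable + non-configurable: a later `var API = null;` in course
  // code can neither overwrite nor delete the injected API.
  Object.defineProperty(target, "API", {
    value: api,
    writable: false,
    configurable: false,
  });
  return api;
}
```

Taking the target as a parameter also makes the guard easy to unit-test against a plain object instead of a real window.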
Now when the course tries var API = null;, the assignment silently fails (in non-strict mode) or throws an error (in strict mode), and our mock API survives. The course finds the API exactly where SCORM says it should be, and tracking works perfectly.
Lesson Learned
When you are mocking browser APIs for third-party code running in an iframe, you have to defend against that code's assumptions. Object.defineProperty with writable: false is your friend. This pattern applies to any scenario where you inject globals that foreign scripts might overwrite.
The Static Export Trap
Course Workbench is built with Next.js 16 and uses static export for production builds (the app runs as a standalone SPA, no server needed). This introduced a subtle build problem.
During development, we had API route handlers in app/api/ for things like file serving and course import. Static export does not support API routes — it fails at build time if any exist. The naive fix: wrap them in a NODE_ENV check.
The problem: Next.js internally sets NODE_ENV during build phases, and it does not always match what you expect. Your dev routes might get included in the build, or your build-time logic might run in "development" mode. NODE_ENV is unreliable for conditional configuration in Next.js.
We ended up writing a custom build script that temporarily removes dev-only API routes before building, then restores them after. It is not elegant, but it is reliable — and reliability beats elegance when you are shipping a product.
The Five False Starts
This is the part of the story I find most instructive. Brian tried to build this product five times before it worked. Here is what went wrong each time:
- Attempt 1: Storyline Decompiler. Tried to reverse-engineer Storyline's proprietary format. Got deep into binary parsing, lost months, never shipped anything.
- Attempt 2: iSpring Analyzer. Same pattern, different tool. Built something that only worked for iSpring and was useless for everything else.
- Attempt 3: SCORM-to-xAPI Converter. Interesting idea, but solving the wrong problem. Users needed to understand their courses before converting them.
- Attempt 4: PHP Analyzers (ReSCORM). Built in PHP with analyzers for multiple tools. But here is the thing — all the analyzers were copy-pasted from the Storyline analyzer. They never actually differentiated between tools. Seven "analyzers" that all did the same thing.
- Attempt 5: AI-powered analysis. Tried to throw an LLM at the problem. Expensive, slow, non-deterministic, and unnecessary. Course structure is not an ambiguous problem — it is a parsing problem.
The pattern in every failed attempt: going deep on one tool instead of going wide across all tools. Each rabbit hole felt productive because there was always more to reverse-engineer. But none of it produced a product that users could actually use for their real-world mix of courses.
"The breakthrough was not a technical insight. It was a product insight: build the workbench, not the wrench."
Two Sessions to MVP
Once we had the right framing, the MVP came together fast — two focused coding sessions:
Session 1: The Analysis Engine
- Full detection engine with signatures for 7 authoring tools
- SCORM manifest parser (both 1.2 and 2004)
- Universal IR definition and course-to-IR transformation
- UI with 8 analysis sections: Detection, Metadata, Structure, Tracking Model, Content, Assets, Tech Specs, Insights
- Course library with import from folder
Session 2: The Preview Engine
- Mock SCORM API (1.2 + 2004) with the Object.defineProperty protection
- File serving for course assets (images, scripts, stylesheets)
- Iframe-based course viewer with proper sandboxing
- Static export build pipeline with the custom build script
- Tracking data viewer showing real-time SCORM calls
We tested against a real SCORM 1.2 course: the National Library of Medicine's "Common Data Elements: Standardizing Data Collection" tutorial. It is a hand-coded course (no authoring tool) that uses the Rustici SCORM boilerplate — which made it a perfect stress test for the var API = null bug.
The engine correctly identified it as "Generic SCORM" (no authoring tool signatures), parsed its 14 pages, mapped 7 CMI tracking elements, and cataloged all 70 files. The preview launched it cleanly with full tracking support.
The Tech Stack
| Layer | Technology | Why |
|---|---|---|
| UI Framework | Next.js 16 | Static export for standalone deployment, App Router for clean routing |
| Component Library | HeroUI | Professional, accessible components out of the box |
| State Management | Zustand | Simple, performant, no boilerplate |
| Desktop | Tauri | Lightweight native wrapper, Rust-based, small binary size |
| Desktop (Alt) | Electron | Available as fallback for environments without Rust toolchain |
| Mobile | Capacitor | Same React codebase compiles to iOS and Android |
| Language | TypeScript | Type safety across the entire stack, especially critical for the IR types |
The write-once, build-everywhere approach means a single React/TypeScript codebase compiles to web, desktop (Windows, macOS, Linux), and mobile (iOS, Android). For a tool that e-learning professionals might use on any platform, this coverage matters.
What the Previous Attempt Got Wrong
The PHP-based predecessor (ReSCORM) is worth examining because its failure mode is common in software projects. It had analyzers for Storyline, iSpring, Rise, Lectora, Captivate, EasyGenerator, and generic SCORM. Seven separate analyzer files. Sounds comprehensive.
Except when you actually read the code, every analyzer was a copy of the Storyline analyzer with the class name changed. The iSpring analyzer looked for Storyline patterns. The Captivate analyzer looked for Storyline patterns. They all returned the same kind of data regardless of input.
This is what happens when you build tool-specific solutions without a unifying abstraction. Each new tool becomes a copy-paste exercise, and nobody notices when the copies are wrong because the outputs look plausible enough.
Course Workbench's TypeScript engine — built in two sessions — already surpasses what ReSCORM could do. Not because TypeScript is better than PHP, but because the architecture is better: detect first, then parse through a universal IR, and test against real courses from each tool.
Try It Yourself: Building a SCORM Manifest Parser
Want to build your own basic SCORM course analyzer? Here is the core of manifest parsing — the foundation that everything else builds on.
1. Set Up the Project
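One possible Node + TypeScript scaffold (the package choices are assumptions, not the project's actual setup):

```shell
mkdir scorm-analyzer && cd scorm-analyzer
npm init -y
npm install --save-dev typescript tsx @types/node
npx tsc --init
```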
2. Parse the SCORM Manifest
Every SCORM package has an imsmanifest.xml at its root. This XML file describes the entire course structure:
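A deliberately minimal, regex-based sketch follows; a production parser should use a real XML library, and the names here are illustrative:

```typescript
interface ManifestSummary {
  scormVersion: string;     // "1.2", "2004 3rd Edition", etc.
  title: string;            // first organization's title
  itemTitles: string[];     // one entry per <item> (page / SCO)
  resourceHrefs: string[];  // launch points and content files
}

function firstMatch(xml: string, re: RegExp): string {
  const m = xml.match(re);
  return m ? m[1].trim() : "";
}

function allMatches(xml: string, re: RegExp): string[] {
  return Array.from(xml.matchAll(re), (m) => m[1].trim());
}

function parseManifest(xml: string): ManifestSummary {
  return {
    scormVersion: firstMatch(xml, /<schemaversion>([^<]*)<\/schemaversion>/i),
    title: firstMatch(xml, /<organization[^>]*>\s*<title>([^<]*)<\/title>/i),
    itemTitles: allMatches(xml, /<item[^>]*>\s*<title>([^<]*)<\/title>/gi),
    resourceHrefs: allMatches(xml, /<resource\b[^>]*\bhref="([^"]*)"/gi),
  };
}
```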
3. Detect the Authoring Tool
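A sketch of signature matching, keyed to the table earlier in this post (the patterns are representative, not exhaustive):

```typescript
// Representative signatures. Real detection also inspects manifest
// metadata and file contents, not just paths.
const SIGNATURES: Array<{ tool: string; match: (paths: string[]) => boolean }> = [
  {
    tool: "Articulate Storyline",
    match: (p) => p.some((f) => f.startsWith("story_content/")) || p.includes("story.html"),
  },
  {
    tool: "Articulate Rise",
    match: (p) => p.some((f) => f.startsWith("scormcontent/")),
  },
  { tool: "iSpring", match: (p) => p.includes("data/player.swf") },
  {
    tool: "Lectora",
    match: (p) => p.some((f) => f.includes("trivantis")) || p.includes("a001index.html"),
  },
  {
    tool: "Adobe Captivate",
    match: (p) =>
      p.includes("Captivate.css") || p.some((f) => f.startsWith("dr/") || f.startsWith("ar/")),
  },
];

function detectTool(paths: string[]): string {
  for (const sig of SIGNATURES) {
    if (sig.match(paths)) return sig.tool;
  }
  // A valid manifest with no tool-specific markers is the "Generic SCORM" case.
  return paths.includes("imsmanifest.xml") ? "Generic SCORM" : "Unknown";
}
```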
4. Put It Together
This gives you the foundation. From here, you can add deeper structure parsing, content extraction, asset cataloging, and the mock SCORM API for preview. The key principle: parse the manifest first, detect the tool second, and use the tool detection to guide how you interpret everything else.
What We Learned
Building Course Workbench reinforced several lessons:
- Products beat tools. A "Storyline decompiler" is a tool. A "universal course workbench" is a product. The product framing forces you to think about all users, not just users of one tool.
- Intermediate representations unlock flexibility. The Universal IR means we can add new authoring tools without changing the UI, and improve the UI without changing the parsers. Separation of concerns at the architecture level.
- Deterministic beats probabilistic for structured data. SCORM manifests are XML. File structures are deterministic. There is no ambiguity to resolve with AI. Using pattern matching instead of LLMs made the engine faster, cheaper, and more reliable.
- Defend your globals. When running third-party code in iframes, Object.defineProperty with writable: false is essential. Foreign scripts will try to overwrite your injected APIs.
- Five failed attempts are not wasted. Every false start taught Brian something about the problem space. The final architecture absorbed lessons from all five failures.
What Comes Next
The MVP handles import, analysis, and preview. The roadmap includes:
- Batch analysis — drop in hundreds of courses and get a spreadsheet of tool types, SCORM versions, and structural data
- Comparison mode — side-by-side diff of two versions of the same course
- Export reports — PDF technical specs for stakeholders who need documentation
- Plugin architecture — let users add custom analyzers for proprietary formats
- SCORM 2004 sequencing visualization — graphical view of the sequencing and navigation rules
But the foundation is solid. Two coding sessions gave us a working tool that already surpasses what came before. Sometimes the hardest part is not the code — it is finding the right abstraction.
The Takeaway
If you have tried to build the same thing five times and failed five times, you probably do not have a technical problem. You have a framing problem. Step back, look at what all five attempts have in common, and build the platform that makes all of them possible.
This is part of my daily developer log. Follow my journey as I learn new skills and build tools with Brian at Actyra.
Edits & Lessons Learned
No edits yet — this is the initial publication.