AI Writes Your Risk Report. It Doesn't Score It.
Structural Insight


How Global Solo uses AI for language while keeping risk scores, structure, and boundaries fully deterministic. A look inside the META diagnostic engine.

Jett Fu · 7 min read

When I started building the META diagnostic, I hit a problem that every AI product eventually faces: people want intelligence, but they also want consistency.

If you're a cross-border entrepreneur paying $99 for a risk assessment, you need to know the same structural setup produces the same result every time. Not "roughly similar." The same scores. The same sections. The same structural mapping. Two people with identical entity structures, identical tax residency patterns, identical documentation levels getting different risk scores? The product is broken.

That tension shaped every architectural decision in the engine. Here's how it works and why we drew the lines where we did.

The Core Principle: Determinism Over Intelligence

The META diagnostic maps structural risk across four dimensions: Money, Entity, Tax, and Accountability. Each dimension gets a score from 1 to 5. Those scores drive which sections appear in your report, which findings get highlighted, and how cross-dimensional patterns get detected.

I tested having the AI generate these scores early on. Bad idea. The same entrepreneur profile, submitted twice, would get a Money score of 3 one time and a 4 the next. The AI noticed different details each pass, weighted them differently, came back with different assessments.

For a diagnostic product, that kills trust instantly. If your doctor's blood test gave different results depending on which lab tech wrote up the report, you'd find a different doctor.

So we split the system in two. AI handles language. Everything else is deterministic.

What "Deterministic" Means in Practice

The engine runs in three stages. The line between rules and AI is sharp.

Stage 1: Analysis. Your answers map to 45 signals across the four META dimensions. A signal mapper converts each answer into a structured observation. No interpretation. If you report income from three countries, the signal records "income_country_count: 3." A scoring engine applies weighted rules to produce dimension scores. These rules are version-controlled and auditable. If the AI suggests a different score, the deterministic score wins. Always.
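As a rough sketch of how a deterministic Stage 1 can work, here is a toy signal mapper and weighted scoring rule set. The function names, signal keys, and weights are invented for illustration; Global Solo's actual 45-signal rule set is not public.

```python
# Hypothetical sketch of Stage 1. Names, weights, and signal keys are
# illustrative, not the production rules.

def map_signals(answers: dict) -> dict:
    """Convert raw answers into structured observations, with no interpretation."""
    return {
        "income_country_count": len(answers.get("income_countries", [])),
        "entity_count": len(answers.get("entities", [])),
    }

# Version-controlled weighted rules: each matching rule adds to the raw score.
MONEY_RULES = [
    (lambda s: s["income_country_count"] >= 3, 2),  # multi-country income flows
    (lambda s: s["entity_count"] == 0, 1),          # no entity holding funds
]

def score_dimension(signals: dict, rules) -> int:
    raw = 1 + sum(weight for predicate, weight in rules if predicate(signals))
    return min(raw, 5)  # scores are clamped to the 1-5 scale

signals = map_signals({"income_countries": ["US", "HK", "SG"], "entities": []})
print(score_dimension(signals, MONEY_RULES))  # → 4, every single run
```

Because the rules are plain data plus pure functions, they can be version-controlled, diffed, and unit-tested like any other code, which is what makes the scores auditable.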

Stage 2: Narration. This is where the AI earns its keep. Given the structured analysis, the AI writes the narrative sections of your report. It explains what the structure looks like, translates signal data into readable prose, and connects patterns across dimensions. AI is genuinely good at this part. It can turn a wall of structural data into something a founder actually wants to read.

But it writes under constraints.

Stage 3: Assembly. The final report gets assembled with zero AI involvement. Metadata, section ordering, score badges, boundary notices, coverage statistics. All deterministic templates built from Stage 1 analysis and Stage 2 narratives. No LLM calls. No creativity.
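Stage 3 can be thought of as a pure function of the Stage 1 analysis and the Stage 2 narratives. The field names in this sketch are invented; the point is only that assembly is template logic with no generation step.

```python
# Minimal sketch of deterministic assembly: no LLM calls, no creativity.
# All field names here are hypothetical.

def assemble_report(analysis: dict, narratives: dict) -> dict:
    return {
        "meta": {"engine_version": analysis["version"]},
        "sections": [
            {"dimension": d, "score": analysis["scores"][d], "narrative": narratives[d]}
            for d in ("money", "entity", "tax", "accountability")  # fixed ordering
        ],
    }
```

Given the same analysis and narratives, this function always emits the same report structure, which is exactly the property the assembly stage exists to guarantee.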

๐Ÿ“Š

How does your structure score?

Free 2-minute screening across Money, Entity, Tax, and Accountability.

Check Now

The Language Problem

AI-generated text drifts into advice. Ask a language model to describe a tax residency gap, and it will tack on "you may want to consult a tax professional" or "it is advisable to restructure your entity." The model is trained to be helpful. That's the whole problem.

For Global Solo, that drift is a product defect. We sell structural visibility. The moment a report says "you need to" or "we suggest," it crosses from diagnosis into recommendation, which carries different liability.

So every narrative runs through a language verifier that scans for four categories of prohibited patterns:

  • Directive words โ€” tells you what to do
  • Promise words โ€” predicts an outcome
  • Compliance assertions โ€” declares something legal or illegal
  • Benchmark language โ€” compares you to "most people" or "typical" founders

Any match gets flagged and rewritten. Regex-based sanitization strips these patterns before the text reaches the assembler. The result reads like a diagnostic observation, not a prescription.
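A verifier in this style can be a small set of compiled regexes keyed by category. The patterns below are examples I made up to show the shape of the check; the real prohibited-pattern lists are presumably far larger.

```python
import re

# Illustrative regex-based language verifier. The pattern lists are
# examples, not the production rule set.
PROHIBITED = {
    "directive": re.compile(r"\byou (need to|should|must)\b|\bwe suggest\b", re.I),
    "promise": re.compile(r"\bwill (save|protect|guarantee)\b", re.I),
    "compliance": re.compile(r"\b(is|are) (legal|illegal|compliant)\b", re.I),
    "benchmark": re.compile(r"\bmost (people|founders)\b|\btypical founders?\b", re.I),
}

def verify(text: str) -> list[str]:
    """Return the categories of prohibited language found in AI output."""
    return [name for name, pattern in PROHIBITED.items() if pattern.search(text)]

print(verify("You should restructure; most founders do."))
# → ['directive', 'benchmark']
```

Flagged text can then be routed back for rewriting or sanitized in place before it ever reaches the assembler.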

Sometimes the sanitized text reads awkwardly where a natural sentence got rewritten. I'm fine with that. Awkward-and-accurate beats smooth-and-liable.

What the AI Never Touches

Some components have a hard wall between them and any language model.

The scoring engine. 45 signals, weighted rules, structural floors and ceilings. If you have no documentation of authority relationships, your Accountability score cannot exceed 2, no matter what the AI thinks. These rules are tested and frozen.
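A structural ceiling like the Accountability cap can be expressed as a post-scoring clamp. The signal name here is a hypothetical stand-in for whatever the engine actually records about authority documentation.

```python
# Sketch of a structural ceiling; the signal name is hypothetical.
def apply_ceilings(signals: dict, scores: dict) -> dict:
    capped = dict(scores)
    # No documented authority relationships: Accountability cannot exceed 2.
    if not signals.get("authority_docs_present", False):
        capped["accountability"] = min(capped["accountability"], 2)
    return capped

print(apply_ceilings({"authority_docs_present": False}, {"accountability": 4}))
# → {'accountability': 2}
```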

The rendering layer. Every label, summary template, and stress-test scenario is a lookup table. "Score 1 in Money" always renders as the same label. "Critical risk in Entity" always triggers the same visual treatment. The rendering spec is a frozen document.
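A frozen rendering spec reduces to a lookup table: a given (dimension, score) key always resolves to the same label, and a missing key is a hard error rather than a prompt for generated fallback text. The labels below are placeholders, not the actual spec.

```python
# Sketch of a frozen rendering lookup; labels are illustrative placeholders.
SCORE_LABELS = {
    ("money", 1): "Critical structural exposure",
    ("money", 3): "Partial structural coverage",
}

def render_label(dimension: str, score: int) -> str:
    return SCORE_LABELS[(dimension, score)]  # no fallback, no generation

print(render_label("money", 1))  # → Critical structural exposure
```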

Cross-pattern detection. When the engine finds that your Tax score and Entity score create a tension โ€” tax residency in one country but entity registration in another โ€” that pattern is detected by rule, not AI. The AI only describes the pattern after the rules have already surfaced it.
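The residency-versus-registration tension described above is the kind of pattern a pure rule can catch. The field names in this sketch are assumptions about what the signal data might look like.

```python
# Illustrative cross-pattern rule; field names are assumptions.
def detect_residency_entity_tension(signals: dict) -> bool:
    """Flag when tax residency and entity registration sit in different countries."""
    residency = signals.get("tax_residency_country")
    registration = signals.get("entity_registration_country")
    return residency is not None and registration is not None and residency != registration

print(detect_residency_entity_tension(
    {"tax_residency_country": "PT", "entity_registration_country": "US"}
))  # → True
```

Only after a rule like this fires does the AI get asked to describe the pattern in prose.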


Why Not Just Use Templates for Everything?

Fair question. If determinism matters this much, why use AI at all?

Because the math doesn't work otherwise. 45 signals across 4 dimensions, 8 cross-pattern types, layer-specific sections. The combinatorial space is too large for static templates. A report for someone with a Delaware LLC and income from three countries looks nothing like a report for someone with a Hong Kong holding company and a single freelance income stream. The findings might overlap, but the narrative context is completely different.

Templates would either be uselessly generic ("Your Money score is 3") or require thousands of hand-written variants. AI generates language that fits each unique signal combination. The deterministic system keeps it within bounds.

Determinism in analysis, intelligence in expression.

The Cost of This Architecture

This approach costs money and speed.

The three-stage pipeline runs about $0.10 per META Diagnostic and $0.52 per L3 Judgment report. Every time. We don't cache results. Same input tomorrow gets a fresh run through the same deterministic analysis with freshly generated narratives.

No caching is intentional. If we update a scoring rule or add a new cross-pattern, every future report reflects the latest logic. Cached reports would carry stale analysis.

The bigger cost is development velocity. Every new feature has to respect the determinism boundary. Adding a signal means updating type definitions, the mapper, scoring rules, narrator prompts, and assembler output. Adding a section means updating section definitions, narrator templates, and assembly logic. There's no shortcut where "the AI will figure it out."

That rigidity is the product.

What This Means for Your Report

When you get a META Risk Profile, every score, finding, and structural pattern comes from auditable, repeatable rules. The language wrapping those findings is AI-generated and verified against a boundary checklist.

Run the same diagnostic twice with the same answers. You'll get the same scores, same findings. The sentences might vary, since the AI rarely produces identical text twice, but the diagnostic content won't.

That's the contract: the structure is deterministic, the language is generated within enforced boundaries.


Key Takeaways

  • Three-stage pipeline: deterministic analysis, AI narration, deterministic assembly
  • Risk scores (1-5 per dimension) come from weighted rules. If the AI disagrees, the rules win
  • A post-processing verifier strips directive, promise, and compliance language from all AI output
  • The rendering layer and scoring engine never touch an LLM
  • Reports are never cached so every run reflects the latest scoring rules
  • Development is slower as a result, but every report is auditable and repeatable


Jett Fu

Cross-border entrepreneur running businesses across the US, China, and beyond for 20+ years. I built Global Solo to map the structural risks I wish someone had shown me.
