The Geometry-Lock Protocol: How to Use AI Without Inventing Bolts
Ask Midjourney or DALL-E to "render a V8 engine" and you'll get something that looks convincing at a glance—but examine it closely and you'll find exhaust pipes that loop back into themselves, bolts with no threading, and cylinder heads that melt into the block.
This is called an AI hallucination, and in engineering, it's not a quirk—it's a catastrophic flaw. You cannot verify a design if the AI is inventing geometry.
Part 1: How Generative AI Creates Images
The Diffusion Process
Modern image generators (Stable Diffusion, DALL-E, Midjourney) use a technique called diffusion modeling. Here's a simplified explanation, with a minimal code sketch after the list:
- Training: The model is shown millions of images with text descriptions. It learns to associate visual patterns with words.
- Noise addition: During training, images are progressively corrupted with random noise until they become pure static.
- Denoising: The model learns to reverse this—given noisy pixels, predict what the "less noisy" version looks like.
- Generation: To create a new image, start with pure noise and repeatedly denoise, guided by a text prompt.
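The loop below sketches that generation step. It shows only the control flow; `denoise_step` is a placeholder standing in for a trained, prompt-conditioned model, not a real generator:

```python
import numpy as np

def generate(denoise_step, steps=50, shape=(64, 64, 3), seed=0):
    """Reverse diffusion in miniature: start from pure noise and repeatedly
    predict a slightly less noisy image. `denoise_step` is a stand-in for
    a trained, prompt-conditioned model; only the control flow is real."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)    # pure static
    for t in reversed(range(steps)):  # walk the noise level down to zero
        x = denoise_step(x, t)        # model's "less noisy" prediction
    return x
```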
The key insight: the model never "understands" 3D geometry. It learns statistical patterns in 2D pixel space. A bolt looks like a cylinder with ridges; the model generates something that matches that pattern—but there's no guarantee the ridges form a valid thread pitch.
Why This Fails for Engineering
| Requirement | Diffusion AI | Engineering Need |
|---|---|---|
| Dimensional Accuracy | None (pixels only) | ±0.01mm tolerance |
| Part Consistency | Varies per generation | Identical across views |
| Physical Validity | No physics (aesthetic only) | Manufacturable geometry |
| Verifiability | Cannot extract dimensions | Must match CAD spec |
Part 2: The "Geometry-Lock" Architecture
At Reific, we asked: can we use AI for its strengths (lighting, materials, scene composition) while eliminating its weaknesses (inventing geometry)?
The answer is a technique we call Geometry-Locked Generation.

How It Works
Extract Control Signals from CAD
Before any AI runs, we render the CAD model into "control images" (a sketch of deriving these follows the list):
- Depth Map: Distance from camera for each pixel
- Normal Map: Surface orientation for each pixel
- Edge Map: Lines where surfaces meet
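As a rough illustration, here is how normal and edge maps could be derived from a depth buffer with NumPy. The screen-space normal approximation and the edge threshold are illustrative choices, not the exact production method:

```python
import numpy as np

def control_maps(depth):
    """Derive normal and edge control images from a rendered depth buffer.
    `depth` is an (H, W) array of camera-space distances from the CAD
    render. The normal approximation and edge threshold are illustrative."""
    dzdy, dzdx = np.gradient(depth)                 # depth slope per pixel
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    normal_map = (normals + 1.0) / 2.0              # pack [-1, 1] into [0, 1]
    edge_map = np.hypot(dzdx, dzdy) > 0.01          # depth discontinuities
    return normal_map, edge_map
```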
Constrained Diffusion
We use ControlNet-style adapters that condition the diffusion process on these maps. The AI can change textures and lighting, but it cannot move silhouettes.
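The open-source building blocks for this are public. Here is a minimal sketch using Hugging Face's `diffusers` library with a depth ControlNet; it illustrates the technique, not Reific's production pipeline, and `depth_image` is assumed to be the CAD-derived depth map from the previous step:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Depth-conditioned Stable Diffusion: the ControlNet injects the depth
# map into the denoising process so silhouettes cannot move.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The prompt restyles materials and lighting; the depth map pins geometry.
image = pipe(
    "brushed aluminum housing, studio product lighting",
    image=depth_image,                  # CAD-derived depth map (PIL image)
    controlnet_conditioning_scale=1.0,  # full weight on the geometry signal
).images[0]
```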
Pixel-Space Verification
After generation, we run a validation pass that compares the output depth map to the input. If drift exceeds 0.5%, the render is flagged.
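A sketch of that check, assuming the output depth map is recovered from the generated image (for example by a monocular depth estimator; the recovery method is our assumption) and that "drift" means per-pixel depth error normalized by the scene's depth range:

```python
import numpy as np

def verify_geometry(input_depth, output_depth, max_drift=0.005):
    """Flag a render whose depth drifts from the CAD-derived depth map.
    `output_depth` is assumed to be recovered from the generated image;
    0.5% (0.005) is the flagging threshold from the text."""
    depth_range = np.ptp(input_depth)   # normalize by the scene's depth span
    if depth_range == 0:
        raise ValueError("input depth map is constant")
    drift = np.abs(output_depth - input_depth) / depth_range
    worst = float(drift.max())
    return worst <= max_drift, worst
```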
The Mathematical Guarantee
Because the control signals are derived directly from the NURBS geometry, and the diffusion model is forced to respect them, the output is mathematically constrained to match the input within pixel-level tolerance.
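One way to state that constraint formally (the normalization below is our reading of the 0.5% threshold, not a published spec): with $D_{\mathrm{in}}$ the CAD-derived depth map and $D_{\mathrm{out}}$ the depth map of the generated image, a render passes only if

$$
\max_{p}\;\frac{\left|D_{\mathrm{out}}(p)-D_{\mathrm{in}}(p)\right|}{\max_{q} D_{\mathrm{in}}(q)-\min_{q} D_{\mathrm{in}}(q)}\;\le\;0.005 .
$$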
"The lighting is generative. The geometry is absolute. 0.00mm drift."
Part 3: What Geometry-Lock Enables
With geometry hallucinations taken off the table, AI becomes genuinely useful for engineering visualization:
Automated Scene Setup
Drop in a model. The AI proposes 3-4 lighting setups based on the geometry's features (highlighting chamfers, filling cavities).
Material Interpretation
Prompt: "Make this anodized aluminum." The AI applies a physically plausible anodized finish without altering the underlying shape.
Context Generation
Prompt: "Place on a workshop bench." The AI generates a background scene, but the product geometry is locked and composited accurately.
Design Review Assistance
The AI can flag: "This camera angle reveals a low-fidelity internal component"—based on depth analysis, not aesthetics.
Part 4: Limitations and Transparency
Geometry-lock is powerful, but it's not magic. Here's what it cannot do:
- Generate new geometry: You can't prompt "add a handle here." The CAD is the single source of truth.
- Modify topology: Fillet radii, hole positions, and part counts are locked.
- Replace engineering judgment: It's a visualization tool, not a design tool.
We believe these limitations are features, not bugs. In an industry where "creative license" is a liability, strict constraints are the path to trust.
Compliance and Regulated Industries
Geometry-lock is especially valuable when visual outputs are part of a regulated workflow:
- Medical Devices (FDA/MDR): Marketing images must accurately represent the approved device geometry. AI hallucinations would create compliance risk.
- Aerospace (AS9100): Technical documentation images must match engineering source data. Geometry-lock provides auditability.
- Automotive (IATF 16949): Supplier communications require dimensional accuracy. Geometry-lock ensures renders match CAD within tolerance.
- Defense: Visual representations of controlled articles must not introduce errors. Geometry-lock provides a verification paper trail.
Key Takeaways
- Diffusion AI generates pixels, not geometry—it has no concept of 3D
- ControlNet-style adapters constrain generation to respect CAD silhouettes
- Geometry-lock enables AI styling without inventing bolts or modifying topology
- The limitation (can't add geometry) is the feature (can't hallucinate)
FAQ
Is this like ControlNet in Stable Diffusion?
Conceptually, yes. ControlNet and similar techniques condition diffusion on control signals (edges, depth). We extend this specifically for engineering accuracy with additional verification steps.
Can the AI suggest design changes?
Not with geometry-lock enabled. The output exactly matches the input CAD. For generative design, you'd use a completely different tool class (topology optimization, etc.).
How do I know the geometry hasn't drifted?
We run automated pixel-space comparison between input control images and output depth maps. Any drift above threshold triggers a warning.
Further Reading
- Zero-Trust Sharing — Protecting your IP when sharing renders
- The Non-Manifold Trap — Why CAD doesn't convert cleanly to mesh