Image to 3D Conversion
The core of SphereLinks is its image-to-3D pipeline. Upload any image and receive a textured, watertight 3D mesh within minutes.
The generation pipeline
Under the hood, SphereLinks runs a two-stage model:
Sparse-view reconstruction — The model infers 3D geometry from the single input image using an octree-based generative model (TRELLIS).
Texture baking — A texture is projected back onto the mesh surface from the input image's colours.
The result is a GLB file containing the mesh and embedded PBR textures.
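Because the output is a binary GLB file, a quick check of its fixed 12-byte header can catch truncated or corrupted downloads. This is a minimal sketch based on the public glTF 2.0 binary container layout; it is not part of SphereLinks itself:

```python
import struct

GLB_MAGIC = 0x46546C67  # ASCII "glTF" read as a little-endian uint32


def is_valid_glb(data: bytes) -> bool:
    """Check the 12-byte GLB header: magic bytes, version 2,
    and that the declared total length matches the data we hold."""
    if len(data) < 12:
        return False
    magic, version, length = struct.unpack("<III", data[:12])
    return magic == GLB_MAGIC and version == 2 and length == len(data)
```

For example, `is_valid_glb(open("model.glb", "rb").read())` returns False for a download that was cut off partway, because the length field in the header no longer matches the file size.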
Generation parameters
Steps — More inference steps give a smoother mesh but take longer. Default: 50.
SLAT steps — Controls detail resolution in the sparse latent space. Default: 25.
Seed — Fixed seed for reproducible results. Randomise to explore variations.
Texture size — 512 px for fast results, 1024 px for high-fidelity output.
Simplification ratio — Reduce poly count (0.9 = light reduction, 0.5 = heavy reduction). Lower values produce lighter meshes, better suited to real-time use.
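The parameters above can be grouped into a small settings object with the documented defaults and range checks. The field names and validation rules here are illustrative assumptions, not the SphereLinks API:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GenerationParams:
    """Hypothetical container for the generation parameters;
    names and ranges are illustrative, not the real client schema."""
    steps: int = 50               # more steps: smoother mesh, longer run
    slat_steps: int = 25          # detail resolution in the sparse latent space
    seed: Optional[int] = None    # None = randomise, int = reproducible
    texture_size: int = 512       # 512 for speed, 1024 for high fidelity
    simplify_ratio: float = 0.9   # 0.9 light reduction, 0.5 heavy reduction

    def validate(self) -> None:
        if self.steps < 1 or self.slat_steps < 1:
            raise ValueError("step counts must be positive")
        if self.texture_size not in (512, 1024):
            raise ValueError("texture_size must be 512 or 1024")
        if not 0.0 < self.simplify_ratio <= 1.0:
            raise ValueError("simplify_ratio must be in (0, 1]")
```

Validating before submitting a job gives an immediate local error instead of a failed run in the queue.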
Best practices
Shoot or select images where the subject is fully visible — truncated objects produce incomplete geometry.
Objects with distinct silhouettes reconstruct better than flat or very thin subjects.
Natural lighting with soft shadows is better than harsh highlights or studio flash.
Generation time
A typical job takes 60–120 seconds. The queue page shows live progress. You can close the browser and come back — jobs run in the background and results are saved to your account history.
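Since jobs run in the background, a script-based client would typically poll until the job finishes. A minimal polling sketch, assuming a `get_status` callable that reports one of "queued", "running", "done", or "failed" (hypothetical; the real client interface is not shown here):

```python
import time


def wait_for_job(get_status, poll_interval=2.0, timeout=300.0):
    """Poll a job-status callable until it reports completion.

    `get_status` stands in for whatever the real client exposes.
    Returns True on success, raises on failure or timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "done":
            return True
        if status == "failed":
            raise RuntimeError("generation job failed")
        time.sleep(poll_interval)  # back off between checks
    raise TimeoutError("job did not finish within the timeout")
```

Using `time.monotonic` rather than wall-clock time keeps the timeout correct even if the system clock changes while waiting.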
Start building in 3D today
Generate your first 3D model from an image in minutes.
Get started free