Introducing Prompt to 3D: Generate 3D Models from Text on SphereLinks
Today we’re launching Prompt to 3D — the ability to generate a production-ready 3D model directly from a text description, no image required.

From day one, SphereLinks has let you turn any photo into a textured GLB. Now you can skip the photo entirely. Describe what you want — “a worn leather backpack with brass buckles”, “a low-poly pine tree”, “an ornate wooden chess piece” — and our AI generates it in three dimensions.
Why Text-to-3D Changes the Workflow
Image-to-3D is powerful when you have a reference photo. But for concept work, game assets, early prototypes, and procedural content, there’s often no photo to start from — there’s only an idea.
Text prompts unlock a fundamentally different use case: 3D generation at the speed of thought. No sourcing product images. No waiting for a shoot. No licensing stock photos. Just a sentence and a GLB.
- Concept exploration: Rapidly prototype object ideas without any visual reference.
- Game & scene assets: Generate props, set pieces, and environmental details on demand.
- Ecommerce mock-ups: Visualise product concepts before a physical sample exists.
- Education & research: Instantly materialise objects for interactive 3D diagrams and visualisations.
How It Works
The workflow is the same one you already know — just with a text field instead of a file picker.
1. Describe your object. Type a clear, specific description in the prompt field. The more detail you include — material, colour, style, scale — the more closely the output will match your intent.
2. Generate. Our AI processes your prompt on dedicated GPU infrastructure and returns a fully textured 3D mesh.
3. Refine in the editor. Open the result in the built-in 3D editor to adjust materials, transform objects, or combine the generated asset with others in your scene.
4. Export. Download the production-ready GLB — compatible with Blender, Unity, Unreal Engine, Three.js, and every other tool in your pipeline.
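Because the export is a standard GLB container, you can sanity-check a downloaded file before importing it into your pipeline. The sketch below is a minimal, illustrative parser of the 12-byte GLB header defined by the glTF 2.0 specification; it is not part of SphereLinks, just a quick way to confirm a file really is a GLB.

```python
import struct

GLB_MAGIC = 0x46546C67  # the ASCII bytes "glTF", read as a little-endian uint32


def read_glb_header(data: bytes) -> dict:
    """Parse the 12-byte GLB header (magic, version, total length).

    Raises ValueError if the file is too short or the magic number
    does not match, which usually means the file is not a GLB.
    """
    if len(data) < 12:
        raise ValueError("not a GLB file: fewer than 12 bytes")
    magic, version, length = struct.unpack_from("<III", data, 0)
    if magic != GLB_MAGIC:
        raise ValueError("not a GLB file: bad magic number")
    return {"version": version, "length": length}
```

A generated asset should report version 2 and a length equal to the file size; anything else is worth re-downloading before you open it in Blender or Three.js.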
Tips for Better Prompt Results
Text-to-3D is most effective when your prompt is specific. Vague inputs like “a chair” give the AI too much latitude. Specific inputs like “a mid-century modern dining chair with walnut legs and a cream fabric seat” produce far more targeted geometry and textures.
- Name the material: “matte ceramic”, “polished chrome”, “worn brown leather”, “transparent glass”.
- Specify the style: “low-poly”, “realistic”, “cartoon”, “sci-fi”, “medieval”.
- Include distinctive details: Seams, logos, patterns, scratches, wear — surface detail that makes the object recognisable.
- State the scale context: “a hand-held object” or “a room-sized structure” helps calibrate proportions.
You can also combine both modes: upload an image for the base geometry, then refine it alongside prompt-generated assets in the 3D editor. Prompt-to-3D and image-to-3D are complementary tools in the same pipeline.
Available Now
Prompt to 3D is live for all SphereLinks accounts today. Each generation uses one credit, the same as image-to-3D. Existing credits carry over — no new purchase required.
Head to the Generate page, switch to the Prompt tab, and describe something.
Every 3D asset starts as an idea. Now you can skip straight from idea to GLB.
Ready to generate your first 3D model?
Start for free →