Choose the plan that works best for you
250 credits ≈ 250 GPT Image 2 images
Entry-level API for indie builders
1,000 credits ≈ 1,000 GPT Image 2 images
Scalable API for production workloads
16,000 credits ≈ 16,000 GPT Image 2 images
High-volume API for teams and agencies
Or get started for free
50 credits ≈ 50 GPT Image 2 images
Get started with GPT Image 2 AI image editing
Common questions teams ask before wiring GPT Image 2 into their stack.
Still need help? Talk to a human
GPT Image 2 (model ID gpt-image-2) is OpenAI's second-generation native image model, released April 21, 2026. It succeeds GPT Image 1.5 with three step-changes: multilingual in-image text rendering at 95%+ legibility, reasoning that interprets layered prompts, and native 2K resolution with optional 4K upscale. We expose the model through one HTTPS API covering text-to-image, natural-language editing, variations, style transfer, 4K upscale and multi-reference blending.
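Because everything runs through one HTTPS API, a text-to-image call is a single POST. A minimal Python sketch — the base URL, endpoint path, and environment-variable name here are placeholders, not documented values; check your dashboard for the real ones:

```python
import json
import os
import urllib.request

# Placeholder base URL -- substitute the one from your account dashboard.
BASE_URL = os.environ.get("GPT_IMAGE_BASE_URL", "https://api.example.com/v1")

def build_payload(prompt: str, size: str = "2048x2048", n: int = 1) -> dict:
    """Request body in the OpenAI-style Images API shape the service mirrors."""
    return {"model": "gpt-image-2", "prompt": prompt, "size": size, "n": n}

def generate(prompt: str, api_key: str) -> dict:
    """POST a text-to-image request and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/images/generations",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:  # blocks 30-60 s per the FAQ below
        return json.load(resp)
```

The other endpoints (editing, variations, upscale, blending) follow the same pattern with a different path and a few extra fields.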
Three places. (1) Text inside images: GPT Image 1.5 managed ~70% legibility, and only on Latin scripts; gpt-image-2 hits 95%+ across English, Chinese, Japanese, Korean and Arabic. (2) Prompt fidelity: 1.5 starts dropping elements past six or seven; gpt-image-2 holds fifteen-element scenes thanks to reasoning integration. (3) Resolution: native 2K (vs 1K), with optional non-destructive 4K upscale. DALL-E 3 still ships for backwards compatibility, but gpt-image-2 is the model to build on.
Two limitations to know before you ship. Brand-logo reproduction is unreliable: for exact vector marks, composite them in Photoshop or Figma after generation. And generation is slower than lightweight models like FLUX, typically 30–60 seconds per image. For production pipelines that is a fair trade for the prompt fidelity, but it is not the right pick for instant interactive UIs.
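If you want to script that compositing step rather than do it by hand, the same thing can be sketched with Pillow (an assumption on our part — the answer above recommends Photoshop or Figma, and the file names below are placeholders):

```python
from PIL import Image  # pip install Pillow

def composite_logo(background: Image.Image, logo: Image.Image,
                   margin: int = 40) -> Image.Image:
    """Paste an exact logo (e.g. a PNG exported from your vector source)
    onto a generated image, anchored to the top-right corner."""
    out = background.convert("RGBA")
    mark = logo.convert("RGBA")
    # alpha_composite respects the logo's transparency, unlike a plain paste.
    out.alpha_composite(mark, dest=(out.width - mark.width - margin, margin))
    return out

if __name__ == "__main__":
    final = composite_logo(Image.open("generated.png"), Image.open("logo.png"))
    final.save("final.png")
```

This keeps the generated pixels untouched and guarantees the mark is pixel-exact.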
Yes. New accounts receive 5 free image credits on signup, no card required, and free credits refresh daily on weekdays so you can keep building while you prototype.
Yes. Every paid tier includes a royalty-free commercial licence for advertising, e-commerce, editorial, print and derivative works. You own full rights to your outputs; we retain none.
Any HTTP client works. The request schema mirrors OpenAI's Images API, so existing OpenAI SDKs (TypeScript, Python, Go, Swift) can redirect the base URL and keep running. Inside images, gpt-image-2 reliably renders English, Chinese, Japanese, Korean, Arabic, and most other major scripts.
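Redirecting an existing SDK is a one-line change. A Python sketch using the official OpenAI SDK — the base URL and environment-variable names are placeholders, not the service's documented values:

```python
import os

def generation_params(prompt: str, size: str = "2048x2048") -> dict:
    # Keyword arguments in the OpenAI Images API shape the service mirrors.
    return {"model": "gpt-image-2", "prompt": prompt, "size": size, "n": 1}

def main() -> None:
    from openai import OpenAI  # pip install openai; imported lazily

    # Point the official SDK at the compatible endpoint instead of api.openai.com.
    client = OpenAI(
        base_url=os.environ.get("GPT_IMAGE_BASE_URL", "https://api.example.com/v1"),
        api_key=os.environ["GPT_IMAGE_API_KEY"],
    )
    result = client.images.generate(
        **generation_params("A neon shop sign reading '24時間営業'")
    )
    print(result.data[0].url)

if __name__ == "__main__":
    main()
```

The TypeScript, Go and Swift SDKs take the same base-URL override in their client constructors.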