
Interactive drag-based AI image editing with photorealistic results
DragGAN is an innovative AI-powered image editing tool that makes photo manipulation as intuitive as dragging. Developed by researchers at the Max Planck Institute for Informatics, DragGAN uses a generative adversarial network (GAN) to let users click and drag specific points in an image to reshape objects, change poses, edit facial expressions, and transform scenes, all with photorealistic results in seconds.

The tool operates on the learned generative image manifold of a GAN, combining feature-based motion supervision with point tracking: motion supervision optimizes the latent code so that features around each handle point move toward its target, while point tracking re-locates the handle points after every optimization step so the drag stays on course. Unlike traditional photo editing, which requires complex tools and expertise, DragGAN reduces the process to a single intuitive action: dragging.

DragGAN is fully open source on GitHub with 36,000+ stars. It supports Windows, macOS, and Linux, with GPU acceleration via CUDA and Apple Silicon. The project originated as academic research and is best suited to AI researchers, professional designers, and creative professionals exploring generative image manipulation.
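The alternating loop described above (move the handle, then re-find it) can be sketched in miniature. The following is an illustrative Python sketch, not DragGAN's actual code: `drag_step` nudges a handle point one pixel toward its target, and `track_point` re-locates the handle after an edit by nearest-neighbor search over a local feature patch, which is the role point tracking plays in the real system. All function names, parameters, and the toy feature map are invented for illustration.

```python
import numpy as np

def track_point(feat, p, f0, radius=3):
    """Point tracking (toy version): search a small window around the
    last known handle position p for the pixel whose current feature
    vector best matches the handle's original feature f0."""
    h, w, _ = feat.shape
    best, best_d = p, np.inf
    for y in range(max(0, p[0] - radius), min(h, p[0] + radius + 1)):
        for x in range(max(0, p[1] - radius), min(w, p[1] + radius + 1)):
            d = np.linalg.norm(feat[y, x] - f0)
            if d < best_d:
                best, best_d = (y, x), d
    return best

def drag_step(p, t):
    """Motion-supervision direction (toy version): advance the handle
    point p one unit step along the straight line toward target t."""
    d = np.array(t, dtype=float) - np.array(p, dtype=float)
    n = np.linalg.norm(d)
    if n < 1:          # already at (or next to) the target
        return tuple(t)
    step = np.round(np.array(p) + d / n).astype(int)
    return tuple(step)
```

In the real method, the "step" is not applied to pixels directly: a loss on the generator's intermediate features pulls the image content toward the target, and tracking corrects for where the content actually ended up.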
Click and drag specific points to reshape and reposition image elements intuitively
Change the posture and position of humans, animals, and objects naturally
Alter facial expressions and head poses, such as smiles and head tilts, with simple drag gestures
Modify size, angle, form, and appearance of objects in images
Rotate, tilt, and transform objects as if they were 3D models
Edit real photographs through generative inversion techniques
Maintains visual consistency so that edits remain physically plausible
Generates edits in seconds with GPU-accelerated performance
Interactive GUI accessible directly in web browsers
Supports CUDA and Apple M1/M2 chips for faster processing
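Editing a real photograph (as opposed to a generated one) first requires projecting it into the GAN's latent space, the "generative inversion" mentioned above. The sketch below shows the core idea with a linear map standing in for a real generator: gradient-descend a latent code until the generator's output reconstructs the target image. The function name, the linear generator, and all parameters are assumptions for illustration, not DragGAN's API.

```python
import numpy as np

def invert_image(G, target, w_dim, steps=200, lr=0.1, seed=0):
    """Toy GAN inversion: optimize a latent code w so that G(w)
    reconstructs the target. Here G is a plain matrix, so the
    gradient of the L2 reconstruction loss has a closed form."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(w_dim)      # random initial latent code
    for _ in range(steps):
        err = G @ w - target            # reconstruction residual
        grad = G.T @ err                # d/dw of 0.5 * ||G w - target||^2
        w -= lr * grad                  # gradient-descent update
    return w
```

With a real GAN the same loop runs through automatic differentiation (often with a perceptual loss added); once the latent code is found, drag edits operate on it exactly as they do for generated images.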