Builder uses AI (Artificial Intelligence) in certain features and workflows to help customers reduce design and development time.
This document covers how Builder incorporates AI and how Builder's AI workflows engage with your data.
Your privacy is of paramount importance. Here's how we approach your data:
- Data privacy and storage. No customer data is used for training AI models.
- Minimal data collection. For features that use external Large Language Models (LLMs), we send those LLMs only the data in prompts and, if applicable, the text or content being edited. This data is necessary for the AI to help with your request. Other AI models we use are entirely internal to Builder and do not store customer data. The third-party LLMs we use do not train on your data.
- Usage. Data is strictly used for providing the requested service. There's no secondary usage.
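To make the "minimal data collection" idea concrete, here is a purely illustrative sketch of what such a request could look like. The shape below is hypothetical and is not Builder's actual API payload; it only demonstrates the principle that the prompt and, when editing, the text being edited are the only data included.

```typescript
// Hypothetical request shape illustrating minimal data collection:
// only the prompt and, when editing, the original text are sent.
interface AiEditRequest {
  prompt: string;    // the user's instruction
  original?: string; // included only when editing existing text
}

function buildRequest(prompt: string, original?: string): AiEditRequest {
  const req: AiEditRequest = { prompt };
  if (original !== undefined) {
    req.original = original; // nothing else is attached to the request
  }
  return req;
}
```

Generating new text would send only the prompt; editing existing text would add that text and nothing more.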
Current Builder.io privacy policies apply to all Builder features, including AI features. Builder does not retain any AI-generated data except in cases where we need to debug an issue.
Builder uses two types of AI models:
- Builder's proprietary, privacy-first AI models
- Third-party Large Language Models (LLMs)
Builder's proprietary AI models are fully built and trained internally and hosted on Google Cloud. As a result, these models do not engage with any third party besides Google Cloud, which Builder already uses for all of its systems.
Builder uses proprietary AI models to:
- Prepare your Figma file with the Figma plugin
- Import your design
- Make your Figma design responsive in the Visual Editor
- Generate "fast" code
Some, but not all, of Builder's AI features use fine-tuned external LLMs. Currently, Builder uses LLMs to:
- Generate images
- Generate and edit content
- Generate and edit text
- Generate "quality" code
Builder currently uses OpenAI and Anthropic Claude. We may also use open-source LLMs, such as Mistral or Llama 2. Enterprise plan customers can choose which LLMs to use, including bringing your own LLM, and can turn off LLM use entirely.
By understanding which Builder features use AI, how they use AI, and which models they use, you can make informed decisions and be intentional with your workflow.
Though Builder's AI feature set is growing, you can still create and manage your Builder content without using the AI-enhanced features.
AI features in Builder's Visual Editor have indicators including a magic wand icon and text that states that the feature uses AI.
Builder features that use AI include:
- Image generation
- Text generation and editing
- Content generation and editing
- Visual Copilot
- Generating quality code
- Any feature with a magic wand icon
To determine if a feature in the Visual Editor uses AI, you can check for copy that states it's an AI feature or hover your cursor over any field. If AI is available for that field, a magic wand icon appears on hover.
The image below shows several examples of the magic wand icon, which, when clicked, opens the AI Text Editor where you can generate or edit text. Note that this is a hover state: the icon appears only when you hover over the field.
Image generation, text generation and editing, and content generation and editing use a fine-tuned LLM to process requests. The following sections show how a prompt is submitted to the LLM and the image, text, or content are returned.
When you use the Visual Editor to generate an image, the prompt that you give in the AI Generate dialogue goes to the LLM, which in turn creates the image. No additional data is sent.
The diagram below shows this flow:
When you use the Visual Editor to generate or edit text, the prompt that you give in the Visual Editor AI dialogue goes to the LLM, which in turn creates the copy. If you're editing existing copy, the original text is included in the prompt.
The diagram below shows this flow:
When you use the Visual Editor to generate or edit content, the prompt that you give in the Visual Editor AI dialogue goes to the LLM, which in turn creates the content. For editing, the specific content being edited is converted to code and sent.
The diagram below shows this flow:
Visual Copilot is a workflow that leverages AI to complete labor-intensive tasks in moments. This workflow uses AI in the Builder Figma plugin's process, in creating a responsive design in Builder's Visual Editor from the Figma import, and in generating quality code.
When you use Visual Copilot, Builder's proprietary AI models and LLMs are involved at several distinct points in the process.
- Builder's Figma plugin sends your design to Builder's proprietary AI models. These models are hosted on Google Cloud (as is all of Builder's infrastructure) to ensure privacy and data security.
- Your design is imported to Builder.io.
At this point you can publish your site using Builder's APIs and SDKs. This option relies only on Builder's private, in-house AI models.
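As a sketch of this publishing path, the snippet below builds a request URL for Builder's public Content API, which serves published content from a CDN with no LLM involved at render time. The endpoint shape and parameters are assumptions based on Builder's public REST API, and the API key is a placeholder; check Builder's API reference for the exact format.

```typescript
// Illustrative sketch: fetching published Builder content over the
// Content API. Endpoint shape and parameters are assumptions, not
// taken from this document.
function contentUrl(model: string, apiKey: string): string {
  // Published content is served from Builder's CDN; no AI model is
  // involved when delivering it.
  return `https://cdn.builder.io/api/v3/content/${model}?apiKey=${apiKey}&limit=1`;
}

// Example usage (hypothetical public API key):
// const res = await fetch(contentUrl("page", "YOUR_PUBLIC_API_KEY"));
// const { results } = await res.json();
```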
Alternatively, you can generate code. When generating code, you have two options:
- Fast code: this uses the Mitosis compiler, without AI.
- Quality code (optional): this uses a fine-tuned third-party LLM. The third-party LLMs that Builder uses do not train on your data.
Tip: When the Builder Figma plugin generates code, it bases the code strictly on your Figma file — not on anyone else's code or designs.
For more information on AI features in Builder, visit Visual Copilot.