Training a Custom Style for Inventory Items

Overview

Maintaining aesthetic integrity in a game requires visually consistent assets. This guide offers a structured approach to training custom style AI models on Scenario, using inventory items as an example, empowering game developers to create assets that consistently reflect their unique visual style.

Several icons

Even with the Starter (free) Plan, anyone can train their own styles on Scenario. Paid subscribers (Creator and above) benefit from unlimited training sessions and greater flexibility, enabling extensive experimentation to discover the perfect models and styles for their projects.

Training a Style in Four Simple Steps

1. Select Images

2. Set Up Training and Captions

3. Test and Generate

4. Leverage Compositions

Proceed to the steps below for detailed explanations.

Step 1: Selecting Images

In this tutorial workflow, we provide the example images needed to train an RPG Prop Model, so you can practice actively and observe the results firsthand. The same method applies to most Style models; only the training dataset and captions need to change. You can download the training images via this link.

Icons for training

First, download the zip file and review the included images. For a Style model, it is important to curate datasets that follow a few general rules:

  • 10-30 high-res images (1024x1024 or more).
  • If images have a transparent background, replace it with a solid color, preferably white.
  • Images will be cropped to a 1:1 square. Scenario can do this for you, but keep it in mind when selecting images so the most important details survive the crop.
  • Choose a diverse selection of objects or scenes that exemplify the common style.
  • Avoid including duplicates or very similar images in your training set.
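The dataset rules above can be sketched as a simple pre-flight check. This is a minimal, illustrative helper, not part of Scenario itself; it assumes the image metadata (width, height, whether an alpha channel is present) has already been read with an image library of your choice.

```python
def check_training_set(images):
    """Check a Style-model training set against the rules above.

    images: list of (filename, width, height, has_alpha) tuples.
    Returns a list of human-readable warnings (empty list = looks good).
    """
    warnings = []
    # Rule: 10-30 images in the set.
    if not 10 <= len(images) <= 30:
        warnings.append(f"expected 10-30 images, got {len(images)}")
    for name, width, height, has_alpha in images:
        # Rule: high resolution (1024x1024 or more).
        if min(width, height) < 1024:
            warnings.append(f"{name}: below 1024px on the short side")
        # Rule: no transparent backgrounds.
        if has_alpha:
            warnings.append(f"{name}: transparent background; flatten onto white")
        # Rule: images will be cropped to 1:1.
        if width != height:
            warnings.append(f"{name}: not square; will be cropped to 1:1")
    return warnings
```

Checking for duplicates or near-duplicates is left out here, since it requires comparing image contents rather than metadata.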

Step 2: Setting Up Training and Captions

The next step is to set up your training. From the Homepage, click Train to open the training screen. Upload or drag and drop your images into the image upload area. Captions will be automatically created.

While the automatic captions work in most cases, it's beneficial to review and edit them for even better model quality and accuracy. Access the recommended edited captions via this link.

Keep these tips in mind when writing good captions:

  • Captions do not need to be long; shorter captions can work just fine.
  • Accurately describe the images as you would explain them to someone else: subjects, items, colors, relevant details...
  • For style training, avoid style-specific terms like "cartoon illustration," "painting," or "3D render."
  • Avoid phrases like “in this image” or “the image shows.”

When writing captions, it can be helpful to vary their length—some short and some longer. This provides more flexibility when prompting the model.
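The caption tips above can be turned into a quick linter to run over your edited captions before training. This is a hedged sketch: the phrase lists are illustrative examples drawn from the tips, not an exhaustive blocklist.

```python
# Phrases to avoid, per the tips above (illustrative, not exhaustive).
META_PHRASES = ("in this image", "the image shows")
STYLE_TERMS = ("cartoon illustration", "painting", "3d render")

def lint_caption(caption):
    """Return a list of issues found in a single caption (empty = OK)."""
    lowered = caption.lower()
    issues = [f"meta phrase: '{p}'" for p in META_PHRASES if p in lowered]
    issues += [f"style term: '{t}'" for t in STYLE_TERMS if t in lowered]
    return issues
```

For example, `lint_caption("a red health potion with a cork stopper")` passes cleanly, while a caption beginning "The image shows a painting of..." would be flagged twice.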

Example of good captions

Once you've applied captions and named your model, you're ready to begin training. At this stage, there's no need to alter the default training settings; the Style preset will work just fine. Simply hit "Start Training" and wait for the model to complete training before moving on to the next step.

Alternatively, you can access the same model directly on Scenario (trained with the same parameters) via this link, to bypass the wait time.

Step 3: Testing and Generating

Basic Prompt Advice

The next step is to test your model. Navigate to the Generate section from the Homepage and select your newly trained model (if you waited for it to train). Once loaded, you can test it by prompting a few props. This model tends to generate most effectively with the LoRA Component Influence set at 0.95. Here are some examples of what to try:

‘a leather cap’

‘iron boots’

‘a health potion’

‘bandages’

A health potion and a leather cap.

Note: If you find that you're getting unwanted details (such as a character face), you can add the following to the Negative Prompt Box:

character

face

Using the Style Reference

Finally, you can further guide the AI using a reference image for the items you are trying to generate. Simply upload it to the Reference Image section, then switch the reference Mode to Style Reference with the influence set to roughly 30. For best results, crop the reference image to a 1:1 ratio and add a descriptive prompt.
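Cropping a reference image to 1:1 just means taking the largest centered square. A small helper for computing that crop box is sketched below; the (left, top, right, bottom) tuple it returns matches the box format used by common image libraries such as Pillow's `Image.crop`.

```python
def center_square_crop(width, height):
    """Compute a centered 1:1 crop box for an image of the given size.

    Returns (left, top, right, bottom) pixel coordinates.
    """
    side = min(width, height)          # the square's edge length
    left = (width - side) // 2         # trim excess width evenly
    top = (height - side) // 2         # trim excess height evenly
    return (left, top, left + side, top + side)
```

For a 1920x1080 reference image, this yields the box (420, 0, 1500, 1080), keeping the central 1080x1080 region.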

Step 4: Leveraging Compositions

Another effective technique for refining your style is to blend your model with others. This approach allows you to combine the strengths of different models, creating a unique and more refined style. For instance:

- Mix two or more "style" models (up to 5)

- Combine a "style" model with a "subject" model

To leverage this option, navigate to "New Models" and select Start Composing. In the example below, we have combined the model we trained above (RPG Inventory) with another model called Blocky Cartoons. The RPG Inventory model is set to 0.50 influence and the Blocky Cartoons model to 0.65 (feel free to adjust as needed).

Composing two models

Composing models unlocks a wealth of possibilities, enabling you to create custom styles tailored to your needs. Take the time to try different strength settings and combinations of models to see which you prefer.

Conclusion

Style training is an essential skill for AI-enabled artists and developers. If your initial training runs don't yield the desired results, refine your model and approach: experiment with a different training dataset (add or remove images) or adjust the captions. With practice, you'll quickly become more proficient at achieving the styles you want.
