In the competitive gaming market, user-generated content (UGC) and community modding have always been cornerstones of loyal and thriving player communities.
This article will share an easy-to-reproduce workflow focused on enabling players to create custom avatars using their own images in a studio's unique style. This workflow can be automated through our API, providing an opportunity to add value to your communities or integrate directly into your games for maximum player customization.
Enhancing the user experience with AI support is straightforward. In the following steps, you will learn how to set up a style model, guide generations with reference images and prompts, and automate the workflow through our API.
In this workflow we will provide a Platform Model to follow along with. This model is pretrained to generate 2D illustrated characters in an RPG fantasy theme. The model is named Cartoon RPG Avatars and can be found by following this link.
You can also follow along with your own model; in that case, you may need to test the settings provided and adjust them to accommodate your model. This workflow works best with a character style model, although any style model could potentially work.
Go ahead and click Start Generating on the model page. In this workflow, we are going to focus on utilizing the Reference Image feature.
To start, upload a 1024x1024 image of a person. You can also crop an image after uploading it, but the final crop must be 1024x1024. This workflow is designed primarily for photo references; however, you can absolutely use less realistic character references. We have provided a reference image below.
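If you plan to automate this step, you will likely want to pre-crop player images before uploading them. Here is a minimal sketch of a centered square crop: it computes the crop box for the largest centered square, which you would then resize to 1024x1024 with your image library of choice (the function name and approach are ours, not part of the platform).

```python
def center_crop_box(width, height):
    """Compute a centered square crop box (left, top, right, bottom)
    for a source image, sized to the largest square that fits.
    Resize the resulting square to 1024x1024 before uploading."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)
```

For example, a 2048x1024 photo yields the box (512, 0, 1536, 1024), which is already 1024 pixels on each side.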
Once you have uploaded a reference image, you will see several options to choose from for your Reference Image Mode. Select Character Reference. Next, try generating an image without a prompt, using only the reference image. The result should transfer the style of the model you trained without significantly altering the character.
The closer a reference image is in style to the model, the less influence your Character Reference will need. The more the reference differs from the model, the more influence it should have. Typically, an influence of 20-50 is a good range.
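If you automate this, you may want a default rule rather than hand-tuning every image. The linear mapping below is purely our assumption, a starting heuristic that stays inside the recommended 20-50 range; tune it against your own model.

```python
def suggest_influence(style_similarity):
    """Map a rough style-similarity score (0.0 = very different from
    the model's style, 1.0 = nearly identical) to a Character
    Reference influence value in the recommended 20-50 range."""
    similarity = max(0.0, min(1.0, style_similarity))
    return round(50 - 30 * similarity)
```

A photo reference used with an illustrated model would score low on similarity and land near 50; a stylized reference close to the model's look would land near 20.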
Once you are satisfied with the quality of your style transfer, you may want to include some extra prompts. For this workflow, prompts can be an excellent way to apply character classes, outfits and accessories, and more.
The Character Reference already acts as a kind of prompt, causing the image you've uploaded to behave like a prompt for the model. This means you don't need to add many additional prompts. A typical prompt addition might look like:
a witch
with cat ears
a paladin in armor
Try testing a few different prompts with a few different reference images. Adding prompts may change how much the reference image impacts the final design; if so, adjust the settings slightly.
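If you offer players pre-determined options rather than free-form text, a small prompt builder keeps the additions short and consistent. This helper is a sketch of that idea (the fragment wording mirrors the examples above; the function itself is ours).

```python
def build_prompt(character_class=None, accessories=()):
    """Assemble a short prompt from pre-approved fragments:
    an optional character class plus any accessories."""
    parts = []
    if character_class:
        parts.append(character_class)
    parts.extend(f"with {item}" for item in accessories)
    return ", ".join(parts)
```

For example, `build_prompt("a witch", ["cat ears"])` produces "a witch, with cat ears", keeping the prompt minimal so the Character Reference stays dominant.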
It is also possible to guide the final output substantially more by using multiple references. These steps are slightly more advanced, and typically a bit better as a manual workflow rather than an automation. However, some users may find that by providing pre-determined image references for the structure of an image, they can unlock more options for their community.
Using Depth Maps to Guide Structure
The easiest way to guide the output toward a very particular image structure is to use Depth Mode. For the following steps, feel free to use our provided depth reference image.
To access this, switch the image mode to ControlNet + Character Reference. Drag and drop your current reference image into the bottom section (Character Reference), then upload a picture of someone in a pose as the ControlNet input. We have provided an example pose to get you started.
The generation will follow the overall structure of the depth map while still using the character reference to guide the final image.
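When you move this step to the API, the generation request combines both references in one call. The payload builder below is only illustrative: the field names are our assumptions, not the exact Scenario API schema, so check the support documents for the real parameter names.

```python
def build_generation_payload(model_id, character_ref_id, depth_ref_id,
                             prompt="", influence=35):
    """Sketch of a ControlNet + Character Reference request body.
    Field names are illustrative placeholders, not the exact
    Scenario API schema -- verify against the API docs."""
    return {
        "modelId": model_id,
        "prompt": prompt,
        "characterReference": {
            "imageId": character_ref_id,
            "influence": influence,  # 20-50 works well, per the guidance above
        },
        "controlnet": {
            "mode": "depth",
            "imageId": depth_ref_id,
        },
    }
```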
Due to the nature of Depth Mode, it does not add accessories that wouldn't make sense with the original depth map. So, in this case, we need to guide it toward adding our witch hat. This can be achieved easily by clicking the generated image you like best and choosing Use as Reference. It will be loaded as an image reference and set to Image to Image mode.
Next, click the little pencil symbol above the image. This will allow you to sketch new details. Using the color of your choice, sketch out a rough drawing of a hat. It does not need to be a good drawing - it should serve to show the general shape, size and color of the object you're trying to generate. Set the Image Influence to 45.
This method will slightly shift the style closer to that of the base model. To avoid that, choose the Image to Image + Character Reference image mode and upload your original reference image into the Character Reference slot. This will guide the output toward retaining more of the original character.
For editing smaller details, check out our canvas feature as outlined in this workflow.
Once you have tested the workflow above on test images and have defined whether you want to allow players to customize or provide them with pre-determined prompts, you are ready to leverage our API. Everything you can do on the Scenario web app is fully achievable through the API.
As a brief overview, ensure your team sets up the following capabilities through the API: uploading player reference images, generating with Character Reference (and, optionally, ControlNet or Image to Image), and applying any pre-determined prompts.
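Authenticating is the first step of any of these calls. The snippet below assumes a Basic-auth key/secret pair, which is one common REST pattern; confirm the actual authentication scheme and base URL in the support documents before using it.

```python
import base64

def build_auth_header(api_key, api_secret):
    """Encode an API key/secret pair into a Basic Authorization
    header. Assumes Basic auth -- confirm the scheme in the
    Scenario support docs before relying on this."""
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

Pass the resulting header dict with every request your integration makes, alongside a JSON body such as the generation payload described earlier in this workflow.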
For more information on how to leverage our API, check out our support documents. If you encounter any issues, reach out to contact@scenario.com.
Now you have the recipe to supercharge your community and UGC experience. By integrating our API and following the workflow outlined in this article, you have unlocked a world of potential for your users and your community initiatives. Embrace the power of customization and personalization to foster deeper connections and engagement within your gaming community. With our tools and your creativity, this is just the beginning of an exciting journey toward building a more immersive, thriving, and loyal player base.