Dev Notes #001: Future Plans and Beyond

đź‘‹ Hello, everyone! Cagliostro Research Lab DevRel here~

In this “first” dev note, we would like to share some details about our current situation and our plans for the future. Hopefully this will shed some light on what we have been doing for the past months and the problems we are facing. Let’s start off with our model, Animagine.

Where is Animagine?

As of now, our latest iteration of Animagine is Animagine-XL v3.1, released on March 18th. After the successful launch of that model, we welcomed two new model trainers, Kayfaahaarukku and Raelina, who will take part in creating new models for our team going forward. Over the past months they have showcased their personal work, such as Kayfa’s AingDiffusion and its successor, UrangDiffusion, and Raelina’s Rae series. We hope you can entrust them with crafting our next models!

[Image: Cecilia Immergreen from hololive (English branch), generated by Kayfaahaarukku with UrangDiffusion-XL v2.0]
[Image: Sento Isuzu from “Amagi Brilliant Park”, generated by Raelina with Rae Diffusion-XL v2.0]

During this period, we were, and still are, looking into models that could serve as the base model for our next entry. We posted a little “teaser” back in July via X/Twitter, while we were testing a PoC to find the best parameters and method for training our next entry. We chose Animagine-XL v2 as the base model for this PoC, with characters from “Sousou no Frieren” as its data sample.

[Image: Frieren from “Sousou no Frieren”]
[Image: Fern from “Sousou no Frieren”]

Another reason for our absence of over half a year is that we have no intention of rushing our development in order to “compete” with newer SDXL anime models on short notice. We are truly grateful that our training method has inspired so many people and startups to train their own models. Instead, we are focusing on improving and perfecting our model training process: researching more effective training methods and an even more sophisticated data curation process to ensure the dataset’s quality. As of now, our data curators are still working to provide the best quality images, aiming for quality over quantity. We are pushing beyond the boundaries in order to provide a better open-source anime model for the community.

We would also like to address how difficult it is to find a good open-source base model. Recently, several “open-source” models have shipped with licenses that are not truly open source, which would go against our principles if we were to use them. On the other hand, some new base models cannot even be fine-tuned, so we have no way to create any model(s) based on them. These matters complicate our progress toward training a truly open-source anime model.

However, looking at the latest developments in models and related tooling, we are currently trying to figure out which one is best to use and the best way to train our model(s) from it. With that in mind, we have several plans.

What’s Next?

After several internal discussions within our team, we have finally come to the decision to train the following models. They are listed in no particular order and will be trained as each one becomes more viable than the others. Note that this list is not final and may change over time as “the next big thing” appears here and there.

  1. Animaestro (Large and/or Medium) model line, which uses Stable Diffusion 3.5 (Large and/or Medium) as its base model. We decided to use another name to avoid confusion with the Animagine model line, which is based on SDXL. The name can be read in two ways, both of which align with our goals: 1) a combination of the words “Animagine” and “Cagliostro”, our project name and our team name respectively; and 2) a combination of the words “Anime” and “Maestro”, reflecting our focus on bringing an even better anime-styled model than before;
  2. AniNatura (tentative), which expands Animagine’s capability to handle natural-language prompts;
  3. Animagine-XL “Beyond” project (tentative), exploring further possibilities for SDXL-based Animagine beyond v3.1. This may include Animagine-XL v4 and onwards.

We are excited to take things one step at a time. At the same time, however, we are facing a big hurdle that holds us back from exploring further.

Funding the Projects

Training a model, let alone a big one, costs a hefty amount of money, and our team has very limited resources and budget. We partnered with several startups to launch Animagine-XL 3.0 and 3.1. We are still reaching out to more companies, including those that backed our training costs in the past, to fund our projects via sponsorships.

If you are interested in helping us cover the training costs, it would mean a lot to us. Your support would allow us to train more models in the future, and we will certainly put it to good use (e.g., renting cloud GPU and storage services).

You may donate to us via our Ko-fi page, since we have limited options available in our country. Donations of any amount are appreciated!


Afterword

Thank you for your attention. We are certainly enthusiastic about the next steps in these models’ development. Thank you also for sticking around with us, even though we haven’t been active for a while. We would like to bring you updates more regularly, granted there is progress to share.

Cheers, for the future~

[Image: Diluc from Genshin Impact]