Hello, folks! It’s been a while. I wish I could say there’s been more progress, but things have been very busy for me lately. I’ll be a new dad within a month or so, so there was a lot of prep work and classes for that. But I’ve had some scattered time to work on this, and while the work hasn’t had much immediate reward, there are some great things to show off.

Graphics!

First of all, some of the changes are difficult to manage exclusively in text files. For example, Codec conversations have subtitles listed in codec.dat, but the animations, sound, lip syncing, and timing are all in vox.dat. In my work on demo.dat, I learned the structure for the dialogue and timing, and it’s a match for vox.dat as well. And while editing demo files, it came up frequently that I needed to add longer dialogue than I had room for, meaning I’ll need to split one dialogue line in Japanese into two over the same amount of time.
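To give a rough idea of what that split looks like, here’s a made-up sketch (the field names are invented for illustration and aren’t the real demo/vox structures): one timed line becomes two lines that together cover the same span.

```python
from dataclasses import dataclass

# Hypothetical structure for illustration only -- not the actual demo/vox format.
@dataclass
class SubtitleLine:
    text: str          # subtitle text
    start_frame: int   # frame the line first appears on
    duration: int      # how many frames it stays on screen

def split_line(line: SubtitleLine, first_text: str, second_text: str,
               ratio: float = 0.5) -> tuple[SubtitleLine, SubtitleLine]:
    """Split one timed line into two that together cover the same time span."""
    first_len = max(1, round(line.duration * ratio))
    second_len = line.duration - first_len
    first = SubtitleLine(first_text, line.start_frame, first_len)
    second = SubtitleLine(second_text, line.start_frame + first_len, second_len)
    return first, second
```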

The split itself is easy, but with the subtitles and timings spread across two files rather than living exclusively in radio.dat, it’s a bit difficult to keep track of everything without introducing errors. So, to test a better editing method, I’ve been building a dialogue editor GUI that can handle this, and also preview the dialogue as best I can.

The first step was shopping around for the right UI components. I tried a few options and actually got quite far prototyping in tkinter and a couple of others, but I got a little frustrated with some of the limitations and ended up moving to Qt via PySide6.

It’s not my favorite, mostly because I think the designer could be better, but it at least lets me design the interface and create subroutines as needed once the graphical elements are in place.

It’s not available yet, but it will be once the basic functions are hooked in. For now, here’s a basic video showing it as it was maybe a couple of months ago. (There’s a better video later on.)

Other improvements

A lot of the coding time has gone into demoClasses.py, which is meant to be an all-inclusive resource library for working with demo and vox files. A lot of work went into finalizing my understanding of the demo and vox container formats, so that they can essentially be represented as Python class objects that operations can be performed on. There were also requests to extract and replace VAG audio files, which are their own headache. Here are a few quick asides.

  1. The classes will have different output methods (JSON and XML) to pull the files apart into a usable format, and also so they can be recompiled. It’s not all complete yet, but it will allow writing to a text file in a way that can be edited (there’s a rough sketch of the idea after this list). It’s a little unwieldy, since storing raw data as hex text makes the files much bigger, but it’s a quick way to extract them in a uniform, workable form.

  2. The final recompile scripts haven’t been moved over from the old scripts yet, and there’s still a lot of test code, but the library was also written to support active edits in the GUI. I like to think of a demo or codec call as being actively edited and then inserted back into the main group for export; that way offsets can be preserved as needed.

  3. VAG exports are dicey because of the multiple different formats found in the game. There are different bitrates, as well as stereo vs. mono files, and oftentimes there really is no standardization in how the headers are written.

    Most of the implementation assumes a certain couple of things (VAGi for interleaved audio, VAGp for mono, specific chunk sizing, etc.); there’s a rough header-sniffing sketch after this list. Extraction and playback are working with a Python implementation of ffmpeg (and ffplay). It’s not the best, and it caused a lot of headaches, but it works and thankfully didn’t involve writing my own audio converter.

    More work will be done on injecting other audio files (conversion, etc.), but for now it’s still a work in progress.

    I went back and recorded an ACTUALLY CURRENT video, which also includes some audio playback! Sorry about the quality; the original didn’t have the weird blurring at the start until it was uploaded. Oh well.

    I’m not sure what the popping on some tracks is, but playback is probably good enough for me. I did some work beyond that on how to display the subtitles based on frame times, but it hasn’t led to a final version yet.

  4. Lastly, I have a new action item to put together a setup script that gets everything ready for any layman to work with the translation tools. That means starting to finalize formatting, deciding how I recommend the tools be used, and writing more documentation. Hopefully that’s easier to squeeze into the next few days/weeks I have to work before my life gets really busy again, but we’ll see how far I can get.
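To make the first aside a bit more concrete, the JSON output works roughly like this (a simplified sketch, not the actual demoClasses.py code): raw binary payloads get stored as hex strings so everything stays editable as text and can still be rebuilt byte-for-byte.

```python
import json

# Simplified illustration of the hex-in-JSON idea; not the real demoClasses.py code.
def chunk_to_json(name: str, raw: bytes) -> str:
    """Dump a chunk as editable text; the raw bytes become a hex string."""
    return json.dumps({"name": name, "data": raw.hex()}, indent=2)

def chunk_from_json(text: str) -> tuple[str, bytes]:
    """Rebuild the original bytes from the (possibly edited) text."""
    obj = json.loads(text)
    return obj["name"], bytes.fromhex(obj["data"])

# Round trip: the rebuilt bytes match the originals exactly.
name, data = chunk_from_json(chunk_to_json("subtitle", b"\x01\x02\xff"))
assert data == b"\x01\x02\xff"
```

The downside is exactly the size bloat mentioned above (every byte becomes two hex characters), but the files stay diffable and hand-editable.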
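And for the VAG headaches in the third aside, the header sniffing boils down to something like this. The offsets below follow the common VAGp layout (big-endian data size at 0x0C, sample rate at 0x10, name at 0x20); since the game’s files don’t always follow that layout, treat this as a best-effort sketch rather than the project’s actual code.

```python
import struct
import subprocess

def sniff_vag_header(raw: bytes) -> dict:
    """Best-effort read of a VAG header; the game's files don't always comply."""
    magic = raw[0:4]
    if magic not in (b"VAGp", b"VAGi"):
        raise ValueError(f"unrecognized magic: {magic!r}")
    data_size, sample_rate = struct.unpack(">II", raw[0x0C:0x14])
    return {
        "interleaved": magic == b"VAGi",  # VAGi = interleaved (stereo), VAGp = mono
        "data_size": data_size,
        "sample_rate": sample_rate,
        "name": raw[0x20:0x30].rstrip(b"\x00").decode("ascii", "replace"),
    }

# Quick playback check, assuming the local ffplay build understands PS1 ADPCM
# (not all builds do) -- otherwise convert to WAV first.
def play(path: str) -> None:
    subprocess.run(["ffplay", "-autoexit", "-nodisp", path], check=False)
```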

The future

I’m definitely not giving up on this project, but I’ve got less time to dedicate to it moving forward than I thought. I’ll try to make documentation and usability the priority for now, and see if I can spare an hour here and there just to look at the code and not forget what I was actively doing ;)

Also, eventually I’ll get email updates working. I still haven’t put the time into that, but it’s coming! If you signed up for updates on the project, that’ll be one of the next things I get moving.

Thanks again to any of y’all who are reading or pushing updates in Discord.