I'm not certain if this should go in Suggestions or Development Discussion since it is regarding the database, but there is potential for some improvement that Steve might want to consider.
Saving the 1.8.0 stock game on my machine takes around 30 seconds. During this time the journal file is repeatedly created and destroyed with heavy disk activity. This seems excessive for an 80MB file.
This post explains what is happening.
I am using a single transaction for each table, so all rows are committed at once.
In an attempt to simulate this behaviour, I ran some tests using sqlite3's command line utility and the stock 1.8.0 database.
Test script:
# create test files, stripping query optimization data.
sqlite3 AuroraDB.db '.dump' | grep -v 'sqlite_stat' | grep -v 'ANALYZE' > dump.prime.txt
# insert DELETE FROM commands.
sed -E 's/CREATE TABLE (IF NOT EXISTS)?([^(]+)\(/DELETE FROM\2;\n&/' < dump.prime.txt > dump.single.txt
# break into multiple transactions.
sed -E 's/(DELETE FROM|CREATE TABLE)/COMMIT;\nBEGIN IMMEDIATE;\n&/' < dump.single.txt > dump.multi.txt
# prime temporary database
sqlite3 dump.db < dump.prime.txt
# simulate saving a game under various conditions
sqlite3 dump.db 'PRAGMA journal_mode=delete;'
time sqlite3 dump.db < dump.multi.txt
time sqlite3 dump.db < dump.single.txt
sqlite3 dump.db 'PRAGMA journal_mode=wal;'
time sqlite3 dump.db < dump.multi.txt
time sqlite3 dump.db < dump.single.txt
# clean up
rm dump.*
Results:
- Baseline (two transactions per table, one to erase followed by another to fill — the closest simulation of the current save behaviour I could produce): 25.742 seconds.
- Single transaction for the entire operation: 3.489 seconds, more than 7x faster than baseline.
- Multiple transactions in WAL mode: 8.712 seconds, nearly 3x faster than baseline.
- Both optimizations combined: 2.876 seconds, nearly 9x faster than baseline.
3-4 seconds per save is fast enough that an auto-save option becomes viable, including save-on-exit.
Further testing (by killing Aurora while it was saving) revealed another issue:
Currently, if a save is interrupted (game or computer crash, power failure, etc.), SQLite can't correctly recover the database on the next start. Some tables will be updated to the new state while others remain in the old state. Because the deletes and inserts are separate transactions, whichever table is being written at the time will be left blank*. Using a single transaction per save guarantees that either the new state is completely written out or the old state is automatically restored when Aurora is next started.
*While testing this I managed to delete every ship in the default game, and another time deleted the event history. I'm not certain what got nuked by the other attempts, only that the database got smaller. When restarted, Aurora reported no errors in any case. Sadly, I never managed to accidentally Sol.
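The atomicity guarantee is easy to check with the sqlite3 shell directly. This sketch (illustrative table name, not Aurora's schema) closes the connection with a transaction still open, which SQLite rolls back automatically — the same mechanism that recovers the database from a hot journal after a crash:

```shell
# Create a throwaway database with three rows (Ships is an illustrative name).
sqlite3 crashtest.db 'CREATE TABLE Ships(ShipID INTEGER); INSERT INTO Ships VALUES (1),(2),(3);'
# Delete everything inside a transaction, then exit WITHOUT committing.
# SQLite rolls the transaction back when the connection closes.
sqlite3 crashtest.db 'BEGIN IMMEDIATE; DELETE FROM Ships;'
sqlite3 crashtest.db 'SELECT COUNT(*) FROM Ships;'   # prints 3: the old state survived
rm crashtest.db
```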
Suggestions:
- Use a single transaction per save so that auto-saving becomes practical. I recommend running all of the DELETEs before running any of the INSERTs to minimize fragmentation.
- Auto-save periodically during long execution runs.
- Auto-save on exit.
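A minimal sketch of the suggested save shape — one transaction, with every DELETE ahead of every INSERT (the table names are illustrative, not Aurora's actual schema):

```shell
# Build a throwaway database, then perform a "save" as one atomic transaction.
sqlite3 savetest.db <<'EOF'
CREATE TABLE Ships(ShipID INTEGER);
CREATE TABLE Populations(PopID INTEGER);
BEGIN IMMEDIATE;
DELETE FROM Ships;                     -- all DELETEs first...
DELETE FROM Populations;
INSERT INTO Ships VALUES (1),(2);      -- ...then all INSERTs
INSERT INTO Populations VALUES (10);
COMMIT;                                -- the save becomes visible atomically here
EOF
sqlite3 savetest.db 'SELECT COUNT(*) FROM Ships;'   # prints 2
rm savetest.db
```

If the process dies anywhere between BEGIN and COMMIT, the next open sees the complete old state rather than a half-written mix.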
Two minor suggestions:
- Set the page size to 4096 for around 10% faster disk performance on modern hardware. This setting is persistent and requires no code changes.
- The database could stand to be VACUUMed once in a while. This is equivalent to the old VB "compact database" command.
If anyone is putting together a launcher like VB Aurora had, these last two are low hanging fruit.
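For reference, both maintenance steps can be done from the sqlite3 shell with the game closed (back up first). One detail worth knowing: the new page size only takes effect once a VACUUM rebuilds the file, after which it persists.

```shell
# Set the page size and compact the database in one pass (run while Aurora
# is closed). page_size is persistent once the VACUUM rebuild completes.
sqlite3 AuroraDB.db 'PRAGMA page_size=4096; VACUUM;'
sqlite3 AuroraDB.db 'PRAGMA page_size;'   # should now report 4096
```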