I have worked as a DBA for almost 20 years, mainly with Oracle and some MySQL, but I did some searching on SQLite. It has limited functionality. However, here are some things that may speed up saving. It should not take 3-5 minutes to refresh 60,000 rows; it should take a few seconds at most.
Here are a variety of tips for improving inserts. I'll highlight the ones I think will help the most:
https://stackoverflow.com/questions/1711631/improve-insert-per-second-performance-of-sqlite
Note: Steps 1-6 all work together.
1. Turn off auto commit. Use one transaction for the inserts and one transaction for the deletes. This will dramatically improve performance. This is the case
in all SQL databases when doing bulk DML.
https://www.tutorialspoint.com/sqlite/sqlite_transactions.htm
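A minimal sketch of this in Python's built-in sqlite3 module (the table and column names are made up for illustration):

```python
import sqlite3

# Open in autocommit mode so we control transactions explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE body (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(i, "body-%d" % i) for i in range(60000)]

# One transaction for all 60,000 inserts instead of one commit per row.
conn.execute("BEGIN")
conn.executemany("INSERT INTO body (id, name) VALUES (?, ?)", rows)
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM body").fetchone()[0]
```

Without the explicit BEGIN/COMMIT, each INSERT is its own transaction, which forces a sync to disk per row; that is where the minutes go.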
2. Deletes: Use the truncate optimization. How to do it is explained at the bottom of this page.
This should make the deletes run in seconds.
https://www.sqlite.org/lang_delete.html
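A sketch of the truncate optimization, again using Python's sqlite3 and a hypothetical body table:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE body (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("BEGIN")
conn.executemany("INSERT INTO body VALUES (?, ?)",
                 [(i, str(i)) for i in range(1000)])
conn.execute("COMMIT")

# A DELETE with no WHERE clause (and no triggers on the table) lets
# SQLite use the truncate optimization: whole pages are dropped
# instead of rows being removed one at a time.
conn.execute("DELETE FROM body")

remaining = conn.execute("SELECT COUNT(*) FROM body").fetchone()[0]
```

The key point is that any WHERE clause or any trigger on the table disables the optimization, so clear the whole table in one bare DELETE.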
3. Use bulk inserts. The syntax for SQLite is a little unusual. With Oracle you generally have diminishing returns after bundling about 100 records together.
At 100 rows per statement you cut your 60,000 insert statements down to 600.
https://stackoverflow.com/questions/1609637/is-it-possible-to-insert-multiple-rows-at-a-time-in-an-sqlite-database
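One way to batch 100 rows per statement, sketched with sqlite3's multi-row VALUES syntax (table name is made up; note SQLite's default limit of 999 bound parameters per statement, so 100 rows of 2 columns fits comfortably):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE body (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(i, "body-%d" % i) for i in range(600)]
BATCH = 100  # 100 rows per INSERT: 60,000 rows would need only 600 statements

conn.execute("BEGIN")
for start in range(0, len(rows), BATCH):
    chunk = rows[start:start + BATCH]
    # Multi-row VALUES syntax (supported since SQLite 3.7.11):
    # INSERT INTO body (id, name) VALUES (?, ?),(?, ?),...
    placeholders = ",".join(["(?, ?)"] * len(chunk))
    flat = [value for row in chunk for value in row]
    conn.execute("INSERT INTO body (id, name) VALUES " + placeholders, flat)
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM body").fetchone()[0]
```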
4. Use a pragma to increase the cache size to improve bulk inserts. See the last tip at this link:
https://stackoverflow.com/questions/1711631/improve-insert-per-second-performance-of-sqlite
I couldn't get any gain from transactions until I raised cache_size to a higher value, i.e. PRAGMA cache_size=10000;
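Setting and reading the pragma back, as a quick sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Positive values are a page count; negative values are a size in KiB.
conn.execute("PRAGMA cache_size = 10000")
cache = conn.execute("PRAGMA cache_size").fetchone()[0]
```

The pragma is per-connection, so set it right after opening the connection, before the bulk work starts.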
5. Do NOT use row-level triggers. Get rid of them if you have them. If you have them you can't use the truncate optimization, and they will really slow down
bulk inserts.
6. Index all foreign keys. The child table with a foreign key back to the parent needs to have that field indexed. If not, it causes
full table scans when deleting from the parent. All SQL databases are like this.
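A sketch of the child-side index, using hypothetical system/body tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE system (system_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE body (
    body_id   INTEGER PRIMARY KEY,
    system_id INTEGER REFERENCES system(system_id),
    name      TEXT
);
-- Index the foreign key column on the child side so a delete on
-- system does not have to scan all of body to find matching rows.
CREATE INDEX idx_body_system ON body(system_id);
""")

indexes = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
```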
7. Normalize data: If you are using child tables, normalize the data so you don't repeat it. Use a number field with a counter (called a surrogate key)
for the primary and foreign key link. You then use a second unique key to identify the row. You use surrogate keys so you only have to store a number
in the child table. Normalizing the DB can save you a lot on space and on how long it takes to write out to a file.
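A sketch of a surrogate-key schema (engine/ship are hypothetical names for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE engine (
    engine_id INTEGER PRIMARY KEY,   -- surrogate key: just a counter
    name      TEXT NOT NULL UNIQUE   -- second unique key that identifies the row
);
CREATE TABLE ship (
    ship_id   INTEGER PRIMARY KEY,
    engine_id INTEGER REFERENCES engine(engine_id)  -- child stores only a number
);
""")
conn.execute("INSERT INTO engine (name) VALUES ('Ion Drive')")
eid = conn.execute(
    "SELECT engine_id FROM engine WHERE name = 'Ion Drive'").fetchone()[0]
conn.execute("INSERT INTO ship (engine_id) VALUES (?)", (eid,))

# Joining back gives the full name without ever repeating it in ship.
name = conn.execute("""
    SELECT e.name FROM ship s JOIN engine e ON e.engine_id = s.engine_id
""").fetchone()[0]
```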
Referencing this:
"Everything is up and running and the Aurora DB is no longer Access, but SQLite. It loads a little more quickly but the real time saver is writing to the DB. I can delete the entire system body table (60,000 rows with 48 fields) and then re-insert all the records from the program in about 15 seconds. While that isn't exactly super-fast it is much better than Access. It probably means an actual save game will take a few minutes (maybe 3-5 mins once I code all the tables) but as some turns take that long now, this should be a huge improvement, especially as it means no disk access during play. "
Every row does not need 48 fields. Use child tables and lookup tables. You don't need to write out the engine name over and over again. Same with technology names, planet names, etc. Even things the player can name can get added to lookup tables. It sounds like your save file is way too big.
8. Use lookup tables with surrogate keys as explained in #7
-- anything that is repeated, such as technologies, ship types, engine parts, and generated planets, should be kept in a lookup table.
The table will be: surrogate key (number), then the rest of the data.
This way you only store that number instead of the full name.
https://stackoverflow.com/questions/11304128/how-do-i-create-a-surrogate-key-on-a-database-with-composite-foreign-keys
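A sketch of the usual get-or-insert pattern for a lookup table (tech is a hypothetical name; the helper function is mine, not part of any API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tech (tech_id INTEGER PRIMARY KEY, name TEXT NOT NULL UNIQUE)")

def tech_key(name):
    # INSERT OR IGNORE adds the name only on first use; after that the
    # UNIQUE constraint makes it a no-op and we just read the key back.
    conn.execute("INSERT OR IGNORE INTO tech (name) VALUES (?)", (name,))
    return conn.execute("SELECT tech_id FROM tech WHERE name = ?",
                        (name,)).fetchone()[0]

first = tech_key("Jump Drive")
second = tech_key("Jump Drive")   # reuses the existing surrogate key

rows = conn.execute("SELECT COUNT(*) FROM tech").fetchone()[0]
```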
Other option: To keep everything in memory, use the in-memory database setting. All changes get inserted and updated (in memory anyway), and then the 'save'
just dumps everything to disk. This may be a lot faster, and less code, than doing steps 1-8 while keeping a separate code base to manage everything in memory.
9. Instead of using separate code to store data in memory, it may be easier to configure SQLite as an in-memory database:
https://www.sqlite.org/inmemorydb.html
That way you have one code base for all changes.
Use the backup API to save your in-memory database to disk. This should be faster than running the SQL statements:
https://www.sqlite.org/backup.html
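A sketch of the in-memory-plus-backup approach in Python (Connection.backup wraps SQLite's online backup API and needs Python 3.7+; the body table is hypothetical):

```python
import os
import sqlite3
import tempfile

# All gameplay changes go against an in-memory database.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE body (id INTEGER PRIMARY KEY, name TEXT)")
mem.executemany("INSERT INTO body VALUES (?, ?)",
                [(i, "b%d" % i) for i in range(100)])
mem.commit()

# 'Save game' = one pass of the backup API to a file on disk.
save_path = os.path.join(tempfile.mkdtemp(), "save.db")
disk = sqlite3.connect(save_path)
mem.backup(disk)   # copies the whole in-memory DB to the disk file
disk.close()

# Reopen the file to prove the save landed on disk.
saved = sqlite3.connect(save_path)
count = saved.execute("SELECT COUNT(*) FROM body").fetchone()[0]
```

Loading a save is the same call in the other direction: open the file, then backup from the disk connection into a fresh in-memory one.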