Starting a new project here, and I'm going to be storing a LOT of user data. I'm trying to make the system scalable from the start, so I'm considering the fairly new idea of storing each user's data (essentially a list of the files they have stored) in a separate SQLite database kept in a user-specific directory, rather than in one huge MySQL table keyed by user ID. The file list will also store other metadata about each file, so just using the filesystem isn't an option.
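To make the idea concrete, here's a minimal sketch of the per-user layout I have in mind. The directory structure, database name, and schema are all placeholders (a temp dir stands in for the real data root):

```python
import sqlite3
import tempfile
from pathlib import Path

# Hypothetical layout: one "files.db" per user under a shared data
# root. A temp dir stands in for the real root in this sketch.
DATA_ROOT = Path(tempfile.mkdtemp())

SCHEMA = """
CREATE TABLE IF NOT EXISTS files (
    id    INTEGER PRIMARY KEY,
    path  TEXT NOT NULL,
    size  INTEGER NOT NULL,
    mtime REAL NOT NULL
);
"""

def open_user_db(user_id: int) -> sqlite3.Connection:
    """Open (creating if necessary) one user's private database."""
    user_dir = DATA_ROOT / str(user_id)
    user_dir.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(user_dir / "files.db")
    conn.executescript(SCHEMA)
    return conn

conn = open_user_db(42)
conn.execute(
    "INSERT INTO files (path, size, mtime) VALUES (?, ?, ?)",
    ("docs/report.pdf", 123456, 1700000000.0),
)
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM files").fetchone()[0]
```

Each login would just open that one small file, so a user's queries never touch anyone else's data.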
My thinking is that when users are logged in and viewing their files, it will be faster to read everything from a single small SQLite database than to have MySQL sift through every record in the "file" table to pull out one user's files by ID. Each user will easily have 10,000+ entries, and there will initially be at least 400 users. So: 400 individual SQLite databases of ~10,000 rows each, or a single MySQL table with 4 million rows? Keeping in mind that all 400 users will rarely (if ever) be logged in at the same time, it seems inefficient to make the database deal with data for users who aren't there, even if the table is indexed.
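For comparison, this is the single-table alternative I'd be rejecting, sketched in SQLite for brevity (the MySQL DDL would look much the same; table and column names are illustrative, and the row counts are scaled down). The query plan shows that an index on `user_id` turns the per-user lookup into an index search rather than a scan of the whole table:

```python
import sqlite3

# Single shared table with an index on user_id (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE file (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL,
    path    TEXT NOT NULL,
    size    INTEGER NOT NULL
)""")
conn.execute("CREATE INDEX idx_file_user ON file (user_id)")

# 400 users x 100 files each (scaled down from 10,000 for the sketch).
conn.executemany(
    "INSERT INTO file (user_id, path, size) VALUES (?, ?, ?)",
    [(uid, f"file_{n}", n) for uid in range(1, 401) for n in range(100)],
)
conn.commit()

# The plan reports an index SEARCH, not a full-table SCAN.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM file WHERE user_id = ?", (42,)
).fetchall()
user_count = conn.execute(
    "SELECT COUNT(*) FROM file WHERE user_id = ?", (42,)
).fetchone()[0]
```

So the indexed lookup itself is cheap either way; the question is more about cache locality, backup granularity, and whether idle users' rows cost anything in practice.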
The single biggest restriction of SQLite is its database-level locking, but luckily in this case there will only ever be a single process writing to each database, so that shouldn't be a problem here. The added administration of backing up the individual SQLite databases is trivial, since they're all going to be part of the incremental filesystem backup anyway.
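One caveat on the backup point: a plain filesystem copy taken mid-write can capture an inconsistent snapshot of a live SQLite file. A sketch of a safer approach, using SQLite's online backup API (exposed in Python as `Connection.backup`; paths and schema here are placeholders):

```python
import sqlite3
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())

# Create and populate a stand-in "live" per-user database.
src = sqlite3.connect(tmp / "files.db")
src.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, path TEXT)")
src.execute("INSERT INTO files (path) VALUES ('a.txt')")
src.commit()

# Snapshot via the online backup API, so the copy is consistent
# even if the (single) writer happens to be mid-transaction.
dst = sqlite3.connect(tmp / "files.backup.db")
src.backup(dst)

n = dst.execute("SELECT COUNT(*) FROM files").fetchone()[0]
```

The backup files it produces are then safe to sweep up with the incremental filesystem backup.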
Thoughts? Opinions? Am I over thinking this?