I have a database in which three of the tables each hold more than 20 million rows. Unfortunately, I used GUIDs as primary keys. The database is now about 20 GB and grows by about 5 GB per month.
A full backup takes about 2 hours, and a restore takes about 30 hours on a box with 4 GB of RAM.
At one point, all the tables in this database disappeared. The other MySQL databases on the same server were fine, except for one in which only the data disappeared, leaving empty tables.
A SELECT query (one of many slow queries) that gets the MAX of a date column in one of the 20-million-row tables takes about 5 minutes to return a result. This query is used quite frequently.
What I'm looking for answers to:
- recommended DB design changes
- ways to improve SELECT query performance – MAX of a date column over 20 million records
- other queries' performance
- how to go about handling future db growth
Thanks all for your attention.
Best Answer
I've seen setups of larger size (with InnoDB as the storage engine and a GUID as the primary key), and there were no such problems.

The tables may seem empty if the system LSN has gone below each page's LSN. This may happen if the InnoDB log files are corrupt. InnoDB, however, will issue a warning in this case.

Create an index on this column, and the query will be instant.
Please post the exact query and I'll tell you how to create the best index.
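As a sketch, assuming a table named orders with a date column created_at (hypothetical names – substitute your own), the index and the now-fast query would look like this:

```sql
-- Hypothetical table and column names.
-- With a B-tree index on the date column, MAX() is answered by
-- reading the last entry of the index instead of scanning 20M rows.
ALTER TABLE orders ADD INDEX idx_created_at (created_at);

-- Resolved from the tail of the index, effectively instantly:
SELECT MAX(created_at) FROM orders;
```

With such an index in place, EXPLAIN on the query typically reports "Select tables optimized away", meaning the optimizer answered it from the index alone.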
I see no problem in the DB design as such; most probably it's something with your server. Is it possible to reproduce this behavior on another server with a clean vanilla MySQL installation?

You may also want to try to split data between the tables: set innodb_file_per_table and restore from the backup.
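For reference, the setting goes in the [mysqld] section of my.cnf. It only affects tables created (or re-created) after it is enabled, which is why the dump-and-restore step is needed to move existing tables out of the shared ibdata1 tablespace:

```ini
# my.cnf
[mysqld]
# Store each InnoDB table in its own .ibd file instead of
# the shared system tablespace (ibdata1).
innodb_file_per_table = 1
```

Per-table files also let the OS reclaim disk space when a table is dropped or rebuilt, which the shared tablespace never gives back.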