MySQL Replication
Set up replication in MySQL. You will have to set up a Master and a Slave server. Reads (and backup jobs) can be directed to the Slave server while writes continue to go to the Master. The advantage of having replication is that you can take a backup from your Slave server without interrupting the Master: your application continues to work against the Master with no downtime.
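As a minimal sketch (assuming classic binary-log replication on MySQL 5.x, not GTID-based): the host names, the repl user and password, and the log file/position values below are placeholders; the coordinates would come from SHOW MASTER STATUS or from the --master-data line of your initial dump.
# On the Master: enable binary logging and give it a unique server-id in my.cnf
#   [mysqld]
#   server-id = 1
#   log-bin   = mysql-bin
# then create a replication user (name and password are placeholders)
mysql -h master -u root -p -e"CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password'; GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';"
# On the Slave: set server-id = 2 in my.cnf, load an initial copy of the data,
# then point the slave at the master using the binlog coordinates taken when that copy was made
mysql -h slave -u root -p -e"CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='repl', MASTER_PASSWORD='repl_password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE;"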
Using MySQL Dump
If your data set is small (I realize "small" is relative), then mysqldump will probably work great. It's easy, it's online, and it's very flexible.
Just a few of the things mysqldump can do:
- back up everything, or just certain databases or tables
- back up only the DDL
- optimize the dump for a faster restore
- make the resultant SQL file more compatible with other RDBMSs
- and many more things.
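For instance, here is a sketch of a schema-only (DDL) dump of a single database; the database name mydb is just a placeholder:
mysqldump -h... -u... -p... --no-data mydb > mydb_schema.sql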
However, the most important options are related to the consistency of your backup. My favorite options are:
- --single-transaction: this option gives a consistent backup if (and only if) the tables use the InnoDB storage engine. If you have any non-read-only MyISAM tables, don't use this option when backing them up.
- --master-data=2: this option makes sure your dump is consistent (by doing a lock-all-tables unless you have added --single-transaction). The --master-data option also records the binary log position in the resulting dump file (=2 causes this line to be written as a comment in the dump file).
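Put together, a consistent dump of an all-InnoDB instance might look like the sketch below; host, user, and password are placeholders, as in the rest of this article:
mysqldump -h... -u... -p... --single-transaction --master-data=2 --hex-blob --routines --triggers --all-databases | gzip > ConsistentDump.sql.gz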
Performance parameters for mysqldump
The main bottleneck in a dump like this is drive I/O. You are reading a load of data and writing it again. You can speed this up in several ways:
- Make sure your output is going to a different drive(s) than the one(s) the database files are stored on – this will make a massive difference with spinning disks as the drive heads will not be constantly flicking between the location being read from and the location being written to.
- The output of mysqldump will be very compressible, so if you cannot separate the output from the input as mentioned above, pipe the output through gzip or similar. This will reduce the amount of writing being done (so reduce the overall I/O load, and the amount of head movement) at the expense of some CPU time (which you may have a lot of spare at these times anyway).
- Only run your backup process when IO load is otherwise low.
- If you are sending the data directly over the connection (i.e. you are running mysqldump on your local machine against a remote database, so the dump appears locally) you might be better off running the dump on the server first, compressing as needed, then transferring the data over the network using a tool (such as rsync) which supports partial transfers so you can resume the transfer (instead of restarting) if a connection drop interrupts it.
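As a sketch of that last point (paths and host names are hypothetical): dump and compress locally on the database server, then pull the file to the backup host with rsync so an interrupted transfer can be resumed rather than restarted.
# on the database server: dump and compress to a drive other than the datadir
mysqldump -u... -p... --single-transaction --all-databases | gzip > /backup/MySQLData.sql.gz
# from the backup host: --partial keeps a partially transferred file so the copy can resume
rsync --partial --progress user@dbserver:/backup/MySQLData.sql.gz /backups/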
INSIGHT INTO DOING BACKUPS WITH mysqldump
IMHO, doing backups has become more of an art form once you know just how to approach it.
You have options:
Option 1: mysqldump an entire mysql instance
This is the easiest one, the no-brainer!!!
mysqldump -h... -u... -p... --hex-blob --routines --triggers --all-databases | gzip > MySQLData.sql.gz
Everything written in one file: table structures, indexes, triggers, stored procedures, users, encrypted passwords. Other mysqldump options can also export unique styles of INSERT commands, log file and position coordinates from binary logs, database creation options, partial data (the --where option), and so forth.
Option 2: mysqldump separate databases into separate data files
Start by creating a list of databases (2 techniques to do this)
Technique 1
mysql -h... -u... -p... -A --skip-column-names -e"SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','mysql')" > ListOfDatabases.txt
Technique 2
mysql -h... -u... -p... -A --skip-column-names -e"SELECT DISTINCT table_schema FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','mysql')" > ListOfDatabases.txt
Technique 1 is the faster way. Technique 2 is the surer and safer one. Technique 2 is better because, sometimes, users create folders for general purposes in /var/lib/mysql (datadir) which are not database related. The information_schema would register such a folder as a database in the information_schema.schemata table. Technique 2 bypasses folders that do not contain mysql data. Once you compile the list of databases, you can proceed to loop through the list and mysqldump them, even in parallel if so desired.
# the trailing & runs each dump in the background; wait pauses until they all finish
for DB in `cat ListOfDatabases.txt`
do
    mysqldump -h... -u... -p... --hex-blob --routines --triggers ${DB} | gzip > ${DB}.sql.gz &
done
wait
If there are too many databases to launch at one time, parallel dump them 10 at a time:
# launch dumps in the background, pausing every COMMIT_LIMIT dumps until they finish
COMMIT_COUNT=0
COMMIT_LIMIT=10
for DB in `cat ListOfDatabases.txt`
do
    mysqldump -h... -u... -p... --hex-blob --routines --triggers ${DB} | gzip > ${DB}.sql.gz &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi
Option 3: mysqldump separate tables into separate data files
Start by creating a list of tables
mysql -h... -u... -p... -A --skip-column-names -e"SELECT CONCAT(table_schema,'.',table_name) FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','mysql')" > ListOfTables.txt
Then dump all tables in groups of 10
# parse the database and table names from each db.table entry, dumping 10 tables at a time
COMMIT_COUNT=0
COMMIT_LIMIT=10
for DBTB in `cat ListOfTables.txt`
do
    DB=`echo ${DBTB} | sed 's/\./ /g' | awk '{print $1}'`
    TB=`echo ${DBTB} | sed 's/\./ /g' | awk '{print $2}'`
    mysqldump -h... -u... -p... --hex-blob --triggers ${DB} ${TB} | gzip > ${DB}_${TB}.sql.gz &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi
Option 4: USE YOUR IMAGINATION
Try variations of the Options plus techniques for clean snapshots
Examples
- Order the list of tables by the size of each table, ascending or descending (see the query sketch after this list).
- Using a separate process, run "FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400)" before launching the mysqldumps. Kill this process after the mysqldumps are complete. This is helpful if a database contains both InnoDB and MyISAM tables.
- Save the mysqldumps in dated folders and rotate out old backup folders.
- Load whole instance mysqldumps into standalone servers.
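For the first example above (ordering the table list by size), one possible sketch uses information_schema to sort by data plus index length, largest first:
mysql -h... -u... -p... -A --skip-column-names -e"SELECT CONCAT(table_schema,'.',table_name) FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','mysql') ORDER BY (data_length+index_length) DESC" > ListOfTables.txt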
CAVEAT
Only Option 1 brings everything. The drawback is that a mysqldump created this way can only be reloaded into the same major release version of MySQL that the mysqldump was generated from. In other words, a mysqldump from a MySQL 5.0 database cannot be loaded into 5.1 or 5.5. The reason? The mysql schema is totally different among major releases.
Options 2 and 3 do not include saving usernames and passwords.
Here is a generic way to dump the SQL grants for users that is readable and more portable:
mysql -h... -u... -p... --skip-column-names -A -e"SELECT CONCAT('SHOW GRANTS FOR ''',user,'''@''',host,''';') FROM mysql.user WHERE user<>''" | mysql -h... -u... -p... --skip-column-names -A | sed 's/$/;/g' > MySQLGrants.sql
Option 3 does not save the stored procedures, so you can do the following
mysqldump -h... -u... -p... --no-data --no-create-info --routines > MySQLStoredProcedures.sql
Another point that should be noted concerns InnoDB. If you have a large InnoDB buffer pool, it makes sense to flush it as best you can before performing any backups. Otherwise, MySQL spends time flushing tables with leftover dirty pages out of the buffer pool. Here is what I suggest:
About 1 hour before performing the backup, run this SQL command:
SET GLOBAL innodb_max_dirty_pages_pct = 0;
In MySQL 5.5, the default innodb_max_dirty_pages_pct is 75. In MySQL 5.1 and earlier, the default is 90. Setting innodb_max_dirty_pages_pct to 0 hastens the flushing of dirty pages to disk. This will prevent, or at least lessen, the impact of cleaning up any incomplete two-phase commits of InnoDB data prior to performing any mysqldump against InnoDB tables.
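Once the backup has finished, you would normally put the setting back to its default, for example:
SET GLOBAL innodb_max_dirty_pages_pct = 75;  -- use 90 on MySQL 5.1 and earlier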
LVM snapshot
For those who have larger datasets, a physical backup is the way to go. While you could take a cold backup (i.e., shut down the MySQL service, copy the data directory, restart the service), many people do not want downtime. My favorite solution is snapshots. These can be hot (for InnoDB) or require a brief lock (for MyISAM). Don't forget to include all your data (including the ib_logfiles).
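A rough sketch of the snapshot approach is shown below. It assumes the data directory lives on an LVM logical volume; the volume group and volume names (vg0, mysqldata), the mount point, the snapshot size, and the sleep durations are all hypothetical and would need to be adapted.
# hold a global read lock in a background session (same idea as Option 4 above)
mysql -u... -p... -e"FLUSH TABLES WITH READ LOCK; SELECT SLEEP(600);" &
LOCK_PID=$!
sleep 5   # crude: give the session a moment to acquire the lock
# create the snapshot of the (hypothetical) logical volume holding the datadir
lvcreate --size 5G --snapshot --name mysql-snap /dev/vg0/mysqldata
# release the lock by ending the session that holds it
kill ${LOCK_PID}
# mount the snapshot, copy everything (including the ib_logfiles), then drop it
mkdir -p /mnt/mysql-snap
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ /backup/mysql-$(date +%F)/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap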
Using MySQL Enterprise Backup
Advantages of using MySQL Enterprise Backup:
- "Hot" backups of InnoDB tables take place entirely online, without blocking
- Back up only particular tables or tablespaces
- Only back up the data that has changed since a prior backup
- Compressed backups, which can save up to 90% of storage, and many more
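As a sketch only (check the MySQL Enterprise Backup manual for your version, since the exact options vary), a full backup with the mysqlbackup client looks roughly like this; the backup directory is a placeholder:
mysqlbackup --user=... --password --backup-dir=/backups/full backup-and-apply-log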