Wednesday, April 29, 2015

Export and Import a compressed MySQL dump

I've had to do this so many times that I'm putting it on my blog as a reminder of how useful it is.

MySQL dumps are not compressed. The result is a flat text file, which leaves a lot of room for compression, so running the dump through gzip can save a lot of space when moving dump files around. The dump can be compressed on the fly as mysqldump produces it, like this:

mysqldump -u {user} -p {database} | gzip > outputfile.sql.gz

Likewise, it can be uncompressed while importing it back into MySQL:

gunzip < outputfile.sql.gz | mysql -u {user} -p {database}
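
As a bonus, you can peek inside the compressed dump without unpacking it, for example to sanity check the first few lines (zcat reads a gzip file and writes the plain text to stdout):

zcat outputfile.sql.gz | head -20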

Wednesday, April 22, 2015

AWS RDS migration - Access denied; you need (at least one of) the SUPER privilege(s) for this operation

I had to migrate a small database from one AWS data center to another. It was such a small database that I wasn't expecting any problems. When you set up an AWS RDS instance you choose a master account and password. The people who originally set things up had chosen a master account name that related to the app name. On the new AWS RDS setup, I wanted the master account to be "root". This was my first time setting up an RDS instance so I didn't really have any idea what the best practices were.

After everything was set up and all the ports were opened up for MySQL and such, I started the import. I kept getting errors like this:

[ERROR in query ####] You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

My database had a number of triggers, but there is no way to grant the SUPER privilege in AWS RDS.

After some searching I discovered that AWS RDS MySQL triggers are not enabled by default.

I found this answer on Stack Overflow and applied it to my AWS instance:
http://stackoverflow.com/questions/8919907/can-i-create-trigger-in-an-rds-db

--- Copied Part Begins

First off, it seems to be impossible to apply SUPER privileges to the default parameter group. So what I had to do was create a new DB Parameter Group, either through the console or the CLI:

What I found was that the default region was not the region I was trying to use, so the key was to pass a --region parameter so the change applied to the group in the correct region where I was deploying my DB Instance:

rds-create-db-parameter-group --db-parameter-group-name allow-triggers --description 'parameter group to allow triggers' --region your-region

Next, I had to create a DB Instance which used that parameter group (again through the console or CLI):

rds-create-db-instance

Then I had to modify the Parameter Group to allow log_bin_trust_function_creators, which is only possible through the CLI:

rds-modify-db-parameter-group --db-parameter-group-name yourgroupname --region yourRegion --parameters 'name=log_bin_trust_function_creators,value=true,method=immediate'

Lastly, I had to modify the created DB Instance to allow triggers, also CLI only:

rds-modify-db-instance --db-instance-identifier your-db-instance-id --db-parameter-group-name allow-triggers --apply-immediately

--- Copied Part Ends
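
A side note: the rds-* commands above are from the old standalone RDS command line tools, which have since been replaced by the unified aws CLI. If you are on the newer tooling, the parameter change should look roughly like this (double check the flags against the aws rds documentation for your version):

aws rds modify-db-parameter-group --db-parameter-group-name allow-triggers --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"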

However, I still couldn't complete the import! I was so confused. Then I found this link:
https://www.percona.com/blog/2014/07/02/using-mysql-triggers-and-views-in-amazon-rds/

I learned that in AWS RDS MySQL, the DEFINER on a trigger can only be set to the same user that is running the import and creating the triggers. The import will fail if you are logged in as "bob" and trying to make "root" the definer. You either need to run the import as root if you want root to be the definer, or remove the syntax that assigns the definer and let it default to whatever user you are logged in as. If you try to assign the definer to a different user, you will get errors like this:

ERROR 1227 (42000) at line 875: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
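
If rerunning the import as the right user isn't an option, one common workaround is to strip the DEFINER clauses out of the dump before importing. Assuming the dump uses the usual DEFINER=`user`@`host` syntax (the file names here are just placeholders), something like this should do it:

sed 's/DEFINER=`[^`]*`@`[^`]*`//g' dump.sql > dump_nodefiner.sql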


Wednesday, April 8, 2015

MySQL error converting MyISAM to InnoDB for tables with over 1000 columns

My company has an app that allows customers to create their own custom columns on their data model. I think this was an absolutely horrible architectural decision, but it was made years before I ever started working at the company. We've had a number of clients create over 1000 columns. These tables are MyISAM, so this is allowed. We decided it was finally time to switch over to InnoDB but kept getting an error when trying to convert the tables. The annoying thing about these errors is that they don't tell you which table is having problems. I was converting the entire MySQL instance to InnoDB by querying the information schema for any tables that were still MyISAM and generating the ALTER statements to do the conversion, like this:

SELECT @@hostname, table_schema, table_name, create_time, update_time, CONCAT("ALTER TABLE `",table_schema,"`.`",table_name,"` ENGINE = InnoDB;")
FROM INFORMATION_SCHEMA.TABLES
WHERE engine = 'MyISAM'
AND table_schema NOT IN ('information_schema','mysql');
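
To actually run the generated statements, one approach (a sketch) is to save a version of that query that selects only the CONCAT column into a file, say gen_alters.sql, and use the mysql client's batch options: -N suppresses the column header and -B produces plain tab-separated output, so what comes out is just runnable SQL:

mysql -u {user} -p -N -B < gen_alters.sql > convert_to_innodb.sql
mysql -u {user} -p < convert_to_innodb.sql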

After each error, I re-ran this query and looked at the first table at the top of the list. A SHOW CREATE TABLE <table_name> showed that the number of columns was over 1000. (errno 139 is InnoDB complaining that the row is too big; InnoDB has much tighter limits on column count and row size than MyISAM.) The errors looked like this:

ERROR 1005 (HY000) at line 1: Can't create table 'database_1.#sql-9b4_ad23c1a' (errno: 139)
ERROR 1005 (HY000) at line 2428: Can't create table 'database_2.#sql-6228_506cb9b1' (errno: 139)
ERROR 1005 (HY000) at line 205: Can't create table 'database_3.#sql-8311_fc9ed4b' (errno: 139)
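
Rather than eyeballing SHOW CREATE TABLE output, the information schema can also report column counts directly, which makes it easy to spot every table that will hit the limit (a sketch; adjust the threshold to taste):

SELECT table_schema, table_name, COUNT(*) AS column_count
FROM INFORMATION_SCHEMA.COLUMNS
GROUP BY table_schema, table_name
HAVING COUNT(*) > 1000
ORDER BY column_count DESC;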

Wednesday, April 1, 2015

Finding stuff on Linux

This is not directly MySQL related, but I'm always having to find things on my Linux servers. Here are some useful examples:

Finding a string value somewhere in a directory or sub directory in some file or multiple files:

grep -r "string to be searched" /path/to/dir
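
The same search with line numbers and case insensitivity (-n and -i), or with -l to list only the names of the files that match:

grep -rin "string to be searched" /path/to/dir
grep -rl "string to be searched" /path/to/dir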


Finding large files on your Linux machine (the first line will be the grand total from the -c flag, followed by the 10 biggest entries under the current directory):

du -cks ./* | sort -rn | head -11
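
When the big files are buried deeper in the tree, find can search by size directly; this sketch lists everything over 100MB starting from the root (adjust the path and threshold as needed):

find / -type f -size +100M -exec ls -lh {} \;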

Finding a file anywhere on the file system (the forward slash tells find to start at the root of the file system):

find / -name file_name

Finding things using the locate command (which relies on an index, so it is faster):

updatedb <-- This will update the index file; it can take a while on a large system and will need to be re-run after adding or changing a lot of files

locate file_name