list tables in mysql dump

$ zgrep -o '^CREATE.*' database_backup.sql.gz

CREATE TABLE `aa_migrations` (
CREATE TABLE `abcdefg_bar` (
CREATE TABLE `abcdefg_foo` (
CREATE TABLE `abcdefg_images` (
CREATE TABLE `abcdefg_table12` (
CREATE TABLE `abcdefg_table13` (
CREATE TABLE `abcdefg_table14` (
CREATE TABLE `abcdefg_table15` (
CREATE TABLE `abcdefg_users` (

You could also just count them:

$ zgrep -o '^CREATE.*' database_backup.sql.gz | wc -l
9
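
If you only want the bare table names, awk can split on the backticks; this is a small sketch against the same dump file:

$ zgrep '^CREATE TABLE' database_backup.sql.gz | awk -F'`' '{print $2}'

aa_migrations
abcdefg_bar
abcdefg_foo
abcdefg_images
abcdefg_table12
abcdefg_table13
abcdefg_table14
abcdefg_table15
abcdefg_users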


tcpdump mysql queries

If you have access to the MySQL server and query logging is turned on, then you have access to the queries as they are logged. Many production databases do not have logging turned on, simply because there are too many queries to handle, and with hundreds of servers hitting the database at any given time it can be hard to pick out the activity of a particular client in the logs. To take a look at MySQL queries as they leave a webserver, you can use tcpdump and massage the output to see which queries are being sent from that host.

# tcpdump -i eth0 -l -s 0 -w - dst port 3306 | stdbuf -o0 strings | stdbuf -o0 grep "SELECT\|INSERT\|UPDATE\|FROM\|WHERE\|ORDER\|AND\|LIMIT\|SET\|COMMIT\|ROLLBACK"

Sometimes the query gets broken up into pieces when WHERE or LIMIT is used, and those pieces wind up on separate lines, so we need to grep for those keywords separately. stdbuf forces every stage of the pipeline NOT to buffer its output, i.e. to print in pseudo real time.
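
If the webserver talks to more than one database, you can also narrow the capture in the tcpdump filter itself; here 10.0.0.5 is just a placeholder for the database host you care about:

# tcpdump -i eth0 -l -s 0 -w - dst port 3306 and dst host 10.0.0.5 | stdbuf -o0 strings | stdbuf -o0 grep "SELECT\|INSERT\|UPDATE"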


count rows in mysqldump

Get a rough estimate of the row count from multiple mysqldump files, for a single table that was split into multiple sections:

# cat table1.*.sql | sed -e 's/),(/\n/g' | wc -l
4699008
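
The same trick works without decompressing first, assuming the split files are gzipped with the same naming scheme plus a .gz suffix:

# zcat table1.*.sql.gz | sed -e 's/),(/\n/g' | wc -l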

Here’s the count from the actual table:

mysql> select count(id) from table;
+-----------+
| count(id) |
+-----------+
|   4692064 |
+-----------+
1 row in set (0.00 sec)

Counting the inserted rows in the mysqldump gives a good rough estimate. In this case it’s off by about 0.1%, because of pattern-matching artifacts or changes to the table between the dump and the count.


get live config values on MySQL Cluster ndbcluster

Sometimes it’s necessary to double check the configuration values actually being used by the live ndbd/ndbmtd processes.

Some operational configs for one node:

# ndb_config -r'\n' -f: --nodeid=12 -q id,MaxNoOfConcurrentIndexOperations,MaxNoOfFiredTriggers --config-from-node=12 
12:81920:40000
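
To see which data nodes exist in the first place, the same tool can list node ids and hostnames; a quick sketch, assuming the management server is reachable from wherever you run it:

# ndb_config -r'\n' -f: --type ndbd -q id,HostName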

Memory allocations on data (ndbd) nodes:

# ndb_config -r'\n' -f: --type ndbd -q id,DataMemory,IndexMemory --config-from-node=12
11:51539607552:4294967296
12:51539607552:4294967296
13:51539607552:4294967296
14:51539607552:4294967296
15:51539607552:4294967296
16:51539607552:4294967296
17:51539607552:4294967296
18:51539607552:4294967296
19:51539607552:4294967296
20:51539607552:4294967296
21:51539607552:4294967296
22:51539607552:4294967296
23:51539607552:4294967296
24:51539607552:4294967296
25:51539607552:4294967296
26:51539607552:4294967296
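
Those values are in bytes; a little awk on the same query makes them readable (51539607552 bytes works out to 48 GB of DataMemory and 4294967296 bytes to 4 GB of IndexMemory per node):

# ndb_config -r'\n' -f: --type ndbd -q id,DataMemory,IndexMemory --config-from-node=12 | awk -F: '{printf "node %s: %.0f GB data, %.0f GB index\n", $1, $2/2^30, $3/2^30}'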

Due to config caching, pending restarts, or a mismatch between management server (ndb_mgmd) nodes, the values in config.ini might differ from what the ndbcluster processes are actually using.


compare mysql datetime timestamp days ago

To get all entries in a table whose “datetime” field is newer than 3 days ago (e.g. a user who last logged in within the last 72 hours):

mysql> select id,timestamp_field from sometable where timestamp_field > ( CURDATE() - INTERVAL 3 DAY );

Note that you could just use a “where … like” comparison if you were looking for just today, just yesterday, or just some month. Also note that CURDATE() is today’s date at midnight, so the query above really means “since midnight three days ago”; to get exactly “right now minus 72 hours”, compare against NOW() instead.
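
For an exact 72-hour window, or for a LIKE match on a whole month, the comparisons would look like this (sketches against the same hypothetical table; '2014-03' is just a placeholder month, and the LIKE form won't use an index on the datetime column):

mysql> select id,timestamp_field from sometable where timestamp_field > ( NOW() - INTERVAL 3 DAY );

mysql> select id,timestamp_field from sometable where timestamp_field like '2014-03%';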

There’s a multitude of ways to compare timestamps against a point in time (e.g. NOW()).

Here’s older than 3 months:

 timestamp_field < ( CURDATE() - INTERVAL 3 MONTH )

MySQL change field separator for select dump

You can select from a table into an outfile and change the field delimiter. This is interactive, and the file is written on the MySQL server host itself by the mysqld process, so you need the FILE privilege and the server needs write permission to wherever you want to write the file.

mysql> SELECT 'pid','email' UNION SELECT pid,email INTO OUTFILE '/tmp/test.csv' FIELDS TERMINATED BY ',' FROM table;
Query OK, 11837 rows affected (0.21 sec)

mysql> Bye
# head /tmp/test.csv
pid,email
1081603,user1@fordodone.com
888151,user2@fordodone.com
781,user3@fordodone.com
2307364,user4@fordodone.com
2286573,user5@fordodone.com
2212194,user6@fordodone.com
2137603,user7@fordodone.com
500492,user8@fordodone.com
1514582,user9@fordodone.com
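
If the fields might themselves contain commas, the OUTFILE clause also takes enclosure and line-terminator options; this is a sketch against the same table, writing to a hypothetical /tmp/test2.csv:

mysql> SELECT pid,email FROM table INTO OUTFILE '/tmp/test2.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n';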

This is non-interactive and can be done from a remote host:

# echo "select pid,email from table;" | mysql -h dbhost -udbuser -p -Ddbname | awk '{for (i=1;i<NF;i++) printf "%s,",$i; printf $i"\n"}' > /tmp/test.csv