Category Archives: MySQL

Technical articles on MySQL database configuration and usage

[Repost] Measuring MySQL TPS and QPS, plus IOPS, with a Python script

MySQL TPS and QPS statistics, and IOPS
http://kan.weibo.com/con/3523021944781882

Calculating MySQL QPS
show global status where Variable_name in('com_select','com_insert','com_delete','com_update');
wait 10 seconds
show global status where Variable_name in('com_select','com_insert','com_delete','com_update');

QPS: $stat_val{"Com_select"} + $stat_val{"Com_insert"} + $stat_val{"Com_update"} + $stat_val{"Com_delete"}

Take the difference between the two samples and divide by the interval.
Calculating MySQL TPS
show global status where Variable_name in('com_insert','com_delete','com_update');
wait 10 seconds
show global status where Variable_name in('com_insert','com_delete','com_update');
Take the difference between the two samples and divide by the interval.

TPS: $stat_val{"Com_insert"} + $stat_val{"Com_update"} + $stat_val{"Com_delete"}

Calculating and measuring disk IOPS

IOPS is the number of I/O requests a system can service per unit of time, usually expressed as requests per second; an I/O request is typically a read or a write. For workloads with frequent random reads and writes, such as OLTP, IOPS is the key metric.
Data throughput is the amount of data that can be successfully transferred per unit of time. Workloads dominated by large sequential reads and writes, such as VOD (Video on Demand), care more about throughput.
The time a disk takes to complete one I/O request consists of three parts: seek time, rotational latency, and data transfer time.
Seek time (Tseek) is the time needed to move the read/write head onto the correct track. The shorter the seek time, the faster the I/O; current disks average roughly 3-15 ms.
Rotational latency (Trotation) is the time for the platter to bring the sector holding the requested data under the head. It depends on spindle speed and is usually taken as half the time of one full revolution: a 7200 rpm disk has an average rotational latency of about 60*1000/7200/2 = 4.17 ms, while a 15000 rpm disk averages about 2 ms.
Data transfer time (Ttransfer) is the time needed to transfer the requested data; it equals the data size divided by the transfer rate. Interface rates currently reach 133 MB/s for IDE/ATA and 300 MB/s for SATA II, so the transfer time is usually far smaller than the other two components.

Given this, the theoretical maximum average IOPS of a disk can be computed as IOPS = 1000 ms / (Tseek + Trotation), ignoring data transfer time. Assuming an average physical seek time of 3 ms, the theoretical maximum IOPS for disks spinning at 7200, 10K, and 15K rpm works out to:
IOPS = 1000 / (3 + 60000/7200/2) = 140
IOPS = 1000 / (3 + 60000/10000/2) = 167
IOPS = 1000 / (3 + 60000/15000/2) = 200
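
This arithmetic is easy to reproduce; here is a quick Python sketch of the same formula, computing the theoretical maximum for any seek time and spindle speed:

def max_iops(seek_ms, rpm):
    # Theoretical maximum IOPS, ignoring data transfer time:
    # average rotational latency is half a revolution, in ms.
    rotation_ms = 60.0 * 1000 / rpm / 2
    return 1000 / (seek_ms + rotation_ms)

for rpm in (7200, 10000, 15000):
    # Prints 140, 167 and 200 IOPS, matching the figures above.
    print 'rpm %5d: %.0f IOPS' % (rpm, max_iops(3, rpm))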
A solid-state drive (SSD) is an electronic device with no seek or rotation costs; addressing a storage cell is very cheap, so IOPS can be extremely high, reaching tens or even hundreds of thousands. In practice, measured IOPS is affected by many factors, including the I/O workload profile (read/write ratio, sequential vs. random access, number of worker threads, queue depth, record size), system configuration, operating system, disk drivers, and so on.
IOPS comparisons are therefore only meaningful under the same test conditions, and even then some random variation remains. IOPS is commonly broken down into the following metrics:
Total IOPS: IOPS under a mixed read/write, mixed sequential/random workload; this is closest to real-world I/O, and it is what most applications care about.
Random Read IOPS: IOPS under a 100% random-read workload.
Random Write IOPS: IOPS under a 100% random-write workload.
Sequential Read IOPS: IOPS under a 100% sequential-read workload.
Sequential Write IOPS: IOPS under a 100% sequential-write workload.
Common IOPS benchmark tools include Iometer, IOzone, and FIO; together they can measure disk IOPS under all of these scenarios. For an application system, first determine the workload's profile, then pick the matching IOPS metric to measure and compare, and choose storage media and software accordingly. Reference IOPS figures for common disks can be found at http://en.wikipedia.org/wiki/IOPS [1].

#!/usr/bin/env python
#coding=utf-8

import time
import sys
import os
# query MySQL via MySQLdb
import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1',port=3306,user='root',passwd='123456', charset='utf8')
conn.autocommit(True)
cursor = conn.cursor()

def sample():
    """Return a {variable_name: value} dict of the four Com_* counters."""
    cursor.execute("show global status where Variable_name in"
                   "('com_select','com_insert','com_delete','com_update')")
    return dict((name, int(value)) for name, value in cursor.fetchall())

while True:
    diff = 1   # sampling interval in seconds
    first = sample()
    time.sleep(diff)
    second = sample()

    # QPS counts all four commands; TPS excludes Com_select.
    qps = sum(second[k] - first[k] for k in second)
    tps = qps - (second['Com_select'] - first['Com_select'])

    print 'qps = %s' % (qps / diff)
    print 'tps = %s' % (tps / diff)
conn.close()

MySQL master-master installation and configuration

For installing MySQL itself, see "Compiling and installing Percona 5.5 from source on CentOS" below.
When setting up MySQL master-master replication, the key my.cnf settings are as follows:
MySQL instance 1

server-id = 1
# auto-increment offset: with offset=1 and increment=2, this instance
# generates ids 1,3,5,... while instance 2 generates 2,4,6,..., so the two never collide
auto_increment_offset=1
# auto-increment step
auto_increment_increment=2
log-bin = /webserver/mysql/3306/binlog/binlog
log-slave-updates 
replicate-do-db = test1
replicate-do-db = test2
replicate-ignore-db = information_schema
replicate-ignore-db = mysql
# relay log base name
relay-log = /webserver/mysql/3306/relaylog/relaylog
# relay log index
relay-log-index = /webserver/mysql/3306/relaylog/relaylog
relay-log-info-file = /webserver/mysql/3306/relaylog/relaylog

MySQL instance 2

server-id = 2
# auto-increment offset: instance 2 starts at 2, generating ids 2,4,6,...
auto_increment_offset=2
# auto-increment step
auto_increment_increment=2
log-bin = /webserver/mysql/3307/binlog/binlog
log-slave-updates 
replicate-do-db = test1
replicate-do-db = test2
replicate-ignore-db = information_schema
replicate-ignore-db = mysql
# relay log base name
relay-log = /webserver/mysql/3307/relaylog/relaylog
# relay log index
relay-log-index = /webserver/mysql/3307/relaylog/relaylog
relay-log-info-file = /webserver/mysql/3307/relaylog/relaylog

Start both instances, then set the replication coordinates.
Perform the following steps on each instance:

master:
grant replication slave on *.* to slave@"192.168.%" identified by '123456';
flush tables with read lock;
show master status;
unlock tables;

slave:
stop slave;
change master to master_host='192.168.117.100',master_user='slave',master_password='123456',master_log_file='binlog.000107',master_log_pos=3535;
start slave;

The two MySQL instances are now masters and slaves of each other.
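
As a quick sanity check of the auto_increment settings, here is a minimal MySQLdb sketch (ours; it assumes both instances run locally on ports 3306/3307, replication is already running, and the test1 database from the replicate-do-db lines above exists) that inserts a row on each instance and shows the interleaved ids:

import MySQLdb

for port in (3306, 3307):
    conn = MySQLdb.connect(host='127.0.0.1', port=port, user='root', passwd='123456', db='test1')
    conn.autocommit(True)
    cur = conn.cursor()
    # IF NOT EXISTS keeps the statement harmless when it also arrives via replication.
    cur.execute("create table if not exists t_check(id int auto_increment primary key, src int)")
    cur.execute("insert into t_check(src) values(%s)", (port,))
    # Instance 1 (offset 1) should yield odd ids, instance 2 (offset 2) even ids.
    cur.execute("select last_insert_id()")
    print 'port %s -> id %s' % (port, cur.fetchone()[0])
    conn.close()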

Installing and testing MySQL 5.6.6-m9

MySQL 5.6.6 has reached milestone 9 (downloadable from the official site). This build embeds memcached support, and MySQL 5.6 also has built-in support for profiling MySQL with gprof, so I decided to spend some time installing and testing the new version.

1) Installation
First, creating the mysql user and group, downloading and unpacking MySQL, and so on are not covered again here. I built from source on CentOS; since 5.5, MySQL is built with cmake instead of the old ./configure. The ./configure options I previously used on my machine were:

./configure --prefix=/usr/ \
            --enable-assembler \
            --with-extra-charsets=complex \
            --enable-thread-safe-client \
            --with-big-tables \
            --with-readline \
            --with-ssl \
            --with-embedded-server \
            --enable-local-infile \
            --with-plugins=partition,innobase,myisammrg

The corresponding cmake options are as follows (for the available options, see http://dev.mysql.com/doc/refman/5.6/en/source-configuration-options.html#option_cmake_storage_engine_options):

cmake .\
-DCMAKE_INSTALL_PREFIX=/usr/  \
-DINSTALL_PLUGINDIR=/usr/lib/mysql/plugin \
-DWITH_EXTRA_CHARSETS=complex \
-DWITH_READLINE=1 \
-DWITH_SSL=system \
-DWITH_EMBEDDED_SERVER=1 \
-DENABLED_LOCAL_INFILE=1 \
-DWITH_INNOBASE_STORAGE_ENGINE=1 \
-DWITH_PARTITION_STORAGE_ENGINE=1 

Some of the old options, such as --enable-assembler and a few others, have been removed in newer MySQL versions and can simply be dropped.

To add memcached support we need to look at the new version's build options. The options cmake supports can be listed as follows:

cmake . -L      #list the options
cmake . -LH     #list the options with help text
cmake . -LAH    #list all options, including advanced ones, with help text
cmake . -LAH | more #same as above, paginated

Scanning the list reveals three options related to memcached and one related to gprof:

ENABLE_MEMCACHED_SASL:BOOL=OFF       #Enable SASL on InnoDB Memcached
ENABLE_MEMCACHED_SASL_PWDB:BOOL=OFF  #Enable SASL on InnoDB Memcached
WITH_INNODB_MEMCACHED:BOOL=OFF
ENABLE_GPROF:BOOL=OFF                #Enable gprof (optimized Linux builds only)

We can leave memcached's SASL authentication alone for now; we only need memcached and gprof, so add those two options, giving the complete cmake invocation:

cmake .\
-DCMAKE_INSTALL_PREFIX=/usr/  \
-DINSTALL_PLUGINDIR=/usr/lib/mysql/plugin \
-DWITH_EXTRA_CHARSETS=complex \
-DWITH_READLINE=1 \
-DWITH_SSL=system \
-DWITH_EMBEDDED_SERVER=1 \
-DENABLED_LOCAL_INFILE=1 \
-DWITH_INNOBASE_STORAGE_ENGINE=1 \
-DWITH_PARTITION_STORAGE_ENGINE=1 \
-DWITH_INNODB_MEMCACHED=1 \
-DENABLE_GPROF=1 

Then make && make install, and the basic installation is done. (A side note: since my earlier MySQL was installed directly under /usr, I chose /usr here as well. It is better to install MySQL in its own directory, such as /usr/local/mysql; the permission steps below become much easier.)
If MySQL is installed in its own directory such as /usr/local/mysql, the following steps are straightforward, as shown below (if not, as in my case, well... enough said):

shell> cd /usr/local/mysql
shell> chown -R mysql .
shell> chgrp -R mysql .
shell> scripts/mysql_install_db --user=mysql #initialize the system tables
shell> chown -R root .
shell> chown -R mysql data
# Next command is optional
shell> cp support-files/my-medium.cnf /etc/my.cnf  #config file
shell> bin/mysqld_safe --user=mysql &
# Next command is optional
shell> cp support-files/mysql.server /etc/init.d/mysql.server  #MySQL init script; optionally enable it at boot

The above is the official MySQL installation procedure; adapt it to your own setup. The detailed configuration file, start/stop scripts, and general MySQL administration are left aside for now.
After the installation completes, start MySQL with its default script and configure memcached, as follows:

mysql> select @@plugin_dir;                 #where MySQL plugins are installed
+------------------------------+
| @@plugin_dir                 |
+------------------------------+
| /usr/local/mysql/lib/plugin/ |
+------------------------------+

mysql> source /usr/local/mysql/scripts/innodb_memcached_config.sql;    #load the memcached configuration tables
mysql> install plugin daemon_memcached soname "libmemcached.so";        #activate the memcached daemon plugin

shell>/etc/init.d/mysql.server restart                                 #restart mysql

At this point memcached is already running by default; netstat -an | grep 11211 shows it listening. Next we verify that memcached works by connecting to it over telnet and manipulating the database through it. Open a console on Windows and connect to the VM (assume 192.168.122.22):

mysql> select * from demo_test;                                   #look at the test table
+----+--------------+------+------+------+
| c1 | c2           | c3   | c4   | c5   |
+----+--------------+------+------+------+
| AA | HELLO, HELLO |    8 |    0 |    0 |
+----+--------------+------+------+------+
1 row in set (0.03 sec)

cmd>telnet 192.168.122.22 11211                                  #insert data via telnet from Windows

set test1 10 0 2
t1
STORED

mysql> select * from demo_test;                                  #the row has been inserted
+-------+--------------+------+------+------+
| c1    | c2           | c3   | c4   | c5   |
+-------+--------------+------+------+------+
| AA    | HELLO, HELLO |    8 |    0 |    0 |
| test1 | t1           |   10 |    3 |    0 |
+-------+--------------+------+------+------+
2 rows in set (0.00 sec)

cmd>telnet 192.168.122.22 11211                                  #delete data via telnet from Windows

delete test1
DELETED

mysql> select * from demo_test;                                  #the row has been deleted
+----+--------------+------+------+------+
| c1 | c2           | c3   | c4   | c5   |
+----+--------------+------+------+------+
| AA | HELLO, HELLO |    8 |    0 |    0 |
+----+--------------+------+------+------+
1 row in set (0.00 sec)

At this point memcached is fully set up, and we have confirmed that the database can be manipulated through memcached as well.
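
The same telnet checks can be scripted. Below is a minimal Python sketch of the raw memcached text protocol over a socket (our own illustration; it assumes the plugin is listening on 192.168.122.22:11211 as above):

import socket

def memc_cmd(sock, command):
    # memcached text commands end with \r\n; return the server's reply.
    sock.sendall(command + '\r\n')
    return sock.recv(4096).strip()

s = socket.create_connection(('192.168.122.22', 11211))
# 'set <key> <flags> <exptime> <bytes>', followed by the value on its own line.
print memc_cmd(s, 'set test1 10 0 2\r\nt1')   # expect STORED
print memc_cmd(s, 'get test1')                # expect VALUE test1 10 2 / t1 / END
print memc_cmd(s, 'delete test1')             # expect DELETED
s.close()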

InfoBright, a data warehouse for MySQL: source installation and usage

I have recently been working on back-end data analysis involving large amounts of data. The data warehouse used there is InfoBright (ib for short), which comes in a community edition (ICE) and an enterprise edition (IEE). The two differ considerably, but for needs without strict real-time requirements the community edition is just about enough.
For the differences between the two, see the post: Infobright分享<1>:发展现状和ICE-IEE间区别
This post walks through installing and using ib (if boost or other base packages are too old, upgrade them yourself first).

First, download the source from the InfoBright site; the latest version is infobright-4.0.7-0-src-ice.tar.gz
Compile and install

tar zxvf infobright-4.0.7-0-src-ice.tar.gz
cd infobright-4.0.7
make  PREFIX=/usr/local/infobright  EDITION=community release
make PREFIX=/usr/local/infobright   EDITION=community install-release
mkdir /usr/local/infobright/conf  /usr/local/infobright/data /usr/local/infobright/logs
chown -R mysql.mysql /usr/local/infobright/data /usr/local/infobright/logs
cp src/build/pkgmt/my-ib.cnf /usr/local/infobright/conf/my-ib.cnf
/usr/local/infobright/bin/mysql_install_db --basedir=/usr/local/mysql --datadir=/usr/local/infobright/data --user=mysql

Edit my-ib.cnf

basedir = /usr/local/infobright
datadir = /usr/local/infobright/data
log-error = /usr/local/infobright/logs/bh.err

Start the ib instance

cd /usr/local/infobright
bin/mysqld_safe --defaults-file=conf/my-ib.cnf --user=mysql > /dev/null 2>&1 &

Set the initial password for the ib instance

/usr/local/infobright/bin/mysqladmin -u root password "123456"

Because this is the ICE community edition, data can only be imported with the IB loader (in practice, only manual imports from CSV files are supported).
Note: create table t (...) engine=brighthouse;
When creating a table, use the brighthouse engine; that is what activates the ib warehouse's features (infobright-4.0.7 also bundles MyISAM, MEMORY and other MySQL engines, and without an explicit engine you may end up with one of those).
An example follows, assuming the CSV data file is data.csv:

bin/mysql -u root -p123456
load data infile "/root/data.csv" into table t_data fields terminated by ',' enclosed by '"' escaped by '\\' lines terminated by '\n';

Once the import succeeds, you can run sum, avg, group by and other analytical queries against the data in ib.
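
For repeated imports it can be handier to script the load. Below is a rough MySQLdb sketch (ours; the t_data definition and connection settings are illustrative, so adjust the port/socket to whatever your my-ib.cnf uses):

import MySQLdb

# 5029 is the port commonly used in InfoBright's my-ib.cnf; adjust to yours.
conn = MySQLdb.connect(host='127.0.0.1', port=5029, user='root', passwd='123456', db='test')
cur = conn.cursor()
# brighthouse is the engine that enables InfoBright's warehouse features.
cur.execute("create table if not exists t_data(id int, name varchar(64)) engine=brighthouse")
cur.execute("load data infile '/root/data.csv' into table t_data "
            "fields terminated by ',' enclosed by '\"' escaped by '\\\\' "
            "lines terminated by '\\n'")
conn.commit()
cur.execute("select count(*) from t_data")
print 'loaded %s rows' % cur.fetchone()[0]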

This article drew on the following posts:
http://www.mysqlsky.com/201109/infobright-data-load-error
http://www.itpuk.net/?p=14
http://www.mysqlsky.com/201110/infobrigh-now-diff

Compiling and installing Percona 5.5 from source on CentOS

Download the Percona source code

http://www.percona.com/downloads/Percona-Server-5.5/LATEST/

I downloaded Percona-Server-5.5.21-rel25.0.tar.gz

tar zxvf Percona-Server-5.5.21-rel25.0.tar.gz
cd Percona-Server-5.5.21-rel25.0
cmake . -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local/percona -DWITH_EXTRA_CHARSETS=all -DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_PARTITION_STORAGE_ENGINE=1 -DWITH_READLINE=1 -DWITH_SSL=yes
(cmake -LHA lists the parameters cmake accepts)
make && make install
mkdir /usr/local/percona/etc
cp support-files/my-medium.cnf /usr/local/percona/etc/my.cnf
vim /usr/local/percona/etc/my.cnf

Because my VM already has a MySQL instance running on port 3306, change the port in /usr/local/percona/etc/my.cnf to 3506 and the socket to /tmp/percona.sock.
To use the InnoDB engine (in the Percona branch, "InnoDB" actually refers to the XtraDB engine, which is fully backward compatible with InnoDB), uncomment everything InnoDB-related below the line "# Uncomment the following if you are using InnoDB tables" in my.cnf, as shown below (the values here are only for demonstration; consult the relevant documentation for real settings):

# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /usr/local/percona/data
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /usr/local/percona/data
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

Initialize the database

/usr/local/percona/scripts/mysql_install_db --basedir=/usr/local/percona --datadir=/usr/local/percona/data --user=mysql

Start the Percona instance

/usr/local/percona/bin/mysqld_safe --defaults-file=/usr/local/percona/etc/my.cnf > /dev/null 2>&1 &

Set the initial root password

/usr/local/percona/bin/mysqladmin -u root -P 3506 -h 127.0.0.1 password "123456"

Then connect to the instance with:

/usr/local/percona/bin/mysql -h 127.0.0.1 -u root -P 3506 -p123456

Install the HandlerSocket plugin bundled with Percona

cd storage/HandlerSocket-Plugin-for-MySQL/
./configure --with-mysql-source=/root/tmp/Percona-Server-5.5.21-rel25.0 --with-mysql-bindir=/usr/local/percona/bin --with-mysql-plugindir=/usr/local/percona/lib/plugin 
make
make install
/usr/local/percona/bin/mysql -h 127.0.0.1 -u root -P 3506 -p123456
mysql> install plugin handlersocket SONAME "handlersocket.so";
mysql> show plugins;

Common uses of ALTER TABLE to change table structure in MySQL

Add a column to a table (one column per ADD clause)
alter table t_note add cityname varchar(20) not null [first | after id];
Add a regular or unique index
alter table  t_note  add index|unique newindex (username,cityname);
Add a primary key
alter table  t_note  add primary  key  (username,cityname);
Set a column's default value
alter table t_note alter name set default 'StamHe';
Change a column's definition (CHANGE can also rename the column)
alter table t_note change name newname varchar(30) not null default 'stamhe';
Change a column's definition (MODIFY cannot rename the column)
alter table t_note modify newname varchar(20) not null default 'StamHe';
Drop a column
alter table t_note drop sex;
Drop the primary key
alter table t_note drop primary key;
Drop a named index (unique indexes are also dropped with DROP INDEX)
alter table t_note drop index newindex;
Drop a column's default value
alter table t_note alter name drop default;
Rename a table
alter table t_note rename t_note2; 
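
To experiment with these statements, a short MySQLdb sketch like the following can help (ours; t_note here and the connection settings are illustrative):

import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', port=3306, user='root', passwd='123456', db='test')
cur = conn.cursor()
cur.execute("create table if not exists t_note(id int primary key, name varchar(30))")
# Apply two of the ALTERs from the list above.
cur.execute("alter table t_note add cityname varchar(20) not null after id")
cur.execute("alter table t_note alter name set default 'StamHe'")
cur.execute("show create table t_note")
# Print the CREATE TABLE statement reflecting the changes.
print cur.fetchone()[1]
conn.close()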

Master-slave replication in MySQL 5.1.7 and later

MySQL removed the master-host family of options from the configuration file as of 5.1.7, so replication on 5.1.7 and later can only be configured from the command line (personally I find this decision rather annoying; it makes setting up MySQL replication much less pleasant).
Master configuration file (assume 192.168.1.100, port 3306)

[client]
default-character-set = utf8
port    = 3306
socket  = /tmp/mysql3306.sock

[mysqld]
character-set-server = utf8
user    = mysql
port    = 3306
socket  = /tmp/mysql3306.sock
basedir = /usr/local/mysql
datadir = /webserver/mysql/3306/data
log-error = /webserver/mysql/3306/log/mysql_error.log
pid-file = /webserver/mysql/3306/mysql.pid
open_files_limit    = 10240
back_log = 600
max_connections = 5000
max_connect_errors = 6000
table_cache = 614
external-locking = FALSE
max_allowed_packet = 32M
sort_buffer_size = 1M
join_buffer_size = 1M
thread_cache_size = 300
#thread_concurrency = 8
query_cache_size = 512M
query_cache_limit = 2M
query_cache_min_res_unit = 2k
default-storage-engine = MyISAM
thread_stack = 192K
transaction_isolation = READ-COMMITTED
tmp_table_size = 246M
max_heap_table_size = 246M
long_query_time = 3

binlog_cache_size = 4M
binlog_format = MIXED
max_binlog_cache_size = 8M
max_binlog_size = 1G
expire_logs_days = 30
key_buffer_size = 256M
read_buffer_size = 1M
read_rnd_buffer_size = 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover

interactive_timeout = 120
wait_timeout = 120

skip-name-resolve

# the master's server-id, conventionally 1
server-id = 1
# MySQL 5.1.7+ no longer supports master-host and the related options below
#master-host     =   192.168.117.100
#master-user     =   slave
#master-password =   123456
#master-port     =  3306
# retry interval, in seconds, when the master connection is lost
#master-connect-retry = 10

log-bin = /webserver/mysql/3306/binlog/binlog
# databases to log for replication (one option line per database;
# these options do not split comma-separated lists, same below)
binlog-do-db=wordpress292
binlog-do-db=test
log-slave-updates
# skip these errors and keep replicating
slave-skip-errors = 1032,1062,126,1114,1146,1048,1396
# databases not to replicate (one per line)
replicate-ignore-db = mysql
replicate-ignore-db = information_schema
# databases to replicate (one per line)
replicate-do-db = wordpress292
replicate-do-db = test

# relay log base name
relay-log = /webserver/mysql/3306/relaylog/relaylog
# relay log index
relay-log-index = /webserver/mysql/3306/relaylog/relaylog
relay-log-info-file = /webserver/mysql/3306/relaylog/relaylog

innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 512M
innodb_data_file_path = ibdata1:256M:autoextend
innodb_file_io_threads = 4
innodb_thread_concurrency = 8
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 16M
innodb_log_file_size = 128M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
innodb_file_per_table = 0

#log-slow-queries = /webserver/mysql/3306/log/slow.log
#long_query_time = 10

[mysqldump]
quick
max_allowed_packet = 32M

Create an account for replication

mysql>GRANT REPLICATION SLAVE ON *.* TO 'slave'@'192.168.1.%' IDENTIFIED BY '123456';
mysql>flush privileges;

Lock all tables on the master to block further writes while the existing data is archived

mysql>flush  tables  with  read  lock;

Record the coordinates (log file name and offset) from which the slave will start replicating

mysql>show master status\G
File: binlog.000049
Position: 107
Binlog_Do_DB: wordpress292,test
Binlog_Ignore_DB: 
1 row in set (0.00 sec)

Archive the existing wordpress292 and test data

tar jcvf  data3306.tar.bz2  /webserver/mysql/3306/data/wordpress292/*   /webserver/mysql/3306/data/test/*

Unlock the master's tables

mysql>unlock tables;

Slave configuration file

[client]
default-character-set = utf8
port    = 3308
socket  = /tmp/mysql3308.sock

[mysqld]
character-set-server = utf8
user    = mysql
port    = 3308
socket  = /tmp/mysql3308.sock
basedir = /usr/local/mysql
datadir = /webserver/mysql/3308/data
log-error = /webserver/mysql/3308/log/mysql_error.log
pid-file = /webserver/mysql/3308/mysql.pid
open_files_limit    = 10240
back_log = 600
max_connections = 5000
max_connect_errors = 6000
table_cache = 614
external-locking = FALSE
max_allowed_packet = 32M
sort_buffer_size = 1M
join_buffer_size = 1M
thread_cache_size = 300
#thread_concurrency = 8
query_cache_size = 512M
query_cache_limit = 2M
query_cache_min_res_unit = 2k
default-storage-engine = MyISAM
thread_stack = 192K
transaction_isolation = READ-COMMITTED
tmp_table_size = 246M
max_heap_table_size = 246M
long_query_time = 3

binlog_cache_size = 4M
binlog_format = MIXED
max_binlog_cache_size = 8M
max_binlog_size = 1G
expire_logs_days = 30
key_buffer_size = 256M
read_buffer_size = 1M
read_rnd_buffer_size = 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover

interactive_timeout = 120
wait_timeout = 120

skip-name-resolve

# the slave's server-id; it must be unique among all servers in the topology
server-id = 2
# MySQL 5.1.7+ no longer supports master-host and the related options below
#master-host     =   192.168.117.100
#master-user     =   slave
#master-password =   123456
#master-port     =  3306
# retry interval, in seconds, when the master connection is lost
#master-connect-retry = 10

log-bin = /webserver/mysql/3308/binlog/binlog
binlog-do-db=wordpress292
binlog-do-db=test
log-slave-updates
# skip these errors and keep replicating
slave-skip-errors = 1032,1062,126,1114,1146,1048,1396
# databases to replicate (one per line)
replicate-do-db = wordpress292
replicate-do-db = test
# skip replicating the mysql and information_schema databases
replicate-ignore-db = mysql
replicate-ignore-db = information_schema
#replicate-ignore-db = test

# relay log base name
relay-log = /webserver/mysql/3308/relaylog/relaylog
# relay log index
relay-log-index = /webserver/mysql/3308/relaylog/relaylog
relay-log-info-file = /webserver/mysql/3308/relaylog/relaylog


innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 512M
innodb_data_file_path = ibdata1:256M:autoextend
innodb_file_io_threads = 4
innodb_thread_concurrency = 8
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 16M
innodb_log_file_size = 128M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
innodb_file_per_table = 0

#log-slow-queries = /webserver/mysql/3308/log/slow.log
#long_query_time = 10

[mysqldump]
quick
max_allowed_packet = 32M

Unpack the data archived from the master

tar   jxvf   data3306.tar.bz2

Start MySQL on the slave, set the starting replication coordinates, and start replication

mysql>change master to master_user='slave',master_password='123456',master_host='192.168.1.100',master_port=3306,master_log_file='binlog.000049',master_log_pos=107;
mysql>start slave;  
To stop replication:
mysql>stop slave;

Check whether replication is running correctly

mysql>show slave status\G

If both Slave_IO_Running and Slave_SQL_Running show Yes, replication is configured correctly.
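
This check is easy to automate. Here is a minimal MySQLdb sketch (ours; the host, port and credentials are assumptions) that inspects SHOW SLAVE STATUS on the slave:

import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(host='127.0.0.1', port=3308, user='root', passwd='123456',
                       cursorclass=MySQLdb.cursors.DictCursor)
cur = conn.cursor()
cur.execute('show slave status')
row = cur.fetchone()
if row is None:
    print 'this server is not configured as a slave'
elif row['Slave_IO_Running'] == 'Yes' and row['Slave_SQL_Running'] == 'Yes':
    print 'replication OK, lag = %s seconds' % row['Seconds_Behind_Master']
else:
    print 'replication broken: %s' % row['Last_Error']
conn.close()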

Working with MySQL from Python: insert, query, delete

#!/usr/bin/env python
#coding=utf-8

#MySQLdb query example
import MySQLdb

#conn = MySQLdb.connect(host='localhost',port=3306,user='root',passwd='44570253',db='python')
conn = MySQLdb.connect(host='localhost',port=3306,user='root',passwd='44570253')
cursor=conn.cursor()

#run SQL to (re)create a database
cursor.execute('drop database if exists python')
cursor.execute('create database python')
#select the database
conn.select_db('python')
#run SQL to create a table
#cursor.execute('drop table if exists test')
cursor.execute('create table test(id int,info varchar(100))')
#clear existing rows
#cursor.execute("delete from test")

value = [1,"inserted ?"]
#insert a single record
cursor.execute('insert into test values(%s,%s)',value)

values = []
#build the parameter values to insert
for i in range(20):
    values.append((i,'Hello mysqldb,I am recoder ' + str(i)))

#insert multiple records at once
cursor.executemany('insert into test values(%s,%s)',values)
#conn.select_db('python')
count = cursor.execute('select * from test')
print 'there are %d records in total'%count
#fetch one record; each record is returned as a tuple
print 'fetch a single record:'
result=cursor.fetchone()
print result
print 'ID: %s info: %s'%(result[0],result[1])
#fetch 5 records; because of the fetchone() above, the cursor already points
#at the second record, so these start from the second record
print 'fetch 5 records:'
results=cursor.fetchmany(5)
for rec in results:
    print "ID: %s info: %s"%(rec[0],rec[1])
#    print rec

print 'fetch all results:'
#reset the cursor position; 0 is the offset, mode is 'absolute' or 'relative' (default: relative)
cursor.scroll(0,mode='absolute')
#fetch every result
all_results=cursor.fetchall()
for rec in all_results:
    print "ID: %s info: %s"%(rec[0],rec[1])

conn.close()

Installing the HandlerSocket plugin for MySQL

HandlerSocket plugin download link

wget -c http://github.com/ahiguti/HandlerSocket-Plugin-for-MySQL/tarball/master

Install HandlerSocket

tar zxvf HandlerSocket.tar.gz
cd HandlerSocket
./autogen.sh
./configure --prefix=/usr/local/handlersocket --with-mysql-source=/data/nginx-v6/mysql-5.5.3-m3 --with-mysql-bindir=/usr/local/mysql/bin/ --with-mysql-plugindir=/usr/local/mysql/lib/mysql/plugin/
make
make install

Load the HandlerSocket plugin in MySQL

mysql> install plugin handlersocket soname "handlersocket.so";
mysql> show plugins;
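
Once the plugin is loaded (the HandlerSocket docs also require listener settings such as loose_handlersocket_port = 9998 in my.cnf before the daemon accepts connections), you can talk to it directly over TCP. Below is a rough Python sketch of its tab-separated line protocol for a primary-key lookup; the database, table and column names here are hypothetical:

import socket

def hs_request(sock, fields):
    # HandlerSocket requests are TAB-separated fields terminated by a newline.
    sock.sendall('\t'.join(fields) + '\n')
    return sock.recv(4096).strip().split('\t')

# 9998 is HandlerSocket's conventional read-only port.
s = socket.create_connection(('127.0.0.1', 9998))
# Open index 1 on test.t_note's PRIMARY key, exposing columns id and name.
print hs_request(s, ['P', '1', 'test', 't_note', 'PRIMARY', 'id,name'])
# Fetch the row whose primary key equals 1; a leading '0' means success.
print hs_request(s, ['1', '=', '1', '1'])
s.close()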