MySQL 'max_allowed_packet' problem

I built a website myself to monitor port traffic and CPU utilization on our company's core switches. While using the site I found that PHP frequently fails to write data into the database; debugging shows the message "Got a packet bigger than 'max_allowed_packet' bytes". I then edited the MySQL configuration file to set max_allowed_packet=512M, saved it, and restarted MySQL. Afterwards, show VARIABLES like 'max_allowed_packet'; in the MySQL console does report 512M, so max_allowed_packet inside MySQL really has been changed to 512M. Yet the write failures still occur, and debugging again shows Got a packet bigger than 'max_allowed_packet' bytes. The failures follow no discernible pattern: restarting the mysqld process usually fixes things, but after a while, typically 5 to 6 hours, the fault reappears and every subsequent write fails. The amount of data written is actually quite small; it is a JSON-formatted record of the switch ports' traffic, stored in the database in a MEDIUMTEXT column. I copied out 8 hours' worth of this data and it measured 45 KB, so in theory a full day's data should never exceed 300 KB. That is why this behavior has me thoroughly confused.
Below is the [mysqld] section of the my.cnf file:

```
port = 3306
socket = /tmp/mysql.sock
datadir = /usr/local/mysql/var
skip-external-locking
skip-name-resolve
skip-grant-tables
key_buffer_size = 64M
max_allowed_packet = 32M
table_open_cache = 512
sort_buffer_size = 8M
net_buffer_length = 312K
read_buffer_size = 4M
read_rnd_buffer_size = 2M
```
One more note: the web server executes a PHP page every 5 minutes, which means the data is read and then written once every 5 minutes. Could the problem be related to this PHP page being run over and over?
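One way to narrow this down is to have that 5-minute job log, right before each write, both the payload size and the max_allowed_packet value the server is enforcing for that very connection. A minimal mysqli sketch; the credentials and the `port_traffic` table are hypothetical placeholders, not taken from the post:

```php
<?php
// Hypothetical connection details and table, for illustration only.
$db = mysqli_connect('127.0.0.1', 'monitor', 'secret', 'traffic');

// Ask the server what limit is in force for THIS connection right now.
$res = mysqli_query($db, "SHOW VARIABLES LIKE 'max_allowed_packet'");
$limit = (int) mysqli_fetch_assoc($res)['Value'];

$samples = ['Gi1/0/1' => ['in' => 123456, 'out' => 654321]];  // stand-in for the real 5-minute sample
$json = json_encode($samples);
error_log(sprintf('payload=%d bytes, max_allowed_packet=%d', strlen($json), $limit));

$stmt = mysqli_prepare($db, 'INSERT INTO port_traffic (payload) VALUES (?)');
mysqli_stmt_bind_param($stmt, 's', $json);
if (!mysqli_stmt_execute($stmt)) {
    error_log('insert failed: ' . mysqli_stmt_error($stmt));
}
```

If the logged limit ever drops back to 32M, which is the value still written in the my.cnf excerpt above, shortly before the failures begin, then something is restarting mysqld against the old file (or a different option file is being read) rather than the packets genuinely outgrowing 512M.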

I am not well versed in the details of MySQL configuration, so any help is appreciated.

2 answers

Recently, before a project went live, I exported the production database and tried to restore it locally through a GUI tool, but the process kept failing with errors. It turned out the data file was too large (only a bit over 6 MB, oddly enough); I was told that splitting the file into two would let it import, and later I switched to the command line and importing via `source` also succeeded.
Later I did some research online and found that it was caused by ......
The answer is here: Solving the MySQL max_allowed_packet problem
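For completeness, the usual runtime fix looks like the sketch below. Two properties of max_allowed_packet are worth keeping in mind: a client connection receives its value at connect time, so SET GLOBAL only affects connections opened afterwards, and the change is lost on the next server restart unless my.cnf is updated as well. Connection details here are placeholders:

```php
<?php
// Raise the server-wide limit to 64 MB; requires a privileged account.
$admin = mysqli_connect('127.0.0.1', 'root', 'secret');
mysqli_query($admin, 'SET GLOBAL max_allowed_packet = 67108864');
mysqli_close($admin);

// Existing connections keep the old limit; open a fresh one to pick it up.
$db = mysqli_connect('127.0.0.1', 'root', 'secret');
$res = mysqli_query($db, "SHOW VARIABLES LIKE 'max_allowed_packet'");
echo mysqli_fetch_assoc($res)['Value'];  // 67108864 on the new connection
```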

Other related questions
MySQL access exception, prompting to set max_allowed_packet
com.mysql.jdbc.PacketTooBigException: Packet for query is too large (1168 > 1024). You can change this value on the server by setting the max_allowed_packet' variable. max_allowed_packet has already been changed to 16M and the change took effect, yet this exception is still thrown. After a restart everything is normal again, then after running for a while it reappears. Please advise.
Packet for query is too large (60 > -1).
The connection pool is tomcat-jdbc and the database is MariaDB 10, with amoeba in between as a read/write-splitting proxy. Occasionally this error is thrown: Caused by: com.mysql.jdbc.PacketTooBigException: Packet for query is too large (60 > -1). You can change this value on the server by setting the max_allowed_packet' variable. On MariaDB, SHOW VARIABLES LIKE 'max_allowed_packet'; returns max_allowed_packet 67108864. I have no idea under what circumstances a negative value can arise; could an expert help track this down?
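The -1 in the message is the limit as the client side understood it, so one way to localize the fault is to read the variable both through amoeba and directly from a backend and compare. If the proxied path reports a different or unparsable value, the proxy is mangling the variable exchange during connection setup. A small sketch; hostnames, ports, and credentials are placeholders, not from the post:

```php
<?php
// Compare max_allowed_packet as seen through the proxy vs a backend directly.
$targets = ['via amoeba' => ['10.0.0.5', 8066], 'backend direct' => ['10.0.0.6', 3306]];
foreach ($targets as $label => [$host, $port]) {   // PHP 7.1+ destructuring
    $db = mysqli_connect($host, 'app', 'secret', 'test', $port);
    $res = mysqli_query($db, "SHOW VARIABLES LIKE 'max_allowed_packet'");
    printf("%s: %s\n", $label, mysqli_fetch_assoc($res)['Value']);
    mysqli_close($db);
}
```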
Navicat connecting to MySQL errors with 2013 lost connection to MySQL server during query
The database had no problems before: I could connect normally, open databases, and create tables. Recently, connecting remotely with Navicat to the MySQL database on a Linux server reports an error. ![screenshot](https://img-ask.csdn.net/upload/201902/14/1550112241_396484.jpg) I tried increasing max_allowed_packet = 500M and running set global max_allowed_packet=1024*1024*16; neither works, and none of the various fixes found online work either. Going into the database locally on the Linux box and running `use table`, the data table in question did not open; it just printed Database changed. Please help!! Thanks~ This is the database I connected to; the error appears when I click a data table. ![screenshot](https://img-ask.csdn.net/upload/201902/14/1550128797_200178.jpg)
Connecting JDBC to a MySQL database through an SSH tunnel
When connecting JDBC to a MySQL database through an SSH tunnel, obtaining the connection always fails complaining the query is too large. What is going on? The details: Exception in thread "main" com.mysql.jdbc.PacketTooBigException: Packet for query is too large (4739923 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
MySQL: Lost connection to MySQL server during query
I have MySQL client code written in C running on Linux, and it keeps failing with "Lost connection to MySQL server during query". Following methods found online, I edited my.cnf to change max_allowed_packet, added 'skip-name-tables', and adjusted parameters such as 'wait_timeout', none of which had any effect. The MySQL log contains no error messages either, and I honestly do not know what else to change.
All options exhausted, so I have to ask. Yet another MySQL problem: Data truncation: Data too long for column
MySQL 5.1. I built an editor with KindEditor; users can type text into it and submit, and it is saved into the database in a column of type TEXT. At first, pasting in a Word document failed with Data truncation: Data too long for column 'content' row 1. I tried to rule things out as follows:
1. Unified MySQL, the script, the column, and the JSP URL connection info on UTF-8.
2. Set MySQL's max_allowed_packet to 16M; the submit went through. Very encouraging.
3. Pasted a somewhat larger Word document into KindEditor (it took a moment); the source view shows 176K, and on submit the Data truncation: Data too long for column exception came back. Setting max_allowed_packet to 64M and 128M made no difference.
4. The driver was 5.1.5 at first; I later switched to the 5.1.12 driver, same error.
5. Replacing the Word content with plain text, the submit works again. I am baffled...
Can the file really be too big? It is only 100-odd KB. Surely TEXT can hold that??
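One thing stands out in the steps above: a TEXT column stores at most 65,535 bytes, so a 176 KB HTML fragment cannot fit no matter how large max_allowed_packet is; "Data too long for column" is the column limit, not the packet limit (plain text likely passed because it is the pasted Word HTML markup that inflates the size). A sketch of the usual fix, with hypothetical table, column, and connection names:

```php
<?php
// Widen the column: MEDIUMTEXT holds up to 16 MB.
// Table/column names are assumptions for illustration.
$db = mysqli_connect('127.0.0.1', 'root', 'secret', 'cms');
mysqli_query($db, 'ALTER TABLE article MODIFY content MEDIUMTEXT');
```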
Help: mysql-5.6 Galera cluster setup problem; the database will not start after adding the cluster parameters
Error:
```
2016-08-18 13:49:00 24069 [ERROR] /usr/local/mysql/bin/mysqld: unknown variable 'wsrep_provider=/usr/lib64/libgalera_smm.so'
2016-08-18 13:49:00 24069 [ERROR] Aborting
```
Configuration:
```
[mysqld]
pid-file=/usr/local/mysql/mysql_db3.pid
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name="my_wsrep_cluster"
wsrep_cluster_address="gcomm://192.168.0.15,192.168.0.16,192.168.0.17"
wsrep_node_name=17
wsrep_node_address=17
wsrep_slave_threads=4
wsrep_sst_method=rsync
wsrep_sst_auth=sst:bjyappam
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-external-locking
key_buffer_size = 256M
max_allowed_packet = 1M
table_open_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size= 16M
innodb_buffer_pool_size = 5000M
thread_concurrency = 10
character_set_server=utf8
max_allowed_packet=256M
log-bin=mysql-bin
lower_case_table_names=1
basedir=/usr/local/mysql
datadir=/data1/data
binlog_format=row
server-id = 3
innodb_flush_log_at_trx_commit = 0
```
About the my.cnf configuration of a server's MySQL; please take a look
![screenshot](https://img-ask.csdn.net/upload/201703/30/1490842768_647422.png) ![screenshot](https://img-ask.csdn.net/upload/201703/30/1490842794_717197.png) ![screenshot](https://img-ask.csdn.net/upload/201703/30/1490842805_315889.png)
Current configuration:
```
[mysqld]
port = 3306
socket = /tmp/mysql.sock
#skip-locking
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
user=mysql
local-infile=0
#skip-networking
default-character-set=utf8
myisam-recover = backup,force
myisam_sort_buffer_size = 128M
#skip-locking is deprecated, use skip-external-locking
skip-external-locking
skip-name-resolve
connect_timeout = 15
ft_min_word_len = 1
wait_timeout = 28800
#log-error = /data/mysql/error.log
thread_cache = 8
query_cache_size = 32M
old_passwords=1
max_connections = 3000
max_user_connections = 3000
max_connect_errors = 10000
#set-variable =log_bin_trust_function_creators=1
log_bin_trust_function_creators=1
thread_concurrency = 4
ft_min_word_len = 1
server-id = 1
expire_logs_day = 30

[mysqldump]
quick
max_allowed_packet = 16M

[isamchk]
key_buffer = 128M
sort_buffer_size = 128M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 128M
sort_buffer_size = 128M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

[mysql]
default-character-set=utf8
# end
```
Experts, please take a look at what could be optimized. Thanks.
Concurrency around 1000; data optimization
Linux server: 64 GB RAM, 320 GB disk, 20 Mbps bandwidth; nginx, PHP, MySQL. MySQL holds roughly 100 million rows. Concurrency is currently 10000, the BT-Panel dashboard shows the load pinned at 100%, and there is heavy business processing and data access (frequent inserts, deletes, updates, and selects). After the server runs for a while, MySQL crashes; after restarting MySQL, connecting with mysql -uroot -p takes a very long time after the password is entered, and sometimes it hangs outright so the database cannot be reached at all.

nginx configuration:
```
user www www;
worker_processes auto;
error_log /www/wwwlogs/nginx_error.log crit;
pid /www/server/nginx/logs/nginx.pid;
worker_rlimit_nofile 51200;

events
{
    use epoll;
    worker_connections 51200;
    multi_accept on;
}

http
{
    include mime.types;
    #include luawaf.conf;
    include proxy.conf;
    default_type application/octet-stream;
    server_names_hash_bucket_size 512;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 50m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;
    fastcgi_connect_timeout 100;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors on;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain application/javascript application/x-javascript text/javascript text/css application/xml;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_disable "MSIE [1-6]\.";
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_conn_zone $server_name zone=perserver:10m;
    server_tokens off;
    access_log off;
```
MySQL configuration file:
```
#password = your_password
port = 3306
socket = /mysql_log/mysql.sock

[mysqld]
binlog_cache_size = 256K
thread_stack = 512K
join_buffer_size = 8192K
query_cache_type = 0
max_heap_table_size = 2048M
port = 3306
socket = /mysql_log/mysql.sock
datadir = /www/server/data
default_storage_engine = InnoDB
performance_schema_max_table_instances = 400
table_definition_cache = 400
skip-external-locking
key_buffer_size = 1024M
max_allowed_packet = 100G
table_open_cache = 2048
sort_buffer_size = 4096K
net_buffer_length = 4K
read_buffer_size = 4096K
read_rnd_buffer_size = 2048K
myisam_sort_buffer_size = 256M
thread_cache_size = 256
query_cache_size = 0M
tmp_table_size = 2048M
sql-mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
explicit_defaults_for_timestamp = true
#skip-name-resolve
max_connections = 500
max_connect_errors = 100
open_files_limit = 65535
wait_timeout=100
interactive_timeout=100
#log-bin=mysql-bin
#binlog_format=mixed
server-id = 1
expire_logs_days = 1
slow_query_log=1
slow-query-log-file=/www/server/data/mysql-slow.log
long_query_time=3
#log_queries_not_using_indexes=on
early-plugin-load = ""
innodb_data_home_dir = /www/server/data
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /www/server/data
innodb_buffer_pool_size = 4096M
innodb_log_file_size = 2048M
innodb_log_buffer_size = 0M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
innodb_max_dirty_pages_pct = 90
innodb_read_io_threads = 32
innodb_write_io_threads = 32

[mysqldump]
quick
max_allowed_packet = 500M

[mysql]
no-auto-rehash

[myisamchk]
key_buffer_size = 1024M
sort_buffer_size = 16M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
#interactive-timeout
```
An overly long INSERT statement in an SQL script fails on import
Today, while importing an SQL script into MySQL, I got the error "MySQL server has gone away". On inspection, the cause is that an "insert ... value ..." statement carries too much data to be inserted. I then modified the configuration file: 1. tmp_table_size 2. max_allowed_packet. Still the same error. Would some expert please solve this; it is genuinely distressing. PS: preferably by adjusting MySQL configuration parameters, because the SQL was not written by me.
MySQL insert fails: 100213 20:22:16 [ERROR] /usr/libexec/mysqld: Sort aborted
I am doing inserts on a server with 24 GB RAM and a 4-core CPU, and they fail. The exception thrown by the program is: java.sql.SQLException: Unexpected eof found when reading file '/home/tmp/MYvFQFtT' (Errcode: 0). Checking the MySQL log, the error message is: 100221 10:03:53 [ERROR] /usr/libexec/mysqld: Sort aborted. The corresponding my.cnf is:
```
character-set-server = utf8
key_buffer = 3072M
max_allowed_packet = 512M
table_cache = 1024
query_cache_size = 1024M
sort_buffer_size = 3072M
read_buffer_size = 1024M
read_rnd_buffer_size = 1024M
myisam_sort_buffer_size = 1024M
net_buffer_length = 128M
open_files_limit = 10240
tmp_table_size=1024M
max_tmp_tables=100
thread_stack =512M
join_buffer_size=1024M
max_connections = 250
max_user_connections = 200
wait_timeout = 259200
interactive_timeout= 259200
net_read_timeout = 500
net_write_timeout = 600
```
Is this related to my configuration? This kind of operation never used to fail. It started failing after I changed a few parameters recently, but changing them back leaves things exactly the same.
MySQL becomes unreachable under heavy traffic
MySQL under heavy traffic becomes unreachable. Hardware: 16-core CPU, 32 GB RAM, 100 Mbps bandwidth. Software: PHP 5.6 + MySQL 5.7. Purpose: serving API calls. Basic MySQL configuration:
```
[client]
#password = your_password
port = 3306
socket = /tmp/mysql.sock

[mysqld]
binlog_cache_size = 192K
thread_stack = 384K
join_buffer_size = 4096K
query_cache_type = 0
max_heap_table_size = 1024M
port = 3306
socket = /tmp/mysql.sock
datadir = /www/server/data
skip-external-locking
performance_schema_max_table_instances=400
table_definition_cache=400
key_buffer_size = 512M
max_allowed_packet = 100G
table_open_cache = 1024
sort_buffer_size = 2048K
net_buffer_length = 8K
read_buffer_size = 2048K
read_rnd_buffer_size = 1024K
myisam_sort_buffer_size = 64M
thread_cache_size = 192
query_cache_size = 0M
tmp_table_size = 1024M
explicit_defaults_for_timestamp = true
#skip-networking
skip-name-resolve
skip-grant-tables
max_connections = 2800
max_connect_errors = 100
open_files_limit = 65535
sql-mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
#log-bin=mysql-bin
#binlog_format=mixed
server-id = 1
expire_logs_days = 10
slow_query_log=1
slow-query-log-file=/www/server/data/mysql-slow.log
long_query_time=3
back_log = 900
#log_queries_not_using_indexes=on
#loose-innodb-trx=0
#loose-innodb-locks=0
#loose-innodb-lock-waits=0
#loose-innodb-cmp=0
#loose-innodb-cmp-per-index=0
#loose-innodb-cmp-per-index-reset=0
#loose-innodb-cmp-reset=0
#loose-innodb-cmpmem=0
#loose-innodb-cmpmem-reset=0
#loose-innodb-buffer-page=0
#loose-innodb-buffer-page-lru=0
#loose-innodb-buffer-pool-stats=0
#loose-innodb-metrics=0
#loose-innodb-ft-default-stopword=0
#loose-innodb-ft-inserted=0
#loose-innodb-ft-deleted=0
#loose-innodb-ft-being-deleted=0
#loose-innodb-ft-config=0
#loose-innodb-ft-index-cache=0
#loose-innodb-ft-index-table=0
#loose-innodb-sys-tables=0
#loose-innodb-sys-tablestats=0
#loose-innodb-sys-indexes=0
#loose-innodb-sys-columns=0
#loose-innodb-sys-fields=0
#loose-innodb-sys-foreign=0
#loose-innodb-sys-foreign-cols=0
default_storage_engine = InnoDB
innodb_data_home_dir = /www/server/data
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /www/server/data
innodb_buffer_pool_size = 1024M
innodb_log_file_size = 512M
innodb_log_buffer_size = 128M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 100
innodb_max_dirty_pages_pct = 90
innodb_read_io_threads = 16
innodb_write_io_threads = 16

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[myisamchk]
key_buffer_size = 256M
sort_buffer_size = 4M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout
```
As soon as concurrency rises a bit, one blocked query seems to be enough to make the database unreachable, or PHP reports: connect() to unix:/tmp/php-cgi-56.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: . The problem should be on the database side; the database is a dedicated server, the web tier has its own servers, and access goes over the internal network, so bandwidth is not a factor.
Parameters under [mysql] in the option file do not take effect when MySQL starts
The parameters in my.cnf are as follows:
```
[mysqld]
(omitted here .....)

[mysql]
#no-auto-rehash
prompt='\u@\h:\p\d mysql>'
#max_allowed_packet = 1024M    ## affects the speed of MySQL imports
#default_character_set = gbk
```
Startup command:
```
mysqld --defaults-file=/etc/my.cnf &
```
After startup, the prompt has not taken effect:
```
[root@mysqldb15 ~]# mysql -uroot -p -S /tmp/mysql3306.sock
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.29-log MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
```
Looking forward to a reply.
Why can't MySQL 5.7.19 on Windows start its log?
I installed the zip-archive edition of MySQL 5.7.19. It starts normally and I can get into mysql, but the log never starts. ![screenshot](https://img-ask.csdn.net/upload/201709/20/1505890481_954674.png) Below is the content of my my.ini; I hope the experts can give me some guidance:
```
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html
# *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
# *** default location during install, and will be replaced if you
# *** upgrade to a newer version of MySQL.

[client]
default-character-set=utf8

[mysqld]
port=3306
log-bin =mysql-bin
default-storage-engine=INNODB
character-set-server=utf8
collation-server=utf8_general_ci
basedir ="E:\MySQL\mysql-5.7.19-winx64/"
datadir ="E:\MySQL\mysql-5.7.19-winx64/data/"
tmpdir ="E:\MySQL\mysql-5.7.19-winx64/data/"
socket ="E:\MySQL\mysql-5.7.19-winx64/data/mysql.sock"
log-error="E:\MySQL\mysql-5.7.19-winx64/data/mysql_error.log"
server-id =1
#skip-locking
max_connections=100
table_open_cache=256
query_cache_size=1M
tmp_table_size=32M
thread_cache_size=8
innodb_data_home_dir="E:\MySQL\mysql-5.7.19-winx64/data/"
innodb_flush_log_at_trx_commit =1
innodb_log_buffer_size=128M
innodb_buffer_pool_size=128M
innodb_log_file_size=10M
innodb_thread_concurrency=16
innodb-autoextend-increment=1000
join_buffer_size = 128M
sort_buffer_size = 32M
read_rnd_buffer_size = 32M
max_allowed_packet = 32M
explicit_defaults_for_timestamp=true
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
skip-grant-tables
#sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
```
Error starting MySQL on Ubuntu 14.04.1
The Ubuntu version is 14.04.1. MySQL was installed with apt-get install mysql and errors on startup: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). The my.cnf configuration is as follows:
```
#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
#
# * Basic Settings
#
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
#
# * Fine Tuning
#
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
#max_connections = 100
#table_cache = 64
#thread_concurrency = 10
#
# * Query Cache Configuration
#
query_cache_limit = 1M
query_cache_size = 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
#log_slow_queries = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]
#no-auto-rehash	# faster start of mysql but no tab completition

[isamchk]
key_buffer = 16M

#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
```
There are no files at all under /var/run/mysqld/, and /var/lib/mysql/ contains debian-5.5.flag ibdata1 ib_logfile0 ib_logfile1 mysql performance_schema. The mysqld.sock file is nowhere to be found.
Inserting a BLOB into MySQL from PHP raises a Fatal Error with no message at all. Why?
As the title says: there is no error message whatsoever. I have tried both a statement and a direct query; neither works...
```
// My own file helper, roughly equivalent to file_get_contents()
$fdata = $SYS->FileIO->file_read($_FILES['mediafile']['tmp_name']);

// Create a statement; $DB is also a class I wrote myself, and _CONNECTER is the mysqli instance
$stmt = mysqli_prepare($DB->_CONNECTER, "INSERT INTO filedata (checksum, filesize, fileext, uploadtime, filedata) VALUES (?, ?, ?, ?, ?)");

// Bind the variables
mysqli_stmt_bind_param($stmt, "sissb", $checksum, $_FILES['mediafile']['size'], $fileext, date('Y-m-d H:i:s'), $DB->_CONNECTER->real_escape_string($fdata));

// Execute
mysqli_stmt_execute($stmt);

// Close the statement
mysqli_stmt_close($stmt);
```
Then it just throws a 500 error at me. The PHP error log has no detailed message at all; the complete log entry is this:

> [Sun Apr 05 21:15:17 2020] [error] [client 111.196.2.224] PHP Fatal error: Query Errors:\nINSERT INTO GDMN_filedata (checksum, filesize, fileext, uploadtime, filedata) VALUES ('fcbb8d2c6bd3561d54ecde7289982d56', 153727193, 'mp4', '2020-04-05 21:14:34', '\\0\\0\\0 ftypisom\\0\\0\x02\\0isomiso2avc1mp41\\0\\0\\0\bfree\t\x1e\xc6Lmdat\xde\x02\\0Lavc58.54.100\\0B \b\xc1\x188\\0\\0\\0\x0c\x06\\0\x07\x80u0\\0\\0\x03\\0@\x80\\0\\0\\0\b\x06\x01\x04\\0\\0\b\x10\x80\\0\\0\x0fwe\xb8\x04\x17\xff\xf3\x03a\xeb\x7f2\xbcD~\xf12\xe26\xa9\xc3T\xfe\xc4)\xceK,\x0e\xd1m\\"\xcc\xf0b\xb9\xc7\xd8\x0cU\x13Z-\xdeq@m)\xc0\xa0\\n\xb4\xbd\xb8\xdf\xd5\xa9\xd7\x85\xbb\xee\x06F\xb3_\xf3\xf8.\x0c\xd84n3EJ\xde\\0\\0\x03\\0\\0\x03\\0\\0\x03\\0\x01\xb0\xbe\xa1\xa2\xb7\xc0\x06\xdf$\x06\xd4f\xd7\xda\xa1<P\xeb\xec\xdf\xcc\x1d{\xb2M\va\x8e;\xeb\xc3\v\x8c\xfax\xab\xd9\xde\xec)\xcb\xb5\xf6/,\x03\xfea,\x97\xb5L\xcf\x03\x856\xe6/?E\xc3\xd2\xb2\xf9s\x1e\xa0\xed\xe0\xd7\x94\x9e\xa9mS\x89\x9b\xe3\xb6P\x9c@c\xfe\xaf\x93\xdf\x1c\xb1V\x8c\xba\b\xb0\xab\xd9I\x1cr\xc9\xe5\xb9 ~0\xc6\x82\xd0me\x0e\xe0\xe2\x8d:w-\xcf\xa3\xe2\xfd\\'\xc8\x9fdqp\xca\x9d\x9bT\x8fx\\r\x90\tr\xf7_\x7fn\xc2 Ms\xba\x01;Q\x1f\xe7\x84\xf4\xea\xb3\x82\x95X\x8f\xa1\xed\xfe\xd4\xd3i\xae\xd2\x83Pg5VA\xc4\xfc?8\xd0\xfaYG\xc0\xc5\xd0t\xd1\xb2W\x81\xf5\xe7\xb7\xb9\x18\xb8)\xb3\xa1M\\0\xec\xec\xc6!8\xd1]/\x17\xeb\xcfN\xc6h\xf0~\xd6\xd0\xf3\x85\x8b\xa3\xd21u\x7fJf\xe9\b\x18\xbd\x1f\vh\xdcV)\\"\xc2\xa6\x16\x8c\xa0d\xa7\xe6 u\xd5\x94\x10\x11v_l\xdd\xd7d\x1ds\x04q\xc6\x80Q\x1d\xc2#\xc33mxHV\x85)\xbb\x9f\xff\xfd\xb3\xde\xb7G\x101h%\t\x13\xd9<\xeb/tO.\\0U\xa8\x7f\x91b\x13\x92\x03./yd\xddgd\xb3\xcejh\xfbm+\xa8\xe5l+\\\\1\xfa\xa9\x0fN\xb3\xfaT/\xd2\xb7\xb5\x07\x9eM\xc13\xbc\xf4SVD\xb1cV7\x99\xa7S\xc8\xed\xc8b~\xa9\x19C\xb3A)Rp\\n\x8f\x9e\x94\xeb\xe4\xe8e\xf4:\xae7\x96\xeck\x15\x9c\xf5\xb8\xce\xa5\x02\x19\xcb\xfa\xa4\xad\x19\x0c\xdb\xf4\x12\\n\x90}\x0f\xdf7n\\'\x8b*8&Qs;\xc1\x15\xee@\x9ac\xdd\xf9\xb0eGN?\xdd\xc4\x17<\x1b\x96\x06h\xc0N-\xc4V\xcb\xfd\b\xe7\xcd\xf1\x07\xbd{\xc0b\xcc\x17\x98\xeb\xf8\xdc}&\xfd\x1e\x13\x89L\xdd\xfb\xcc-\xee\xe1\\0V\xcaBp\x02\\\\\xdex\xa9\xfd\xa0\x8a-\xdec\x7f\x16\xcd\x0c\xf14\xd4:=\x9c\xf2\xfb\xcc\xc3A\xef\xdfl\x14\x8a\xcb\xa7\xf5\x97\x05\x1f\xc9\xe3\x11\xb5\x84\x86\xe9\xc8|`\x90\x8c\x18B\xf3\xd1w6S\x82y\x879\x1e\x8e\xa0\xc1\x16\xac$n\x9eM\x99\x84\x1e\xd4>:#\x074\xacJ\xff\xb36p\xb4\x9c\xbaX\x81\xf3\x955@\xba\xb8H\xfc\\r\xdb\x9c6\x87\xdc[\xb6Ev\xe7\x89+\xdb\x90\xf5\x80\\0\\0\x03\\0\x84=J\xd4\xaa\xb9\xd3\x15\\r\x98\xf5| in /data/www/vip.gdmn/ysqj/mini/jt/includes/ob.mysqli.php on line 51, referer: https://servicewechat.com/wx2c9334cdeea730dd/devtools/page-frame.html

This is an mp4 file. Other mp4s I upload are fine; only this one fails, so it is definitely not an upload-file-size or max-allowed-packet kind of error, since files larger than this one all go through. It is only this file. Given that, I also wondered whether the file contains special characters, but it has already been through real_escape_string and still fails. Any other ideas? Thanks!
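Not part of the original post, but worth noting for BLOBs of this size: with a prepared statement the data should not go through real_escape_string() at all (the binary protocol keeps data and SQL separate, so escaping only corrupts or inflates the bytes), and a parameter bound with type 'b' is meant to be fed in chunks via mysqli_stmt_send_long_data(). That avoids building one giant SQL string in memory, though the server still checks the accumulated parameter against max_allowed_packet, so the limit must cover the file size. A hedged sketch with placeholder names:

```php
<?php
// Sketch: stream a large file into a BLOB column in chunks. The table mirrors
// the question's filedata table; credentials and the file path are placeholders.
$db   = mysqli_connect('127.0.0.1', 'app', 'secret', 'media');
$stmt = mysqli_prepare($db, 'INSERT INTO filedata (checksum, filedata) VALUES (?, ?)');

$file     = '/tmp/upload.mp4';
$checksum = md5_file($file);
$blob     = null;  // placeholder for the 'b' parameter; the real bytes are streamed below
mysqli_stmt_bind_param($stmt, 'sb', $checksum, $blob);

$fh = fopen($file, 'rb');
while (!feof($fh)) {
    // Parameter index 1 is the second '?'; send raw bytes, no escaping.
    mysqli_stmt_send_long_data($stmt, 1, fread($fh, 1 << 20));
}
fclose($fh);

mysqli_stmt_execute($stmt);
mysqli_stmt_close($stmt);
```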
MySQL crashed; asking an expert to explain why
============== my.ini configuration START ==============
```
# Example MySQL config file for medium systems.
#
# This is for a system with little memory (32M - 64M) where MySQL plays
# an important part, or systems up to 128M where MySQL is used together with
# other programs (such as a web server)
#
# You can copy this file to
# /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is C:\mysql\data) or
# ~/.my.cnf to set user-specific options.
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.

# The following options will be passed to all MySQL clients
[client]
#password = your_password
port = 3306
socket = /tmp/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
key_buffer = 16M
max_allowed_packet = 10M
table_cache = 128
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 64M
max_connections = 1000
default-character-set = utf8
thread_cache_size = 8
query_cache_size = 0
query_cache_type = 0

# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
#
#skip-networking

# Disable Federated by default
skip-federated

# Replication Master Server (default)
# binary logging is required for replication
log-bin=mysql-bin

# binary logging format - mixed recommended
binlog_format=mixed

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id = 1

# Replication Slave (comment out master section to use this)
#
# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
#    the syntax is:
#
#    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
#    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
#
#    where you replace <host>, <user>, <password> by quoted strings and
#    <port> by the master's port number (3306 by default).
#
#    Example:
#
#    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT= 3306,
#    MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, in case you choose this method, then
#    start replication for the first time (even unsuccessfully, for example
#    if you mistyped the password in master-password and the slave fails to
#    connect), the slave will create a master.info file, and any later
#    change in this file to the variables' values below will be ignored and
#    overridden by the content of the master.info file, unless you shutdown
#    the slave server, delete master.info and restart the slaver server.
#    For that reason, you may want to leave the lines below untouched
#    (commented) and instead use CHANGE MASTER TO (see above)
#
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
#server-id = 2
#
# The replication master for this slave - required
#master-host = <hostname>
#
# The username the slave will use for authentication when connecting
# to the master - required
#master-user = <username>
#
# The password the slave will authenticate with when connecting to
# the master - required
#master-password = <password>
#
# The port the master is listening on.
# optional - defaults to 3306
#master-port = <port>
#
# binary logging - not required for slaves, but recommended
#log-bin=mysql-bin

# Point the following paths to different dedicated disks
#tmpdir = /tmp/
#log-update = /path-to-dedicated-directory/hostname

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = C:\mysql\data/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = C:\mysql\data/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 16M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 5M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

[mysqldump]
quick
max_allowed_packet = 10M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout
```
============== my.ini configuration END ==============

============== MySQL error log START ==============
```
180120 18:16:36  InnoDB: Starting shutdown...
180120 18:16:38  InnoDB: Shutdown completed; log sequence number 2 1119799076
180120 18:16:38 [Warning] Forcing shutdown of 1 plugins
180120 18:16:38 [Note] D:\PHPnow-1.5.3\MySQL-5.1.33\bin\mysqld.exe: Shutdown complete

180120 18:16:39  InnoDB: Started; log sequence number 2 1119799076
180120 18:16:39 [Note] Event Scheduler: Loaded 0 events
180120 18:16:39 [Note] D:\PHPnow-1.5.3\MySQL-5.1.33\bin\mysqld.exe: ready for connections.
Version: '5.1.33-community-log'  socket: ''  port: 3306  MySQL Community Server (GPL)
180120 21:12:28 - mysqld got exception 0xc0000005 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=8388608
read_buffer_size=262144
max_used_connections=43
max_threads=1000
threads_connected=20
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 782496 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x37f7930
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
00725584    mysqld.exe!???
004E6C5A    mysqld.exe!???
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 05CE2E30=select * from massage_common_user where yn=1 and id=29156
thd->thread_id=8430
thd->killed=NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
InnoDB: Log scan progressed past the checkpoint lsn 2 1125547569
180120 21:14:56  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 2 1125550276
180120 21:14:56  InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Last MySQL binlog file position 0 4956330, file name .\mysql-bin.000053
180120 21:14:57  InnoDB: Started; log sequence number 2 1125550276
180120 21:14:57 [Note] Recovering after a crash using mysql-bin
180120 21:14:57 [Note] Starting crash recovery...
180120 21:14:57 [Note] Crash recovery finished.
180120 21:14:57 [Note] Event Scheduler: Loaded 0 events
180120 21:14:57 [Note] D:\PHPnow-1.5.3\MySQL-5.1.33\bin\mysqld.exe: ready for connections.
Version: '5.1.33-community-log'  socket: ''  port: 3306  MySQL Community Server (GPL)
```
============== MySQL error log END ==============

Server environment: Windows Server 2012, 16 GB RAM
Ubuntu 16.04: MySQL fails to start after changing the data directory (datadir)
Ubuntu 16.04: changing the MySQL data directory (datadir) makes startup fail. I have already tried every method found online, but it still will not start. Please advise!!!!!

The steps I took:

## Create the destination directory
> cd /mnt
> mkdir lib
> cd lib && mkdir mysqldata

The data will be stored in /mnt/lib/mysqldata. Change its owner and group to mysql:
> sudo chown -vR mysql:mysql /mnt/lib/mysqldata

Change the permissions:
> sudo chmod -vR 700 /mnt/lib/mysqldata

### Migrate the files
Stop the service:
> sudo /etc/init.d/mysql stop

Migrate the data:
> cp -av /var/lib/mysql/* /mnt/lib/mysqldata

Then vim /etc/mysql/mysql.conf.d/mysqld.cnf and change datadir under the [mysqld] group to: datadir = /mnt/lib/mysqldata

Then sudo vim /etc/apparmor.d/usr.sbin.mysqld, find the two permission declarations /var/lib/mysql/ r, and /var/lib/mysql/** rwk, and comment them out with a leading #. Then, following the same format, add permission declarations for the new path: /mnt/lib/mysqldata/ r, /mnt/lib/mysqldata/** rwk

Restart the services. Once the configuration file has been changed, the apparmor profile must be reloaded before restarting the database, using:
> sudo /etc/init.d/apparmor restart
> sudo /etc/init.d/mysql start

On restart: Starting mysql (via systemctl): mysql.serviceJob for mysql.service failed because the control process exited with error code. See "systemctl status mysql.service" and "journalctl -xe" for details. failed!
```
root@iZm5e472vz1trxejt8m5akZ:/etc/mysql# systemctl status mysql.service
● mysql.service - MySQL Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2018-06-23 00:55:37 CST; 25s ago
  Process: 13418 ExecStartPost=/usr/share/mysql/mysql-systemd-start post (code=exited, status=0/SUCCESS)
  Process: 13408 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
 Main PID: 13417 (mysqld)
   CGroup: /system.slice/mysql.service
           └─13417 /usr/sbin/mysqld
```
Configuration file contents:
```
#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
#
# * Basic Settings
#
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
# datadir = /var/lib/mysql
datadir = /mnt/lib/mysqldata
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 0.0.0.0
#
# * Fine Tuning
#
key_buffer_size = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover-options = BACKUP
#max_connections = 100
#table_cache = 64
#thread_concurrency = 10
#
# * Query Cache Configuration
#
query_cache_limit = 1M
query_cache_size = 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
#log_slow_queries = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
```
Coreseek: Chinese full-text search returns no results, while English words work fine
```
[root@abc testpack]# /usr/local/coreseek/bin/indexer -c etc/sphinx.conf --all
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
indexing index 'test1'...
WARNING: Attribute count is 0: switching to none docinfo
collected 5 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 5 docs, 186 bytes
total 0.064 sec, 2870 bytes/sec, 77.16 docs/sec
total 2 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 6 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
```
Searching for Chinese returns no results:
```
[root@abc testpack]# /usr/local/coreseek/bin/search -c etc/sphinx.conf '水火不容'
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
index 'test1': query '水火不容 ': returned 0 matches of 0 total in 0.000 sec

words:
1. '水火': 0 documents, 0 hits
2. '不容': 0 documents, 0 hits
```
Searching for English does return results:
```
[root@abc testpack]# /usr/local/coreseek/bin/search -c etc/sphinx.conf 'apple'
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
index 'test1': query 'apple ': returned 1 matches of 1 total in 0.001 sec

displaying matches:
1. document=5, weight=2780
   id=5
   title=apple
   content=apple,banana

words:
1. 'apple': 1 documents, 2 hits
```
This is the database table:
```
mysql> select * from tt;
+----+--------------+-----------------+
| id | title        | content         |
+----+--------------+-----------------+
|  1 | 西水         | 水水            |
|  2 | 水火不容     | 水火不容        |
|  3 | 水啊啊       | 啊水货          |
|  4 | 东南西水     | 啊西西哈哈      |
|  5 | apple        | apple,banana    |
+----+--------------+-----------------+
5 rows in set (0.00 sec)
```
And below is the configuration file:
```
# # Sphinx configuration file sample # # WARNING! While this sample file mentions all available options, # it contains (very) short helper descriptions only. Please refer to # doc/sphinx.html for details. # ############################################################################# ## data source definition ############################################################################# source src1 { # data source type. 
mandatory, no default value # known types are mysql, pgsql, mssql, xmlpipe, xmlpipe2, odbc type = mysql ##################################################################### ## SQL settings (for 'mysql' and 'pgsql' types) ##################################################################### # some straightforward parameters for SQL source types sql_host = localhost sql_user = root sql_pass = 123456 sql_db = haha sql_port = 3306 # optional, default is 3306 # UNIX socket name # optional, default is empty (reuse client library defaults) # usually '/var/lib/mysql/mysql.sock' on Linux # usually '/tmp/mysql.sock' on FreeBSD # sql_sock = /var/lib/mysql/mysql.sock # MySQL specific client connection flags # optional, default is 0 # # mysql_connect_flags = 32 # enable compression # MySQL specific SSL certificate settings # optional, defaults are empty # # mysql_ssl_cert = /etc/ssl/client-cert.pem # mysql_ssl_key = /etc/ssl/client-key.pem # mysql_ssl_ca = /etc/ssl/cacert.pem # MS SQL specific Windows authentication mode flag # MUST be in sync with charset_type index-level setting # optional, default is 0 # # mssql_winauth = 1 # use currently logged on user credentials # MS SQL specific Unicode indexing flag # optional, default is 0 (request SBCS data) # # mssql_unicode = 1 # request Unicode data from server # ODBC specific DSN (data source name) # mandatory for odbc source type, no default value # # odbc_dsn = DBQ=C:\data;DefaultDir=C:\data;Driver={Microsoft Text Driver (*.txt; *.csv)}; # sql_query = SELECT id, data FROM documents.csv # ODBC and MS SQL specific, per-column buffer sizes # optional, default is auto-detect # # sql_column_buffers = content=12M, comments=1M # pre-query, executed before the main fetch query # multi-value, optional, default is empty list of queries # sql_query_pre = SET NAMES utf8 sql_query_pre = SET SESSION query_cache_type=OFF # main document fetch query # mandatory, integer document ID field MUST be the first selected column sql_query = \ SELECT id, title, content FROM tt # joined/payload field fetch query # joined fields let you avoid (slow) JOIN and GROUP_CONCAT # payload fields let you attach custom per-keyword values (eg. for ranking) # # syntax is FIELD-NAME 'from' ( 'query' | 'payload-query' ); QUERY # joined field QUERY should return 2 columns (docid, text) # payload field QUERY should return 3 columns (docid, keyword, weight) # # REQUIRES that query results are in ascending document ID order! 
# multi-value, optional, default is empty list of queries # # sql_joined_field = tags from query; SELECT docid, CONCAT('tag',tagid) FROM tags ORDER BY docid ASC # sql_joined_field = wtags from payload-query; SELECT docid, tag, tagweight FROM tags ORDER BY docid ASC # file based field declaration # # content of this field is treated as a file name # and the file gets loaded and indexed in place of a field # # max file size is limited by max_file_field_buffer indexer setting # file IO errors are non-fatal and get reported as warnings # # sql_file_field = content_file_path # sql_query_info = SELECT * FROM tt WHERE id=$id # range query setup, query that must return min and max ID values # optional, default is empty # # sql_query will need to reference $start and $end boundaries # if using ranged query: # # sql_query = \ # SELECT doc.id, doc.id AS group, doc.title, doc.data \ # FROM documents doc \ # WHERE id>=$start AND id<=$end # # sql_query_range = SELECT MIN(id),MAX(id) FROM documents # range query step # optional, default is 1024 # # sql_range_step = 1000 # unsigned integer attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # optional bit size can be specified, default is 32 # # sql_attr_uint = author_id # sql_attr_uint = forum_id:9 # 9 bits for forum_id #sql_attr_uint = group_id # boolean attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # equivalent to sql_attr_uint with 1-bit size # # sql_attr_bool = is_deleted # bigint attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # declares a signed (unlike uint!) 64-bit attribute # # sql_attr_bigint = my_bigint_id # UNIX timestamp attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # similar to integer, but can also be used in date functions # # sql_attr_timestamp = posted_ts # sql_attr_timestamp = last_edited_ts #sql_attr_timestamp = date_added # string ordinal attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # sorts strings (bytewise), and stores their indexes in the sorted list # sorting by this attr is equivalent to sorting by the original strings # # sql_attr_str2ordinal = author_name # floating point attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # values are stored in single precision, 32-bit IEEE 754 format # # sql_attr_float = lat_radians # sql_attr_float = long_radians # multi-valued attribute (MVA) attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # MVA values are variable length lists of unsigned 32-bit integers # # syntax is ATTR-TYPE ATTR-NAME 'from' SOURCE-TYPE [;QUERY] [;RANGE-QUERY] # ATTR-TYPE is 'uint' or 'timestamp' # SOURCE-TYPE is 'field', 'query', or 'ranged-query' # QUERY is SQL query used to fetch all ( docid, attrvalue ) pairs # RANGE-QUERY is SQL query used to fetch min and max ID values, similar to 'sql_query_range' # # sql_attr_multi = uint tag from query; SELECT docid, tagid FROM tags # sql_attr_multi = uint tag from ranged-query; \ # SELECT docid, tagid FROM tags WHERE id>=$start AND id<=$end; \ # SELECT MIN(docid), MAX(docid) FROM tags # string attribute declaration # multi-value (an arbitrary number of these is allowed), optional # lets you store and retrieve strings # # sql_attr_string = stitle # wordcount attribute declaration # multi-value (an arbitrary number of these is allowed), optional # lets you count the 
words at indexing time # # sql_attr_str2wordcount = stitle # combined field plus attribute declaration (from a single column) # stores column as an attribute, but also indexes it as a full-text field # # sql_field_string = author # sql_field_str2wordcount = title # post-query, executed on sql_query completion # optional, default is empty # # sql_query_post = # post-index-query, executed on successful indexing completion # optional, default is empty # $maxid expands to max document ID actually fetched from DB # # sql_query_post_index = REPLACE INTO counters ( id, val ) \ # VALUES ( 'max_indexed_id', $maxid ) # ranged query throttling, in milliseconds # optional, default is 0 which means no delay # enforces given delay before each query step sql_ranged_throttle = 0 # document info query, ONLY for CLI search (ie. testing and debugging) # optional, default is empty # must contain $id macro and must fetch the document by that id sql_query_info = SELECT * FROM tt WHERE id=$id # kill-list query, fetches the document IDs for kill-list # k-list will suppress matches from preceding indexes in the same query # optional, default is empty # # sql_query_killlist = SELECT id FROM documents WHERE edited>=@last_reindex # columns to unpack on indexer side when indexing # multi-value, optional, default is empty list # # unpack_zlib = zlib_column # unpack_mysqlcompress = compressed_column # unpack_mysqlcompress = compressed_column_2 # maximum unpacked length allowed in MySQL COMPRESS() unpacker # optional, default is 16M # # unpack_mysqlcompress_maxsize = 16M ##################################################################### ## xmlpipe2 settings ##################################################################### # type = xmlpipe # shell command to invoke xmlpipe stream producer # mandatory # # xmlpipe_command = cat /usr/local/coreseek/var/test.xml # xmlpipe2 field declaration # multi-value, optional, default is empty # # xmlpipe_field = subject # xmlpipe_field = content # xmlpipe2 attribute declaration # multi-value, optional, default is empty # all xmlpipe_attr_XXX options are fully similar to sql_attr_XXX # # xmlpipe_attr_timestamp = published # xmlpipe_attr_uint = author_id # perform UTF-8 validation, and filter out incorrect codes # avoids XML parser choking on non-UTF-8 documents # optional, default is 0 # # xmlpipe_fixup_utf8 = 1 } # inherited source example # # all the parameters are copied from the parent source, # and may then be overridden in this source definition source src1throttled : src1 { sql_ranged_throttle = 100 } ############################################################################# ## index definition ############################################################################# # local index example # # this is an index which is stored locally in the filesystem # # all indexing-time options (such as morphology and charsets) # are configured per local index index test1 { # index type # optional, default is 'plain' # known values are 'plain', 'distributed', and 'rt' (see samples below) # type = plain # document source(s) to index # multi-value, mandatory # document IDs must be globally unique across all sources source = src1 # index files path and file name, without extension # mandatory, path must be writable, extensions will be auto-appended #path = /usr/local/coreseek/var/data/test1 # document attribute values (docinfo) storage mode # optional, default is 'extern' # known values are 'none', 'extern' and 'inline' docinfo = extern # memory locking for cached data (.spa and 
.spi), to prevent swapping # optional, default is 0 (do not mlock) # requires searchd to be run from root mlock = 0 # a list of morphology preprocessors to apply # optional, default is empty # # builtin preprocessors are 'none', 'stem_en', 'stem_ru', 'stem_enru', # 'soundex', and 'metaphone'; additional preprocessors available from # libstemmer are 'libstemmer_XXX', where XXX is algorithm code # (see libstemmer_c/libstemmer/modules.txt) # # morphology = stem_en, stem_ru, soundex # morphology = libstemmer_german # morphology = libstemmer_sv morphology = none # minimum word length at which to enable stemming # optional, default is 1 (stem everything) # # min_stemming_len = 1 path = /root/rearch_dir # stopword files list (space separated) # optional, default is empty # contents are plain text, charset_table and stemming are both applied # # stopwords = /usr/local/coreseek/var/data/stopwords.txt # wordforms file, in "mapfrom > mapto" plain text format # optional, default is empty # # wordforms = /usr/local/coreseek/var/data/wordforms.txt # tokenizing exceptions file # optional, default is empty # # plain text, case sensitive, space insensitive in map-from part # one "Map Several Words => ToASingleOne" entry per line # # exceptions = /usr/local/coreseek/var/data/exceptions.txt # minimum indexed word length # default is 1 (index everything) min_word_len = 1 # charset encoding type # optional, default is 'sbcs' # known types are 'sbcs' (Single Byte CharSet) and 'utf-8' charset_type = zh_cn.utf-8 charset_dictpath = /usr/local/mmseg3/etc/ # charset definition and case folding rules "table" # optional, default value depends on charset_type # # defaults are configured to include English and Russian characters only # you need to change the table to include additional ones # this behavior MAY change in future versions # # 'sbcs' default value is # charset_table = 0..9, A..Z->a..z, _, a..z, U+A8->U+B8, U+B8, U+C0..U+DF->U+E0..U+FF, U+E0..U+FF # # 'utf-8' default value is #charset_table = 0..9, A..Z->a..z, _, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F # ignored characters list # optional, default value is empty # # ignore_chars = U+00AD # minimum word prefix length to index # optional, default is 0 (do not index prefixes) # # min_prefix_len = 0 # minimum word infix length to index # optional, default is 0 (do not index infixes) # # min_infix_len = 0 # list of fields to limit prefix/infix indexing to # optional, default value is empty (index all fields in prefix/infix mode) # # prefix_fields = filename # infix_fields = url, domain # enable star-syntax (wildcards) when searching prefix/infix indexes # search-time only, does not affect indexing, can be 0 or 1 # optional, default is 0 (do not use wildcard syntax) # # enable_star = 1 # expand keywords with exact forms and/or stars when searching fit indexes # search-time only, does not affect indexing, can be 0 or 1 # optional, default is 0 (do not expand keywords) # # expand_keywords = 1 # n-gram length to index, for CJK indexing # only supports 0 and 1 for now, other lengths to be implemented # optional, default is 0 (disable n-grams) # ngram_len = 0 # n-gram characters list, for CJK indexing # optional, default is empty # # ngram_chars = U+3000..U+2FA1F # phrase boundary characters list # optional, default is empty # # phrase_boundary = ., ?, !, U+2026 # horizontal ellipsis # phrase boundary word position increment # optional, default is 0 # # phrase_boundary_step = 100 # blended characters list # blended chars are indexed both as separators and valid 
characters # for instance, AT&T will results in 3 tokens ("at", "t", and "at&t") # optional, default is empty # # blend_chars = +, &, U+23 # blended token indexing mode # a comma separated list of blended token indexing variants # known variants are trim_none, trim_head, trim_tail, trim_both, skip_pure # optional, default is trim_none # # blend_mode = trim_tail, skip_pure # whether to strip HTML tags from incoming documents # known values are 0 (do not strip) and 1 (do strip) # optional, default is 0 html_strip = 0 # what HTML attributes to index if stripping HTML # optional, default is empty (do not index anything) # # html_index_attrs = img=alt,title; a=title; # what HTML elements contents to strip # optional, default is empty (do not strip element contents) # # html_remove_elements = style, script # whether to preopen index data files on startup # optional, default is 0 (do not preopen), searchd-only # # preopen = 1 # whether to keep dictionary (.spi) on disk, or cache it in RAM # optional, default is 0 (cache in RAM), searchd-only # # ondisk_dict = 1 # whether to enable in-place inversion (2x less disk, 90-95% speed) # optional, default is 0 (use separate temporary files), indexer-only # # inplace_enable = 1 # in-place fine-tuning options # optional, defaults are listed below # # inplace_hit_gap = 0 # preallocated hitlist gap size # inplace_docinfo_gap = 0 # preallocated docinfo gap size # inplace_reloc_factor = 0.1 # relocation buffer size within arena # inplace_write_factor = 0.1 # write buffer size within arena # whether to index original keywords along with stemmed versions # enables "=exactform" operator to work # optional, default is 0 # # index_exact_words = 1 # position increment on overshort (less that min_word_len) words # optional, allowed values are 0 and 1, default is 1 # # overshort_step = 1 # position increment on stopword # optional, allowed values are 0 and 1, default is 1 # # stopword_step = 1 # hitless words list # positions for these keywords will not be stored in the index # optional, allowed values are 'all', or a list file name # # hitless_words = all # hitless_words = hitless.txt # detect and index sentence and paragraph boundaries # required for the SENTENCE and PARAGRAPH operators to work # optional, allowed values are 0 and 1, default is 0 # # index_sp = 1 # index zones, delimited by HTML/XML tags # a comma separated list of tags and wildcards # required for the ZONE operator to work # optional, default is empty string (do not index zones) # # index_zones = title, h*, th } # inherited index example # # all the parameters are copied from the parent index, # and may then be overridden in this index definition #index test1stemmed : test1 #{ # path = /usr/local/coreseek/var/data/test1stemmed # morphology = stem_en #} # distributed index example # # this is a virtual index which can NOT be directly indexed, # and only contains references to other local and/or remote indexes #index dist1 #{ # 'distributed' index type MUST be specified # type = distributed # local index to be searched # there can be many local indexes configured # local = test1 # local = test1stemmed # remote agent # multiple remote agents may be specified # syntax for TCP connections is 'hostname:port:index1,[index2[,...]]' # syntax for local UNIX connections is '/path/to/socket:index1,[index2[,...]]' # agent = localhost:9313:remote1 # agent = localhost:9314:remote2,remote3 # agent = /var/run/searchd.sock:remote4 # blackhole remote agent, for debugging/testing # network errors and search results 
# will be ignored
#
# agent_blackhole = testbox:9312:testindex1,testindex2

# remote agent connection timeout, milliseconds
# optional, default is 1000 ms, ie. 1 sec
# agent_connect_timeout = 1000

# remote agent query timeout, milliseconds
# optional, default is 3000 ms, ie. 3 sec
# agent_query_timeout = 3000
#}


# realtime index example
#
# you can run INSERT, REPLACE, and DELETE on this index on the fly
# using MySQL protocol (see 'listen' directive below)
#index rt
#{
    # 'rt' index type must be specified to use RT index
    #type = rt

    # index files path and file name, without extension
    # mandatory, path must be writable, extensions will be auto-appended
    # path = /usr/local/coreseek/var/data/rt

    # RAM chunk size limit
    # RT index will keep at most this much data in RAM, then flush to disk
    # optional, default is 32M
    #
    # rt_mem_limit = 512M

    # full-text field declaration
    # multi-value, mandatory
    # rt_field = title
    # rt_field = content

    # unsigned integer attribute declaration
    # multi-value (an arbitrary number of attributes is allowed), optional
    # declares an unsigned 32-bit attribute
    # rt_attr_uint = gid

    # RT indexes currently support the following attribute types:
    # uint, bigint, float, timestamp, string
    #
    # rt_attr_bigint = guid
    # rt_attr_float = gpa
    # rt_attr_timestamp = ts_added
    # rt_attr_string = content
#}


#############################################################################
## indexer settings
#############################################################################

indexer
{
    # memory limit, in bytes, kilobytes (16384K) or megabytes (256M)
    # optional, default is 32M, max is 2047M, recommended is 256M to 1024M
    mem_limit = 256M

    # maximum IO calls per second (for I/O throttling)
    # optional, default is 0 (unlimited)
    #
    # max_iops = 40

    # maximum IO call size, bytes (for I/O throttling)
    # optional, default is 0 (unlimited)
    #
    # max_iosize = 1048576

    # maximum xmlpipe2 field length, bytes
    # optional, default is 2M
    #
    # max_xmlpipe2_field = 4M

    # write buffer size, bytes
    # several (currently up to 4) buffers will be allocated
    # write buffers are allocated in addition to mem_limit
    # optional, default is 1M
    #
    # write_buffer = 1M

    # maximum file field adaptive buffer size
    # optional, default is 8M, minimum is 1M
    #
    # max_file_field_buffer = 32M
}


#############################################################################
## searchd settings
#############################################################################

searchd
{
    # [hostname:]port[:protocol], or /unix/socket/path to listen on
    # known protocols are 'sphinx' (SphinxAPI) and 'mysql41' (SphinxQL)
    #
    # multi-value, multiple listen points are allowed
    # optional, defaults are 9312:sphinx and 9306:mysql41, as below
    #
    # listen = 127.0.0.1
    # listen = 192.168.0.1:9312
    # listen = 9312
    # listen = /var/run/searchd.sock
    listen = 9312
    #listen = 9306:mysql41

    # log file, searchd run info is logged here
    # optional, default is 'searchd.log'
    log = /usr/local/coreseek/var/log/searchd.log

    # query log file, all search queries are logged here
    # optional, default is empty (do not log queries)
    query_log = /usr/local/coreseek/var/log/query.log

    # client read timeout, seconds
    # optional, default is 5
    read_timeout = 5

    # request timeout, seconds
    # optional, default is 5 minutes
    client_timeout = 300

    # maximum amount of children to fork (concurrent searches to run)
    # optional, default is 0 (unlimited)
    max_children = 30

    # PID file, searchd process ID file name
    # mandatory
    pid_file = /usr/local/coreseek/var/log/searchd.pid

    # max amount of matches the daemon ever keeps in RAM, per-index
    # WARNING, THERE'S ALSO PER-QUERY LIMIT, SEE SetLimits() API CALL
    # default is 1000 (just like Google)
    max_matches = 1000

    # seamless rotate, prevents rotate stalls if precaching huge datasets
    # optional, default is 1
    seamless_rotate = 1

    # whether to forcibly preopen all indexes on startup
    # optional, default is 1 (preopen everything)
    preopen_indexes = 0

    # whether to unlink .old index copies on successful rotation.
    # optional, default is 1 (do unlink)
    unlink_old = 1

    # attribute updates periodic flush timeout, seconds
    # updates will be automatically dumped to disk this frequently
    # optional, default is 0 (disable periodic flush)
    #
    # attr_flush_period = 900

    # instance-wide ondisk_dict defaults (per-index value takes precedence)
    # optional, default is 0 (precache all dictionaries in RAM)
    #
    # ondisk_dict_default = 1

    # MVA updates pool size
    # shared between all instances of searchd, disables attr flushes!
    # optional, default size is 1M
    mva_updates_pool = 1M

    # max allowed network packet size
    # limits both query packets from clients, and responses from agents
    # optional, default size is 8M
    max_packet_size = 8M

    # crash log path
    # searchd will (try to) log crashed query to 'crash_log_path.PID' file
    # optional, default is empty (do not create crash logs)
    #
    # crash_log_path = /usr/local/coreseek/var/log/crash

    # max allowed per-query filter count
    # optional, default is 256
    max_filters = 256

    # max allowed per-filter values count
    # optional, default is 4096
    max_filter_values = 4096

    # socket listen queue length
    # optional, default is 5
    #
    # listen_backlog = 5

    # per-keyword read buffer size
    # optional, default is 256K
    #
    # read_buffer = 256K

    # unhinted read size (currently used when reading hits)
    # optional, default is 32K
    #
    # read_unhinted = 32K

    # max allowed per-batch query count (aka multi-query count)
    # optional, default is 32
    max_batch_queries = 32

    # max common subtree document cache size, per-query
    # optional, default is 0 (disable subtree optimization)
    #
    # subtree_docs_cache = 4M

    # max common subtree hit cache size, per-query
    # optional, default is 0 (disable subtree optimization)
    #
    # subtree_hits_cache = 8M

    # multi-processing mode (MPM)
    # known values are none, fork, prefork, and threads
    # optional, default is fork
    # workers = threads # for RT to work

    # max threads to create for searching local parts of a distributed index
    # optional, default is 0, which means disable multi-threaded searching
    # should work with all MPMs (ie. does NOT require workers=threads)
    #
    # dist_threads = 4

    # binlog files path; use empty string to disable binlog
    # optional, default is build-time configured data directory
    #
    # binlog_path = # disable logging
    # binlog_path = /usr/local/coreseek/var/data # binlog.001 etc will be created there

    # binlog flush/sync mode
    # 0 means flush and sync every second
    # 1 means flush and sync every transaction
    # 2 means flush every transaction, sync every second
    # optional, default is 2
    #
    # binlog_flush = 2

    # binlog per-file size limit
    # optional, default is 128M, 0 means no limit
    #
    # binlog_max_log_size = 256M

    # per-thread stack size, only affects workers=threads mode
    # optional, default is 64K
    #
    # thread_stack = 128K

    # per-keyword expansion limit (for dict=keywords prefix searches)
    # optional, default is 0 (no limit)
    #
    # expansion_limit = 1000

    # RT RAM chunks flush period
    # optional, default is 0 (no periodic flush)
    #
    # rt_flush_period = 900

    # query log file format
    # optional, known values are plain and sphinxql, default is plain
    #
    # query_log_format = sphinxql

    # version string returned to MySQL network protocol clients
    # optional, default is empty (use Sphinx version)
    #
    # mysql_version_string = 5.0.37

    # trusted plugin directory
    # optional, default is empty (disable UDFs)
    #
    # plugin_dir = /usr/local/sphinx/lib

    # default server-wide collation
    # optional, default is libc_ci
    #
    # collation_server = utf8_general_ci

    # server-wide locale for libc based collations
    # optional, default is C
    #
    # collation_libc_locale = ru_RU.UTF-8

    # threaded server watchdog (only used in workers=threads mode)
    # optional, values are 0 and 1, default is 1 (watchdog on)
    #
    # watchdog = 1

    # SphinxQL compatibility mode (legacy columns and their names)
    # optional, default is 0 (SQL compliant syntax and result sets)
    #
    # compat_sphinxql_magics = 1
}

# --eof--

Could anyone take a look? I can't tell where the mistake is; searching for Chinese text returns no results.
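One observation on the question above: the pasted excerpt only covers the indexer and searchd sections, while the settings that actually enable Chinese word segmentation in Coreseek live in each index block, which is not shown here. Empty results for Chinese queries are most commonly caused by a missing or wrong charset_type / charset_dictpath, or by source data that is not really UTF-8. Below is a minimal sketch (not taken from the question) of the relevant index directives, assuming a stock Coreseek install with mmseg3 under /usr/local/mmseg3 and a hypothetical index named test1; source, path, and the dictionary path are placeholders to adjust.

index test1
{
    # placeholders; match these to the real data source and data directory
    source           = src1
    path             = /usr/local/coreseek/var/data/test1

    # Coreseek-specific charset value; plain Sphinx values such as 'utf-8'
    # do not segment Chinese text, so Chinese queries match nothing
    charset_type     = zh_cn.utf-8

    # directory holding the mmseg dictionary (uni.lib); this path is an
    # assumption, point it at wherever mmseg3 is actually installed
    charset_dictpath = /usr/local/mmseg3/etc/

    # 0 = use dictionary-based segmentation instead of N-gram mode
    ngram_len        = 0
}

After adjusting the index block, rebuild and rotate the indexes, for example with indexer -c /usr/local/coreseek/etc/sphinx.conf --all --rotate, and re-test. If results are still empty, verify that the data being indexed is genuinely UTF-8 rather than GBK, since mis-encoded source rows also produce no matches for Chinese keywords.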