How can I install MySQL on FreeBSD?

How can I install MySQL on FreeBSD? I've been trying for a long time and the installation keeps failing. Could some expert please help and show me how to install MySQL on FreeBSD?

1 answer

What system is it, and what error are you getting? For Mac you can refer to https://blog.csdn.net/joyous/article/details/47282129

qq_45003037
难寻何遇 It's FreeBSD in a virtual machine, a Unix-based system.
Replied 7 months ago
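For anyone else hitting this: on FreeBSD the two standard routes are binary packages (pkg) or the ports tree. A minimal sketch, assuming a networked system with pkg bootstrapped; mysql80-server is the package name on recent releases (older branches ship e.g. mysql57-server):

```
# Install the server from packages (pulls in the client as a dependency)
pkg install mysql80-server

# Enable the service in /etc/rc.conf and start it
sysrc mysql_enable="YES"
service mysql-server start

# Optional first-time setup (root password, remove test accounts)
mysql_secure_installation
```

Building from ports works too: `cd /usr/ports/databases/mysql80-server && make install clean`. If the install "fails", the exact error message matters, so post it.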
Related questions
Installing PostgreSQL 9.0 on FreeBSD without a network connection
I installed FreeBSD in a virtual machine. Installing PostgreSQL 9.0 requires gmake, and installing gmake normally needs an Internet connection. I downloaded a gmake tarball elsewhere, but I can't get it installed; the system keeps saying there is no gmake command. What is the right way to install it?
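The "no gmake command" message just means nothing actually got installed. Without network access, one hedged route is the ports tree: drop the tarball you downloaded elsewhere into the distfiles cache and let the port build it (this assumes the ports tree is present and the tarball version matches the port's distinfo):

```
# Put the pre-downloaded tarball where the ports framework looks for it
# (path and version here are illustrative; check the port's distinfo)
cp /path/to/make-3.82.tar.bz2 /usr/ports/distfiles/

# Build and install; the fetch step is skipped because the distfile exists
cd /usr/ports/devel/gmake
make install clean
```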
Problems installing FreeBSD 10.0 under VMware??? Please advise
I can't create partitions such as / and swap; the installer won't accept sizes or let me choose a mount point. ![CSDN mobile Q&A][1] [1]: http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/bsdinstall/bsdinstall-part-manual-addpart.png
How do I add the memberOf attribute in OpenLDAP (the system is FreeBSD)?
How do I add the memberOf attribute to OpenLDAP? The system I'm running is FreeBSD.
How do I view thread IDs (tid) on FreeBSD?
![screenshot](https://img-ask.csdn.net/upload/201805/07/1525665696_845583.png) This is on CentOS; I'd like to know how to see the same thing on FreeBSD.
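On FreeBSD the per-thread view comes from base-system tools rather than /proc. Two equivalents (the PID below is a placeholder):

```
# List a process's threads together with their TIDs
procstat -t 1234

# Or toggle the per-thread display interactively in top
top -H
```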
FreeBSD 10.0 system keeps crashing
```
/var/crash/info.0
Dump header from device /dev/mfid0p5
  Architecture: amd64
  Architecture Version: 2
  Dump Length: 3524464640B (3361 MB)
  Blocksize: 512
  Dumptime: Wed Apr 11 12:51:33 2018
  Hostname: mail.test.com
  Magic: FreeBSD Kernel Dump
  Version String: FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014
    root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC
  Panic String: page fault
  Dump Parity: 401462448
  Bounds: 0
  Dump Status: good
```
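A "page fault" panic string alone says little; the backtrace is what matters. If a vmcore was saved next to that info file, a hedged first step (assuming the matching GENERIC kernel and its symbols are still under /boot/kernel) is:

```
# Summarize the crash; writes /var/crash/core.txt.N including a backtrace
crashinfo

# Or inspect it by hand with kgdb and pull the panic backtrace
kgdb /boot/kernel/kernel /var/crash/vmcore.0
# then at the (kgdb) prompt: bt
```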
php command gives no output on the FreeBSD command line
I recently compiled and updated PHP to 5.5.30 by hand. Now `php -v`, `php -i` and similar commands produce no output at all, while `php -h`, `php-fpm -v` and `php-config` work normally, so I can't install Composer. `php-fpm -m` shows the phar and openssl extensions are installed. Is this a problem with my build, or is the PHP CLI SAPI broken?
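Hard to say without output, but since the binary exits silently you can watch it at the syscall level; truss ships in the FreeBSD base system. A sketch:

```
# First check whether it at least exits cleanly
php -v; echo "exit status: $?"

# Trace system calls; the tail usually reveals a missing file, a bad
# extension_dir, or the point where the process blocks
truss php -v 2>&1 | tail -n 40
```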
Errors when building the Swift source on FreeBSD
![screenshot](https://img-ask.csdn.net/upload/201602/18/1455779369_12055.png) Every library that should be linked is linked. Gurus, please come to the rescue, thanks!!!
Ruby array-to-string output problem on FreeBSD?
As follows:
```
s = "fajsdfkojepiorjeafksdjaf;kasjff"
a = s.scan(/f(.{14})/)   # => [["ajsdfkojepiorj"]]
a.to_s                   # => "[[\"ajsdfkojepiorj\"]]"
```
On Windows this displays "ajsdfkojepiorj" as expected. What is going on? A side question: which system is better for a Ruby on Rails server, Ubuntu or FreeBSD? Addendum: ruby 1.9.1p0 (2009-01-30 revision 21907) [x86_64-freebsd7.2]
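For what it's worth, this looks like a Ruby version difference rather than a FreeBSD issue: in 1.9 Array#to_s became an alias for inspect, so the escaped nested-array form is the documented behavior (the Windows box was presumably on 1.8, where to_s joined the elements). Extracting the capture explicitly gives the expected string; a one-liner to illustrate:

```
# scan returns an array of capture-group arrays; take the capture explicitly
ruby -e 's = "fajsdfkojepiorjeafksdjaf;kasjff"; puts s.scan(/f(.{14})/).first.first'
# prints: ajsdfkojepiorj
```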
Can libuinet be used on Red Hat?
From the material I've read, the libuinet project appears to port the FreeBSD 9.x TCP/IP stack to user space. Can libuinet be used on Linux? And is there any open-source user-space TCP/IP stack that runs on Linux directly? Advice appreciated.
Java program under Tomcat reads garbled file names
My environment is FreeNAS 9.2 (FreeBSD-based), Tomcat 6 and Java 7. A directory on the system holds many files with Chinese names. A web project deployed under Tomcat reads these files and writes their names to an output file, but the names come out garbled as in the image below. Yet if I put a few files locally and run a plain Java program (java test), the output file is fine. Does anyone know what causes this? ![CSDN mobile Q&A][1] [1]: http://img.my.csdn.net/uploads/201405/23/1400836491_7629.jpg
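One hedged suspect: filename decoding goes through the JVM's default charset, and a Tomcat started from an init script inherits a minimal environment (often the C/POSIX locale), unlike the login shell where `java test` works. Forcing a UTF-8 locale on the Tomcat JVM is worth a try (setenv.sh is the standard Tomcat hook; the locale name assumes UTF-8 filenames on disk):

```
# $CATALINA_HOME/bin/setenv.sh (create it if it does not exist)
export LANG=zh_CN.UTF-8
export LC_ALL=zh_CN.UTF-8
# file.encoding sets the JVM default charset; sun.jnu.encoding affects
# filename encoding/decoding on Sun/Oracle JVMs
JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=UTF-8 -Dsun.jnu.encoding=UTF-8"
```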
Coreseek finds nothing for Chinese queries, while English words search fine
```
[root@abc testpack]# /usr/local/coreseek/bin/indexer -c etc/sphinx.conf --all
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
indexing index 'test1'...
WARNING: Attribute count is 0: switching to none docinfo
collected 5 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 5 docs, 186 bytes
total 0.064 sec, 2870 bytes/sec, 77.16 docs/sec
total 2 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 6 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
```

Searching Chinese returns no results:

```
[root@abc testpack]# /usr/local/coreseek/bin/search -c etc/sphinx.conf '水火不容'
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
index 'test1': query '水火不容 ': returned 0 matches of 0 total in 0.000 sec

words:
1. '水火': 0 documents, 0 hits
2. '不容': 0 documents, 0 hits
```

Searching English does return results:

```
[root@abc testpack]# /usr/local/coreseek/bin/search -c etc/sphinx.conf 'apple'
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
index 'test1': query 'apple ': returned 1 matches of 1 total in 0.001 sec

displaying matches:
1. document=5, weight=2780
   id=5
   title=apple
   content=apple,banana

words:
1. 'apple': 1 documents, 2 hits
```

This is the database:

```
mysql> select * from tt;
+----+--------------+-----------------+
| id | title        | content         |
+----+--------------+-----------------+
|  1 | 西水         | 水水            |
|  2 | 水火不容     | 水火不容        |
|  3 | 水啊啊       | 啊水货          |
|  4 | 东南西水     | 啊西西哈哈      |
|  5 | apple        | apple,banana    |
+----+--------------+-----------------+
5 rows in set (0.00 sec)
```

And below is the config file:

```
# # Sphinx configuration file sample # # WARNING! While this sample file mentions all available options, # it contains (very) short helper descriptions only. Please refer to # doc/sphinx.html for details. # ############################################################################# ## data source definition ############################################################################# source src1 { # data source type.
mandatory, no default value # known types are mysql, pgsql, mssql, xmlpipe, xmlpipe2, odbc type = mysql ##################################################################### ## SQL settings (for 'mysql' and 'pgsql' types) ##################################################################### # some straightforward parameters for SQL source types sql_host = localhost sql_user = root sql_pass = 123456 sql_db = haha sql_port = 3306 # optional, default is 3306 # UNIX socket name # optional, default is empty (reuse client library defaults) # usually '/var/lib/mysql/mysql.sock' on Linux # usually '/tmp/mysql.sock' on FreeBSD # sql_sock = /var/lib/mysql/mysql.sock # MySQL specific client connection flags # optional, default is 0 # # mysql_connect_flags = 32 # enable compression # MySQL specific SSL certificate settings # optional, defaults are empty # # mysql_ssl_cert = /etc/ssl/client-cert.pem # mysql_ssl_key = /etc/ssl/client-key.pem # mysql_ssl_ca = /etc/ssl/cacert.pem # MS SQL specific Windows authentication mode flag # MUST be in sync with charset_type index-level setting # optional, default is 0 # # mssql_winauth = 1 # use currently logged on user credentials # MS SQL specific Unicode indexing flag # optional, default is 0 (request SBCS data) # # mssql_unicode = 1 # request Unicode data from server # ODBC specific DSN (data source name) # mandatory for odbc source type, no default value # # odbc_dsn = DBQ=C:\data;DefaultDir=C:\data;Driver={Microsoft Text Driver (*.txt; *.csv)}; # sql_query = SELECT id, data FROM documents.csv # ODBC and MS SQL specific, per-column buffer sizes # optional, default is auto-detect # # sql_column_buffers = content=12M, comments=1M # pre-query, executed before the main fetch query # multi-value, optional, default is empty list of queries # sql_query_pre = SET NAMES utf8 sql_query_pre = SET SESSION query_cache_type=OFF # main document fetch query # mandatory, integer document ID field MUST be the first selected column sql_query = \ SELECT id, title, content FROM tt # joined/payload field fetch query # joined fields let you avoid (slow) JOIN and GROUP_CONCAT # payload fields let you attach custom per-keyword values (eg. for ranking) # # syntax is FIELD-NAME 'from' ( 'query' | 'payload-query' ); QUERY # joined field QUERY should return 2 columns (docid, text) # payload field QUERY should return 3 columns (docid, keyword, weight) # # REQUIRES that query results are in ascending document ID order! 
# multi-value, optional, default is empty list of queries # # sql_joined_field = tags from query; SELECT docid, CONCAT('tag',tagid) FROM tags ORDER BY docid ASC # sql_joined_field = wtags from payload-query; SELECT docid, tag, tagweight FROM tags ORDER BY docid ASC # file based field declaration # # content of this field is treated as a file name # and the file gets loaded and indexed in place of a field # # max file size is limited by max_file_field_buffer indexer setting # file IO errors are non-fatal and get reported as warnings # # sql_file_field = content_file_path # sql_query_info = SELECT * FROM tt WHERE id=$id # range query setup, query that must return min and max ID values # optional, default is empty # # sql_query will need to reference $start and $end boundaries # if using ranged query: # # sql_query = \ # SELECT doc.id, doc.id AS group, doc.title, doc.data \ # FROM documents doc \ # WHERE id>=$start AND id<=$end # # sql_query_range = SELECT MIN(id),MAX(id) FROM documents # range query step # optional, default is 1024 # # sql_range_step = 1000 # unsigned integer attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # optional bit size can be specified, default is 32 # # sql_attr_uint = author_id # sql_attr_uint = forum_id:9 # 9 bits for forum_id #sql_attr_uint = group_id # boolean attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # equivalent to sql_attr_uint with 1-bit size # # sql_attr_bool = is_deleted # bigint attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # declares a signed (unlike uint!) 64-bit attribute # # sql_attr_bigint = my_bigint_id # UNIX timestamp attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # similar to integer, but can also be used in date functions # # sql_attr_timestamp = posted_ts # sql_attr_timestamp = last_edited_ts #sql_attr_timestamp = date_added # string ordinal attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # sorts strings (bytewise), and stores their indexes in the sorted list # sorting by this attr is equivalent to sorting by the original strings # # sql_attr_str2ordinal = author_name # floating point attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # values are stored in single precision, 32-bit IEEE 754 format # # sql_attr_float = lat_radians # sql_attr_float = long_radians # multi-valued attribute (MVA) attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # MVA values are variable length lists of unsigned 32-bit integers # # syntax is ATTR-TYPE ATTR-NAME 'from' SOURCE-TYPE [;QUERY] [;RANGE-QUERY] # ATTR-TYPE is 'uint' or 'timestamp' # SOURCE-TYPE is 'field', 'query', or 'ranged-query' # QUERY is SQL query used to fetch all ( docid, attrvalue ) pairs # RANGE-QUERY is SQL query used to fetch min and max ID values, similar to 'sql_query_range' # # sql_attr_multi = uint tag from query; SELECT docid, tagid FROM tags # sql_attr_multi = uint tag from ranged-query; \ # SELECT docid, tagid FROM tags WHERE id>=$start AND id<=$end; \ # SELECT MIN(docid), MAX(docid) FROM tags # string attribute declaration # multi-value (an arbitrary number of these is allowed), optional # lets you store and retrieve strings # # sql_attr_string = stitle # wordcount attribute declaration # multi-value (an arbitrary number of these is allowed), optional # lets you count the 
words at indexing time # # sql_attr_str2wordcount = stitle # combined field plus attribute declaration (from a single column) # stores column as an attribute, but also indexes it as a full-text field # # sql_field_string = author # sql_field_str2wordcount = title # post-query, executed on sql_query completion # optional, default is empty # # sql_query_post = # post-index-query, executed on successful indexing completion # optional, default is empty # $maxid expands to max document ID actually fetched from DB # # sql_query_post_index = REPLACE INTO counters ( id, val ) \ # VALUES ( 'max_indexed_id', $maxid ) # ranged query throttling, in milliseconds # optional, default is 0 which means no delay # enforces given delay before each query step sql_ranged_throttle = 0 # document info query, ONLY for CLI search (ie. testing and debugging) # optional, default is empty # must contain $id macro and must fetch the document by that id sql_query_info = SELECT * FROM tt WHERE id=$id # kill-list query, fetches the document IDs for kill-list # k-list will suppress matches from preceding indexes in the same query # optional, default is empty # # sql_query_killlist = SELECT id FROM documents WHERE edited>=@last_reindex # columns to unpack on indexer side when indexing # multi-value, optional, default is empty list # # unpack_zlib = zlib_column # unpack_mysqlcompress = compressed_column # unpack_mysqlcompress = compressed_column_2 # maximum unpacked length allowed in MySQL COMPRESS() unpacker # optional, default is 16M # # unpack_mysqlcompress_maxsize = 16M ##################################################################### ## xmlpipe2 settings ##################################################################### # type = xmlpipe # shell command to invoke xmlpipe stream producer # mandatory # # xmlpipe_command = cat /usr/local/coreseek/var/test.xml # xmlpipe2 field declaration # multi-value, optional, default is empty # # xmlpipe_field = subject # xmlpipe_field = content # xmlpipe2 attribute declaration # multi-value, optional, default is empty # all xmlpipe_attr_XXX options are fully similar to sql_attr_XXX # # xmlpipe_attr_timestamp = published # xmlpipe_attr_uint = author_id # perform UTF-8 validation, and filter out incorrect codes # avoids XML parser choking on non-UTF-8 documents # optional, default is 0 # # xmlpipe_fixup_utf8 = 1 } # inherited source example # # all the parameters are copied from the parent source, # and may then be overridden in this source definition source src1throttled : src1 { sql_ranged_throttle = 100 } ############################################################################# ## index definition ############################################################################# # local index example # # this is an index which is stored locally in the filesystem # # all indexing-time options (such as morphology and charsets) # are configured per local index index test1 { # index type # optional, default is 'plain' # known values are 'plain', 'distributed', and 'rt' (see samples below) # type = plain # document source(s) to index # multi-value, mandatory # document IDs must be globally unique across all sources source = src1 # index files path and file name, without extension # mandatory, path must be writable, extensions will be auto-appended #path = /usr/local/coreseek/var/data/test1 # document attribute values (docinfo) storage mode # optional, default is 'extern' # known values are 'none', 'extern' and 'inline' docinfo = extern # memory locking for cached data (.spa and 
.spi), to prevent swapping # optional, default is 0 (do not mlock) # requires searchd to be run from root mlock = 0 # a list of morphology preprocessors to apply # optional, default is empty # # builtin preprocessors are 'none', 'stem_en', 'stem_ru', 'stem_enru', # 'soundex', and 'metaphone'; additional preprocessors available from # libstemmer are 'libstemmer_XXX', where XXX is algorithm code # (see libstemmer_c/libstemmer/modules.txt) # # morphology = stem_en, stem_ru, soundex # morphology = libstemmer_german # morphology = libstemmer_sv morphology = none # minimum word length at which to enable stemming # optional, default is 1 (stem everything) # # min_stemming_len = 1 path = /root/rearch_dir # stopword files list (space separated) # optional, default is empty # contents are plain text, charset_table and stemming are both applied # # stopwords = /usr/local/coreseek/var/data/stopwords.txt # wordforms file, in "mapfrom > mapto" plain text format # optional, default is empty # # wordforms = /usr/local/coreseek/var/data/wordforms.txt # tokenizing exceptions file # optional, default is empty # # plain text, case sensitive, space insensitive in map-from part # one "Map Several Words => ToASingleOne" entry per line # # exceptions = /usr/local/coreseek/var/data/exceptions.txt # minimum indexed word length # default is 1 (index everything) min_word_len = 1 # charset encoding type # optional, default is 'sbcs' # known types are 'sbcs' (Single Byte CharSet) and 'utf-8' charset_type = zh_cn.utf-8 charset_dictpath = /usr/local/mmseg3/etc/ # charset definition and case folding rules "table" # optional, default value depends on charset_type # # defaults are configured to include English and Russian characters only # you need to change the table to include additional ones # this behavior MAY change in future versions # # 'sbcs' default value is # charset_table = 0..9, A..Z->a..z, _, a..z, U+A8->U+B8, U+B8, U+C0..U+DF->U+E0..U+FF, U+E0..U+FF # # 'utf-8' default value is #charset_table = 0..9, A..Z->a..z, _, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F # ignored characters list # optional, default value is empty # # ignore_chars = U+00AD # minimum word prefix length to index # optional, default is 0 (do not index prefixes) # # min_prefix_len = 0 # minimum word infix length to index # optional, default is 0 (do not index infixes) # # min_infix_len = 0 # list of fields to limit prefix/infix indexing to # optional, default value is empty (index all fields in prefix/infix mode) # # prefix_fields = filename # infix_fields = url, domain # enable star-syntax (wildcards) when searching prefix/infix indexes # search-time only, does not affect indexing, can be 0 or 1 # optional, default is 0 (do not use wildcard syntax) # # enable_star = 1 # expand keywords with exact forms and/or stars when searching fit indexes # search-time only, does not affect indexing, can be 0 or 1 # optional, default is 0 (do not expand keywords) # # expand_keywords = 1 # n-gram length to index, for CJK indexing # only supports 0 and 1 for now, other lengths to be implemented # optional, default is 0 (disable n-grams) # ngram_len = 0 # n-gram characters list, for CJK indexing # optional, default is empty # # ngram_chars = U+3000..U+2FA1F # phrase boundary characters list # optional, default is empty # # phrase_boundary = ., ?, !, U+2026 # horizontal ellipsis # phrase boundary word position increment # optional, default is 0 # # phrase_boundary_step = 100 # blended characters list # blended chars are indexed both as separators and valid 
characters # for instance, AT&T will results in 3 tokens ("at", "t", and "at&t") # optional, default is empty # # blend_chars = +, &, U+23 # blended token indexing mode # a comma separated list of blended token indexing variants # known variants are trim_none, trim_head, trim_tail, trim_both, skip_pure # optional, default is trim_none # # blend_mode = trim_tail, skip_pure # whether to strip HTML tags from incoming documents # known values are 0 (do not strip) and 1 (do strip) # optional, default is 0 html_strip = 0 # what HTML attributes to index if stripping HTML # optional, default is empty (do not index anything) # # html_index_attrs = img=alt,title; a=title; # what HTML elements contents to strip # optional, default is empty (do not strip element contents) # # html_remove_elements = style, script # whether to preopen index data files on startup # optional, default is 0 (do not preopen), searchd-only # # preopen = 1 # whether to keep dictionary (.spi) on disk, or cache it in RAM # optional, default is 0 (cache in RAM), searchd-only # # ondisk_dict = 1 # whether to enable in-place inversion (2x less disk, 90-95% speed) # optional, default is 0 (use separate temporary files), indexer-only # # inplace_enable = 1 # in-place fine-tuning options # optional, defaults are listed below # # inplace_hit_gap = 0 # preallocated hitlist gap size # inplace_docinfo_gap = 0 # preallocated docinfo gap size # inplace_reloc_factor = 0.1 # relocation buffer size within arena # inplace_write_factor = 0.1 # write buffer size within arena # whether to index original keywords along with stemmed versions # enables "=exactform" operator to work # optional, default is 0 # # index_exact_words = 1 # position increment on overshort (less that min_word_len) words # optional, allowed values are 0 and 1, default is 1 # # overshort_step = 1 # position increment on stopword # optional, allowed values are 0 and 1, default is 1 # # stopword_step = 1 # hitless words list # positions for these keywords will not be stored in the index # optional, allowed values are 'all', or a list file name # # hitless_words = all # hitless_words = hitless.txt # detect and index sentence and paragraph boundaries # required for the SENTENCE and PARAGRAPH operators to work # optional, allowed values are 0 and 1, default is 0 # # index_sp = 1 # index zones, delimited by HTML/XML tags # a comma separated list of tags and wildcards # required for the ZONE operator to work # optional, default is empty string (do not index zones) # # index_zones = title, h*, th } # inherited index example # # all the parameters are copied from the parent index, # and may then be overridden in this index definition #index test1stemmed : test1 #{ # path = /usr/local/coreseek/var/data/test1stemmed # morphology = stem_en #} # distributed index example # # this is a virtual index which can NOT be directly indexed, # and only contains references to other local and/or remote indexes #index dist1 #{ # 'distributed' index type MUST be specified # type = distributed # local index to be searched # there can be many local indexes configured # local = test1 # local = test1stemmed # remote agent # multiple remote agents may be specified # syntax for TCP connections is 'hostname:port:index1,[index2[,...]]' # syntax for local UNIX connections is '/path/to/socket:index1,[index2[,...]]' # agent = localhost:9313:remote1 # agent = localhost:9314:remote2,remote3 # agent = /var/run/searchd.sock:remote4 # blackhole remote agent, for debugging/testing # network errors and search results 
will be ignored # # agent_blackhole = testbox:9312:testindex1,testindex2 # remote agent connection timeout, milliseconds # optional, default is 1000 ms, ie. 1 sec # agent_connect_timeout = 1000 # remote agent query timeout, milliseconds # optional, default is 3000 ms, ie. 3 sec # agent_query_timeout = 3000 #} # realtime index example # # you can run INSERT, REPLACE, and DELETE on this index on the fly # using MySQL protocol (see 'listen' directive below) #index rt #{ # 'rt' index type must be specified to use RT index #type = rt # index files path and file name, without extension # mandatory, path must be writable, extensions will be auto-appended # path = /usr/local/coreseek/var/data/rt # RAM chunk size limit # RT index will keep at most this much data in RAM, then flush to disk # optional, default is 32M # # rt_mem_limit = 512M # full-text field declaration # multi-value, mandatory # rt_field = title # rt_field = content # unsigned integer attribute declaration # multi-value (an arbitrary number of attributes is allowed), optional # declares an unsigned 32-bit attribute # rt_attr_uint = gid # RT indexes currently support the following attribute types: # uint, bigint, float, timestamp, string # # rt_attr_bigint = guid # rt_attr_float = gpa # rt_attr_timestamp = ts_added # rt_attr_string = content #} ############################################################################# ## indexer settings ############################################################################# indexer { # memory limit, in bytes, kiloytes (16384K) or megabytes (256M) # optional, default is 32M, max is 2047M, recommended is 256M to 1024M mem_limit = 256M # maximum IO calls per second (for I/O throttling) # optional, default is 0 (unlimited) # # max_iops = 40 # maximum IO call size, bytes (for I/O throttling) # optional, default is 0 (unlimited) # # max_iosize = 1048576 # maximum xmlpipe2 field length, bytes # optional, default is 2M # # max_xmlpipe2_field = 4M # write buffer size, bytes # several (currently up to 4) buffers will be allocated # write buffers are allocated in addition to mem_limit # optional, default is 1M # # write_buffer = 1M # maximum file field adaptive buffer size # optional, default is 8M, minimum is 1M # # max_file_field_buffer = 32M } ############################################################################# ## searchd settings ############################################################################# searchd { # [hostname:]port[:protocol], or /unix/socket/path to listen on # known protocols are 'sphinx' (SphinxAPI) and 'mysql41' (SphinxQL) # # multi-value, multiple listen points are allowed # optional, defaults are 9312:sphinx and 9306:mysql41, as below # # listen = 127.0.0.1 # listen = 192.168.0.1:9312 # listen = 9312 # listen = /var/run/searchd.sock listen = 9312 #listen = 9306:mysql41 # log file, searchd run info is logged here # optional, default is 'searchd.log' log = /usr/local/coreseek/var/log/searchd.log # query log file, all search queries are logged here # optional, default is empty (do not log queries) query_log = /usr/local/coreseek/var/log/query.log # client read timeout, seconds # optional, default is 5 read_timeout = 5 # request timeout, seconds # optional, default is 5 minutes client_timeout = 300 # maximum amount of children to fork (concurrent searches to run) # optional, default is 0 (unlimited) max_children = 30 # PID file, searchd process ID file name # mandatory pid_file = /usr/local/coreseek/var/log/searchd.pid # max amount of matches the daemon ever keeps in 
RAM, per-index # WARNING, THERE'S ALSO PER-QUERY LIMIT, SEE SetLimits() API CALL # default is 1000 (just like Google) max_matches = 1000 # seamless rotate, prevents rotate stalls if precaching huge datasets # optional, default is 1 seamless_rotate = 1 # whether to forcibly preopen all indexes on startup # optional, default is 1 (preopen everything) preopen_indexes = 0 # whether to unlink .old index copies on succesful rotation. # optional, default is 1 (do unlink) unlink_old = 1 # attribute updates periodic flush timeout, seconds # updates will be automatically dumped to disk this frequently # optional, default is 0 (disable periodic flush) # # attr_flush_period = 900 # instance-wide ondisk_dict defaults (per-index value take precedence) # optional, default is 0 (precache all dictionaries in RAM) # # ondisk_dict_default = 1 # MVA updates pool size # shared between all instances of searchd, disables attr flushes! # optional, default size is 1M mva_updates_pool = 1M # max allowed network packet size # limits both query packets from clients, and responses from agents # optional, default size is 8M max_packet_size = 8M # crash log path # searchd will (try to) log crashed query to 'crash_log_path.PID' file # optional, default is empty (do not create crash logs) # # crash_log_path = /usr/local/coreseek/var/log/crash # max allowed per-query filter count # optional, default is 256 max_filters = 256 # max allowed per-filter values count # optional, default is 4096 max_filter_values = 4096 # socket listen queue length # optional, default is 5 # # listen_backlog = 5 # per-keyword read buffer size # optional, default is 256K # # read_buffer = 256K # unhinted read size (currently used when reading hits) # optional, default is 32K # # read_unhinted = 32K # max allowed per-batch query count (aka multi-query count) # optional, default is 32 max_batch_queries = 32 # max common subtree document cache size, per-query # optional, default is 0 (disable subtree optimization) # # subtree_docs_cache = 4M # max common subtree hit cache size, per-query # optional, default is 0 (disable subtree optimization) # # subtree_hits_cache = 8M # multi-processing mode (MPM) # known values are none, fork, prefork, and threads # optional, default is fork # workers = threads # for RT to work # max threads to create for searching local parts of a distributed index # optional, default is 0, which means disable multi-threaded searching # should work with all MPMs (ie. 
does NOT require workers=threads) # # dist_threads = 4 # binlog files path; use empty string to disable binlog # optional, default is build-time configured data directory # # binlog_path = # disable logging # binlog_path = /usr/local/coreseek/var/data # binlog.001 etc will be created there # binlog flush/sync mode # 0 means flush and sync every second # 1 means flush and sync every transaction # 2 means flush every transaction, sync every second # optional, default is 2 # # binlog_flush = 2 # binlog per-file size limit # optional, default is 128M, 0 means no limit # # binlog_max_log_size = 256M # per-thread stack size, only affects workers=threads mode # optional, default is 64K # # thread_stack = 128K # per-keyword expansion limit (for dict=keywords prefix searches) # optional, default is 0 (no limit) # # expansion_limit = 1000 # RT RAM chunks flush period # optional, default is 0 (no periodic flush) # # rt_flush_period = 900 # query log file format # optional, known values are plain and sphinxql, default is plain # # query_log_format = sphinxql # version string returned to MySQL network protocol clients # optional, default is empty (use Sphinx version) # # mysql_version_string = 5.0.37 # trusted plugin directory # optional, default is empty (disable UDFs) # # plugin_dir = /usr/local/sphinx/lib # default server-wide collation # optional, default is libc_ci # # collation_server = utf8_general_ci # server-wide locale for libc based collations # optional, default is C # # collation_libc_locale = ru_RU.UTF-8 # threaded server watchdog (only used in workers=threads mode) # optional, values are 0 and 1, default is 1 (watchdog on) # # watchdog = 1 # SphinxQL compatibility mode (legacy columns and their names) # optional, default is 0 (SQL compliant syntax and result sets) # # compat_sphinxql_magics = 1 } # --eof--
```

Please help; I can't figure out where the mistake is, and Chinese searches return nothing.
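Not an authoritative diagnosis, but two things stand out: the index is tiny (186 bytes for 5 docs), and the query does get segmented into '水火' / '不容' yet matches 0 documents, which usually means the Chinese text never made it into the index. A couple of quick checks, with paths taken from the config above (uni.lib is the compiled dictionary the mmseg tokenizer expects):

```
# The mmseg tokenizer needs its compiled dictionary under charset_dictpath
ls -l /usr/local/mmseg3/etc/uni.lib

# Also confirm the MySQL table and connection really are UTF-8, then
# rebuild and query with the exact same config file
/usr/local/coreseek/bin/indexer -c etc/sphinx.conf --all
/usr/local/coreseek/bin/search -c etc/sphinx.conf '水火不容'
```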
Tomcat startup error: Can't initialize wildcard name
This is FreeBSD running in a KVM virtual machine. Java installed successfully, but starting Tomcat (version 6.0.43, with no projects deployed) reports the following error:
```
Mar 20, 2015 10:04:50 AM org.apache.catalina.startup.Bootstrap initClassLoaders
SEVERE: Class loader creation threw exception
java.lang.Error: Can't initialize wildcard name
    at javax.management.ObjectName.<clinit>(ObjectName.java:1940)
    at javax.management.MBeanServerDelegate.<clinit>(MBeanServerDelegate.java:204)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.newMBeanServerDelegate(JmxMBeanServer.java:1326)
    at javax.management.MBeanServerBuilder.newMBeanServerDelegate(MBeanServerBuilder.java:49)
    at javax.management.MBeanServerFactory.newMBeanServer(MBeanServerFactory.java:302)
    at javax.management.MBeanServerFactory.createMBeanServer(MBeanServerFactory.java:213)
    at javax.management.MBeanServerFactory.createMBeanServer(MBeanServerFactory.java:174)
    at org.apache.catalina.startup.Bootstrap.createClassLoader(Bootstrap.java:182)
    at org.apache.catalina.startup.Bootstrap.initClassLoaders(Bootstrap.java:91)
    at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:206)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:390)
Caused by: javax.management.MalformedObjectNameException: Key properties cannot be empty
    at javax.management.ObjectName.construct(ObjectName.java:467)
    at javax.management.ObjectName.<init>(ObjectName.java:1394)
    at javax.management.ObjectName.<clinit>(ObjectName.java:1938)
    ... 10 more
```
A question about the TCP backlog
TCP/IP Illustrated, Volume 1: The Protocols (2nd edition) says:
```
Linux% sock -s -v -q1 -O30000 6666
```
"The -q1 option sets the backlog of the listening endpoint to 1. The -O30000 option causes the program to sleep for 30,000s (basically a long time, about 8 hours) before accepting any client connections. If we now try to connect to this server continually, the first four connections are completed immediately. After that, two connections are completed every 9s. Other operating systems vary considerably in how this is handled. In Solaris 8 and FreeBSD 4.7, for example, two connections are handled immediately and the third times out; subsequent connections time out as well."
What I don't understand: the command sets backlog=1, so why are "the first four connections completed immediately... two connections completed every 9s", and why do Solaris 8 and FreeBSD 4.7 handle "two connections immediately and the third times out"? A detailed explanation would be much appreciated, thanks!
ldapadd: not compiled with SASL support
System: FreeBSD. While adding the memberOf attribute to OpenLDAP, I run:
```
ldapadd -Q -Y EXTERNAL -H ldapi:/// -f memberofconfig.ldif
```
and it reports: `ldapadd: not compiled with SASL support`. The content of my memberof_config.ldif is as follows:
```
dn: cn=module,cn=config
cn: module
objectClass: olcModuleList
olcModuleLoad: memberof
olcModulePath: /usr/local/libexec/openldap

dn: olcOverlay={0}memberof,olcDatabase={1}mdb,cn=config
objectClass: olcConfig
objectClass: olcMemberOf
objectClass: olcOverlayConfig
objectClass: top
olcOverlay: memberof
olcMemberOfDangling: ignore
olcMemberOfRefInt: TRUE
olcMemberOfGroupOC: groupOfNames
olcMemberOfMemberAD: member
olcMemberOfMemberOfAD: memberOf
```
What is the problem and how do I solve it? Help!!
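A hedged explanation: `-Y EXTERNAL` selects a SASL mechanism, so a client built without Cyrus SASL cannot use it at all. On FreeBSD the OpenLDAP port has a SASL knob that is off in some configurations; rebuilding with it enabled is one way out (port origin shown for the 2.4 series, so adjust to the version actually installed):

```
# Rebuild OpenLDAP from ports with the SASL option ticked
cd /usr/ports/net/openldap24-server
make config          # enable the SASL option in the dialog
make deinstall reinstall clean
```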
Porting the RealVNC 3.3.7 open-source code to an embedded target
I downloaded the RealVNC 3.3.7 source to cross-compile it. After unpacking, the README describes a two-step build:
```
To build this distribution you need a C++ compiler as well as a C compiler.
You also need a reasonably recent version of the X window system installed.
This come as standard with most unix machines. If you don't have it
installed, see http://www.x.org or http://www.xfree86.org

To build everything but Xvnc, do:

% ./configure
% make

This should build first some libraries - zlib, rdr and rfb - then
vncconnect, vncpasswd and vncviewer. If you already have zlib installed
on your system you can run "./configure --with-installed-zlib" if you
prefer (this is strongly advised on FreeBSD, since we've been told there
are problems otherwise).

Xvnc differs from the other programs in that it is built inside a
cut-down version of the X build tree. This is based around the XFree86
3.3.2 "server only" distribution, which in turn is based on the X11R6.3
distribution from the X consortium. To build Xvnc, do:

% cd Xvnc
% make World
```
For step one I can pass configure options to cross-compile, and that builds fine. For step two, the Xvnc directory only contains Makefile-related files and no configure script, so there is nothing to configure for cross-compilation. I considered editing the Makefile, but every subdirectory has its own Makefile and I can't edit them all! If I just run make World it compiles with the native gcc rather than the cross-compiler. How do I cross-compile a tree that has no configure script and Makefiles in both the top directory and every subdirectory? Many thanks!
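Untested here, so treat this as a sketch: imake-based X trees like this one generally honor compiler variables given on the top-level make command line, because command-line variables propagate through the recursive $(MAKE) calls, which avoids editing every generated Makefile. Something along these lines, with your toolchain prefix substituted (the compiler names below are hypothetical):

```
# Hypothetical cross-compiler names; substitute your actual toolchain
cd Xvnc
make World CC=arm-linux-gcc CXX=arm-linux-g++
```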
Is APUE (3rd edition) really worth getting?
1. APUE3 came out in 2013. The original text says: "The four platforms used to test the examples in the book include FreeBSD 8.0, Linux 3.2.0, Mac OS X 10.6.8 and Solaris 10." Linux is already at 4.4, so the book's content is not the latest and many new technologies are missing. Is it still worth studying? 2. Nowadays everyone learns Linux and the job market wants Linux; few people study Unix as such. Is it appropriate to learn Linux from a book about Unix? 3. If I digest APUE3 thoroughly, what level can I reach? Is it really the "bible" some people online claim it to be, from which countless top programmers benefited for life? These are purely my own doubts; feel free to comment or enlighten me!
Getting a 400 error when accessing lighttpd's index…
sockstat shows port 2048 is being listened on, but GETting index.html always fails. Only accesslog and access are enabled in modules.conf. Help!!!
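Not a definitive fix, but a hedged first step is to see exactly what the server receives and rejects. curl shows the raw exchange, and lighttpd has a standard debug directive for request handling:

```
# Watch the raw request/response; a 400 usually means lighttpd dislikes
# the request line or the Host header
curl -v http://127.0.0.1:2048/index.html

# In lighttpd.conf, log how each request is routed, then retry and read the log:
# debug.log-request-handling = "enable"
```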
Execution failed for task 'packageDebug'.
```
Warning:Ignoring Android API artifact com.google.android:android:2.1_r1 for debug
Error:Execution failed for task ':xxx:packageDebug'.
> android/util/Log
Information:BUILD FAILED
```
Launching or debugging directly from Android Studio produces the error above, yet the app installs to the phone fine via the gradle command line. Has any expert run into this error? Please let me know, thanks. The main module's build.gradle:
```
//repositories {
//    mavenLocal()
//    mavenCentral()
//}
android {
//    useLibrary 'org.apache.http.legacy'
    buildTypes {
        release {
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
        debug {
//            multiDexEnabled true
        }
    }
    packagingOptions {
        exclude 'META-INF/DEPENDENCIES'
        exclude 'META-INF/NOTICE'
        exclude 'META-INF/LICENSE'
        exclude 'META-INF/LICENSE.txt'
        exclude 'META-INF/NOTICE.txt'
    }
}
configurations {
    all {
        exclude module: 'httpclient'
        exclude module: 'commons-logging'
    }
}
dependencies {
    compile 'com.google.protobuf:protobuf-java:2.5.0'
    compile 'com.android.support:appcompat-v7:19.+'
    compile 'com.nineoldandroids:library:2.4.+'
    compile 'com.google.zxing:core:3.1.0@jar'
    compile 'com.android.support:multidex:1.0.0'
    compile('com.github.tony19:logback-android-classic:1.0.10-2@jar') {
        // Work around for gradle 0.9 and 0.10
        transitive = true
    }
//    compile project(':bitherj')
    compile (project(':bitherj')){
        exclude group: 'org.json', module: 'json'
        exclude group: 'com.google.android'
    }
//    compile ('com.github.nkzawa:socket.io-client:0.4.1'){
//        exclude group: 'org.json', module: 'json'
//    }
    compile project(':wheel')
    compile project(':android-charts')
    // add logging dependency
//    compile 'org.slf4j:slf4j-api:1.7.25'
}
```
The whole project's build.gradle:
```
apply plugin: 'com.android.library'
buildscript {
    repositories {
        jcenter()
        mavenCentral()
//        maven {
//            url 'https://maven.google.com/'
//            name 'Google'
//        }
    }
    dependencies {
//        classpath 'com.android.tools.build:gradle:2.3.2'
//        classpath 'com.android.tools.build:gradle:3.0.0'
        classpath 'com.android.tools.build:gradle:2.2.2'
    }
}
project(':bither-android') { apply plugin: 'android' }
project(':wheel') { apply plugin: 'com.android.library' }
project(':android-charts') { apply plugin: 'com.android.library' }
project(':bitherj') { apply plugin: 'java' }
subprojects {
    android {
        compileSdkVersion 22
        buildToolsVersion '26.0.2'
        defaultConfig {
//            minSdkVersion 9
            minSdkVersion 15
            targetSdkVersion 22
            multiDexEnabled true
        }
        dexOptions {
            preDexLibraries = false
            additionalParameters = ['--core-library']
        }
        sourceSets {
            main {
                manifest.srcFile 'AndroidManifest.xml'
                java.srcDirs = ['src']
                res.srcDirs = ['res']
                assets.srcDirs = ['assets']
                jniLibs.srcDirs = ['native-libs']
                jni.srcDirs = []
            }
            // Move the build types to build-types/<type>
            // For instance, build-types/debug/java, build-types/debug/AndroidManifest.xml, ...
            // This moves them out of them default location under src/<type>/... which would
            // conflict with src/ being used by the main source set.
            // Adding new build types or product flavors should be accompanied
            // by a similar customization.
            debug.setRoot('build-types/debug')
            release.setRoot('build-types/release')
        }
        packagingOptions {
            exclude 'META-INF/NOTICE.txt'
            exclude 'META-INF/LICENSE.txt'
            exclude 'META-INF/NOTICE'
            exclude 'META-INF/LICENSE'
            exclude 'META-INF/DEPENDENCIES'
            exclude 'org/apache/http/entity/mime/version.properties'
            exclude 'org/apache/http/version.properties'
            exclude 'lib/x86_64/darwin/libscrypt.dylib'
            exclude 'lib/x86_64/freebsd/libscrypt.so'
            exclude 'lib/x86_64/linux/libscrypt.so'
        }
        lintOptions {
            abortOnError false
            disable "ResourceType"
        }
    }
}
//repositories {
//    maven {
//        url 'https://maven.google.com/'
//        name 'Google'
//    }
//}
allprojects {
    repositories {
        jcenter()
        maven { url 'https://maven.google.com' }
    }
}
```
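A hedged reading of the failure: `> android/util/Log` during packageDebug usually means some JAR on the compile classpath bundles Android framework classes, and the warning about com.google.android:android:2.1_r1 points the same way. Printing the dependency tree shows which library drags it in, and it can then be excluded globally:

```
# See where com.google.android:android comes from (module name taken from the error)
./gradlew :xxx:dependencies --configuration compile

# If it appears, exclude it for every configuration in build.gradle:
# configurations.all { exclude group: 'com.google.android', module: 'android' }
```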
Display problem after substituting with the sed command
When capturing with `tcpdump -n icmp | sed "s/ICMP/TCP/g"` while another machine pings, the console doesn't display packets immediately the way plain tcpdump does; instead it waits a while and then prints that period's packets all at once. Why is that? I've looked up many sed usage guides and none mention this. Waiting online, please help! With `ping 10.8.2.118 | sed "s/icmp/tcp/g"` the output is timely, just like a normal ping, and the substitution works.
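This is stdio buffering rather than sed: when tcpdump's stdout goes to a pipe instead of a terminal it becomes block-buffered, so output arrives in chunks. tcpdump documents a flag for exactly this pipe-into-another-program case:

```
# -l line-buffers tcpdump's output, so sed sees each packet as it is captured
tcpdump -l -n icmp | sed "s/ICMP/TCP/g"
```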