Socket connection: getting "String index out of range: -1", can anyone help?

Why does the server side throw "String index out of range: -1" as soon as the client sends a message?
Server program:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class teste implements Runnable
{
    public String latitude, longitude;
    public int i = 0;

    public void run()
    {
        try
        {
            // create the ServerSocket
            ServerSocket serverSocket = new ServerSocket(8080);
            while (true)
            {
                // accept a client connection
                Socket client = serverSocket.accept();
                System.out.println("accept");
                try
                {
                    // read one message from the client
                    BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                    String str = in.readLine();
                    System.out.println(" " + str);
                    in.close();
                }
                catch (Exception e)
                {
                    System.out.println(e.getMessage());
                    e.printStackTrace();
                }
                finally
                {
                    // close only this client's connection; closing serverSocket
                    // inside the loop would shut the server down after the
                    // first client
                    client.close();
                }
            }
        }
        catch (Exception e)
        {
            System.out.println(e.getMessage());
        }
    }

    // main: start the server thread
    public static void main(String[] args)
    {
        Thread desktopServerThread = new Thread(new teste());
        desktopServerThread.start();
    }
}
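Note that this server reads a single line and then closes the connection without ever sending anything back, while the client below blocks in bufferedReader.readLine() waiting for a reply. A minimal sketch of a server loop that answers each line (the class name EchoServerSketch and the echo format are illustrative, not part of the original post):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoServerSketch {
    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(8080);
        while (true) {
            Socket client = serverSocket.accept();
            // try-with-resources closes both streams, which also closes the socket
            try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(new OutputStreamWriter(client.getOutputStream()), true)) {
                String line;
                // readLine() returns null once the client disconnects
                while ((line = in.readLine()) != null) {
                    System.out.println("received: " + line);
                    out.println("echo: " + line); // reply, so the client's readLine() gets a line back
                }
            }
        }
    }
}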
Client program:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class SocketClient {
    public static void main(String[] args) {
        try {
            Socket socket = new Socket("192.168.1.238", 8080);
            socket.setSoTimeout(60000);

            PrintWriter printWriter = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            BufferedReader sysBuff = new BufferedReader(new InputStreamReader(System.in));

            String result = "";
            while (result != null && result.indexOf("bye") == -1) {
                // send one line typed on the console; the PrintWriter auto-flushes
                printWriter.println(sysBuff.readLine());

                // readLine() returns null once the server closes the connection,
                // so the loop condition above guards against it
                result = bufferedReader.readLine();
                System.out.println("Server say : " + result);
            }

            printWriter.close();
            bufferedReader.close();
            socket.close();
        } catch (Exception e) {
            System.out.println("Exception:" + e);
        }
    }
}
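One detail worth spelling out when pairing these two programs: the server above closes the connection after its single readLine(), so the client's bufferedReader.readLine() returns null. Calling result.indexOf("bye") on that null without a guard raises a NullPointerException, not "String index out of range: -1", which is a hint that the reported exception originates in code not shown in the post. A minimal illustration:

public class NullReadDemo {
    public static void main(String[] args) {
        String result = null; // what readLine() returns after the peer closes the socket
        // throws NullPointerException, not StringIndexOutOfBoundsException
        System.out.println(result.indexOf("bye"));
    }
}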

2 answers

com.ibatis.common.jdbc.exception.NestedSQLException:   
--- The error occurred in com/company/basicdata/app/module/domain/sqlmapping/price_center_SqlMap.xml.  
--- The error occurred while preparing......
The answer is here: String index out of range: -1
----------------------Hello, human. I am little C, the Q&A bot from CSDN Planet. The answer above is based on my understanding of the question; if it solved your problem, please accept it.
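For the record, the quoted iBatis error is unrelated to sockets; only the exception message matches. On Java 8, "String index out of range: -1" is the message of a StringIndexOutOfBoundsException thrown when substring() or charAt() is handed the index -1, which is exactly what indexOf() returns when the search string is not found. The server code shown never calls substring, but the unused latitude/longitude fields suggest the asker's real code parses the received line, e.g. splitting it at a delimiter. A minimal reproduction (illustrative, not the asker's actual parsing code):

public class IndexOutOfRangeDemo {
    public static void main(String[] args) {
        String str = "hello";       // a message with no comma in it
        int pos = str.indexOf(","); // indexOf returns -1 when nothing is found
        // substring(0, -1) throws StringIndexOutOfBoundsException:
        // "String index out of range: -1"
        String latitude = str.substring(0, pos);
        System.out.println(latitude);
    }
}

The fix is to check the result of indexOf() before using it as a substring index, and to decide what the server should do when the expected delimiter is missing from the message.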
