C#: how to select a discontiguous region with get_Range

Microsoft.Office.Interop.Excel.Range rang = worksheet.get_Range("A5:A58, H5:J58",Type.Missing);
That is the line in question: when it runs, only A5:A58 ends up selected, and H5 through J58 does not.

1 Answer

Record a macro in Excel and look at the code it generates. Instead of Range, try Selection.
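
For reference, the comma-separated address itself is valid: get_Range("A5:A58,H5:J58", Type.Missing) comes back as a single Range with two Areas. What usually trips people up is that several Range members (Value2, for example) typically only look at the first area, so the second block seems to be missing. A minimal sketch, assuming worksheet is a live worksheet and using Excel = Microsoft.Office.Interop.Excel;:

```
// Select a discontiguous region and enumerate its areas.
Excel.Range multi = worksheet.get_Range("A5:A58,H5:J58", Type.Missing);
multi.Select();   // selects both blocks at once

// Members such as Value2 usually see only the first area, so walk Areas explicitly:
foreach (Excel.Range area in multi.Areas)
{
    Console.WriteLine(area.get_Address(Type.Missing, Type.Missing,
        Excel.XlReferenceStyle.xlA1, Type.Missing, Type.Missing));
}

// Equivalent construction with Union (C# 4 optional COM parameters cover Arg3..Arg30):
Excel.Range union = worksheet.Application.Union(
    worksheet.get_Range("A5:A58", Type.Missing),
    worksheet.get_Range("H5:J58", Type.Missing));
```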

Other related recommendations
C# Excel.Range: how do I select discontiguous cells? Thanks
```
Excel.Range temRange;
temRange = (Excel.Range)Worksheet.Range["C1", "C2"];
temRange.Borders.get_Item(XlBordersIndex.xlEdgeBottom).LineStyle = XlLineStyle.xlContinuous;
```
Can a Range built like this select several discontiguous cells?
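
It can, if the Range is built as a union; a small sketch, assuming Worksheet is the same live worksheet object as in the snippet above:

```
// Combine non-adjacent cells into one Range and format them together.
Excel.Range c = (Excel.Range)Worksheet.Range["C1", "C2"];
Excel.Range e = (Excel.Range)Worksheet.Range["E5", "E6"];
Excel.Range temRange = c.Application.Union(c, e);   // one Range, two areas

temRange.Borders.get_Item(XlBordersIndex.xlEdgeBottom).LineStyle =
    XlLineStyle.xlContinuous;   // the border is applied in each area
```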
Coreseek: Chinese searches return no results, English words work fine
```
[root@abc testpack]# /usr/local/coreseek/bin/indexer -c etc/sphinx.conf --all
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
indexing index 'test1'...
WARNING: Attribute count is 0: switching to none docinfo
collected 5 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 5 docs, 186 bytes
total 0.064 sec, 2870 bytes/sec, 77.16 docs/sec
total 2 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 6 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
```
Searching Chinese returns no results:
```
[root@abc testpack]# /usr/local/coreseek/bin/search -c etc/sphinx.conf '水火不容'
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
index 'test1': query '水火不容 ': returned 0 matches of 0 total in 0.000 sec

words:
1. '水火': 0 documents, 0 hits
2. '不容': 0 documents, 0 hits
```
Searching English does return results:
```
[root@abc testpack]# /usr/local/coreseek/bin/search -c etc/sphinx.conf 'apple'
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
index 'test1': query 'apple ': returned 1 matches of 1 total in 0.001 sec

displaying matches:
1. document=5, weight=2780
	id=5
	title=apple
	content=apple,banana

words:
1. 'apple': 1 documents, 2 hits
```
This is the database:
```
mysql> select * from tt;
+----+--------------+-----------------+
| id | title        | content         |
+----+--------------+-----------------+
|  1 | 西水         | 水水            |
|  2 | 水火不容     | 水火不容        |
|  3 | 水啊啊       | 啊水货          |
|  4 | 东南西水     | 啊西西哈哈      |
|  5 | apple        | apple,banana    |
+----+--------------+-----------------+
5 rows in set (0.00 sec)
```
Below are the effective settings from that config file (sphinx.conf):
```
source src1
{
	type			= mysql

	sql_host		= localhost
	sql_user		= root
	sql_pass		= 123456
	sql_db			= haha
	sql_port		= 3306	# optional, default is 3306

	sql_query_pre		= SET NAMES utf8
	sql_query_pre		= SET SESSION query_cache_type=OFF

	sql_query		= \
		SELECT id, title, content FROM tt

	#sql_attr_uint		= group_id
	#sql_attr_timestamp	= date_added

	sql_ranged_throttle	= 0
	sql_query_info		= SELECT * FROM tt WHERE id=$id
}

source src1throttled : src1
{
	sql_ranged_throttle	= 100
}

index test1
{
	source			= src1
	#path			= /usr/local/coreseek/var/data/test1
	docinfo			= extern
	mlock			= 0
	morphology		= none
	path			= /root/rearch_dir
	min_word_len		= 1
	charset_type		= zh_cn.utf-8
	charset_dictpath	= /usr/local/mmseg3/etc/
	#charset_table		= 0..9, A..Z->a..z, _, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F
	# ngram_len		= 0
	html_strip		= 0
}

indexer
{
	mem_limit		= 256M
}

searchd
{
	listen			= 9312
	#listen			= 9306:mysql41
	log			= /usr/local/coreseek/var/log/searchd.log
	query_log		= /usr/local/coreseek/var/log/query.log
	read_timeout		= 5
	client_timeout		= 300
	max_children		= 30
	pid_file		= /usr/local/coreseek/var/log/searchd.pid
	max_matches		= 1000
	seamless_rotate		= 1
	preopen_indexes		= 0
	unlink_old		= 1
	mva_updates_pool	= 1M
	max_packet_size		= 8M
	max_filters		= 256
	max_filter_values	= 4096
	max_batch_queries	= 32
	# workers		= threads # for RT to work
}
```
Please help, I can't tell where the mistake is; Chinese searches just return nothing.
C#: setting an Excel cell color
```
Microsoft.Office.Interop.Excel.Range titleRange = worksheet.get_Range(worksheet.Cells[1, 1], worksheet.Cells[2, 2]);
titleRange.Interior.Color = Color.FromArgb(220, 20, 60);   // set the color
```
This throws an exception along the lines of: a parameter or return value of type System.Drawing.Color cannot be exposed in a method call through IDispatch.
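
Interior.Color crosses the COM boundary as an OLE color (an int), not as a System.Drawing.Color struct, which is what the IDispatch complaint is about. A minimal sketch of the usual conversion, assuming using System.Drawing;:

```
// ColorTranslator.ToOle converts a .NET Color to the int OLE color Excel expects.
titleRange.Interior.Color = ColorTranslator.ToOle(Color.FromArgb(220, 20, 60));
```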
C#: setting Excel data validation
```
Range r_validate = sheet.get_Range("A2");
r_validate.Validation.Add(XlDVType.xlValidateList, XlDVAlertStyle.xlValidAlertWarning, Type.Missing, validate, Type.Missing);
```
validate is a string of the form "1,2,3", but when validate gets too long the call fails. Why is that?
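
The inline list passed as Formula1 is capped at 255 characters in Excel. The usual workaround is to write the items into a spare range (often on a hidden sheet) and point the validation at that range instead; a sketch using a hypothetical spare column Z:

```
// Park the long list in column Z, then validate against the range.
string[] items = validate.Split(',');
for (int i = 0; i < items.Length; i++)
    sheet.Cells[i + 1, 26] = items[i];   // column Z

Range r_validate = sheet.get_Range("A2");
r_validate.Validation.Delete();          // clear any previous rule first
r_validate.Validation.Add(XlDVType.xlValidateList, XlDVAlertStyle.xlValidAlertWarning,
    Type.Missing, "=$Z$1:$Z$" + items.Length, Type.Missing);
```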
C# reading Excel throws an exception (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT))
C# Excel automation fails with: Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT)
```
private void OpenExcel(string strFileName)
{
    object missing = System.Reflection.Missing.Value;
    Application excel = new Application();
    if (excel == null)
    {
        Response.Write("<script>alert('Can't access excel')</script>");
    }
    else
    {
        excel.Visible = false;
        excel.UserControl = true;
        // open the Excel file read-only
        Workbook wb = excel.Application.Workbooks.Open(strFileName, missing, true, missing, missing,
            missing, missing, missing, missing, true, missing, missing, missing, missing, missing);
        // get the first worksheet
        Worksheet ws = (Worksheet)wb.Worksheets.get_Item(1);
        // total number of rows (including the header row)
        int rowsint = ws.UsedRange.Cells.Rows.Count;   // row count
        //int columnsint = mySheet.UsedRange.Cells.Columns.Count;   // column count
        // data range (excluding the header row)
        Range rng1 = ws.Cells.get_Range("A1", "A" + rowsint);   // item
        Range rng2 = ws.Cells.get_Range("B1", "B" + rowsint);   // Customer
        object[,] arryItem = (object[,])rng1.Value2;   // get range's value
        object[,] arryCus = (object[,])rng2.Value2;
        // copy the values into arrays
        string[] arryA = new string[rowsint];
        string[] arryB = new string[rowsint];
        for (int i = 1; i <= rowsint; i++)
        {
            // Item_Code column
            arryA[i - 1] = arryItem[i, 1].ToString();
            // Customer_Name column
            arryB[i - 1] = arryCus[i, 1].ToString();
        }
        //Response.Write(arry[0, 0] + " / " + arry[0, 1] + "#" + arry[rowsint - 2, 0] + " / " + arry[rowsint - 2, 1]);
    }
    excel.Quit();
    excel = null;
    Process[] procs = Process.GetProcessesByName("excel");
    foreach (Process pro in procs)
    {
        pro.Kill();   // no better way found, just kill the process
    }
    GC.Collect();
}
```
It runs fine when excel.Visible = true; please help. Having an Excel window pop up on every run is not acceptable.
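
One common culprit, hedged: Process.GetProcessesByName("excel") matches every running Excel instance, so in a web app two concurrent requests can kill each other's COM server mid-call, and the next call on the dead server surfaces as RPC_E_SERVERFAULT. A sketch of deterministic cleanup that lets the invisible instance exit on its own, reusing the variables from the code above and assuming System.Runtime.InteropServices:

```
// Release every COM reference that was created, then let Excel quit
// by itself instead of killing the process.
wb.Close(false, missing, missing);
Marshal.ReleaseComObject(rng1);
Marshal.ReleaseComObject(rng2);
Marshal.ReleaseComObject(ws);
Marshal.ReleaseComObject(wb);
excel.Quit();
Marshal.ReleaseComObject(excel);
GC.Collect();
GC.WaitForPendingFinalizers();
```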
C#: when generating a Word document, how do I insert a picture after specified text?
Our school requires a small C# program that drives Word: after adding text at a specified label, I want to insert a picture in the middle of the text, but I don't know how. Hoping the experts here can help. The program is below; this is all I could find, and it does not produce the result I want.
```
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Microsoft.Office.Interop.Word;
using System.IO;
using System.Collections.Specialized;

namespace WindowsFormsApplication23
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        private void button1_Click(object sender, EventArgs e)
        {
            Object Nothing = System.Reflection.Missing.Value;
            Directory.CreateDirectory("C:\\Users\\dell3\\Desktop");   // create the directory for the file
            string name = "7.doc";
            object filename = "C:\\Users\\dell3\\Desktop\\" + name;   // save path

            // create the Word document
            _Application WordApp = new ApplicationClass();
            Microsoft.Office.Interop.Word.Document WordDoc = WordApp.Documents.Add();
            //WordDoc.Paragraphs.Last.Alignment = WdParagraphAlignment.wdAlignParagraphCenter;   // set the alignment

            string text1 = richTextBox1.Text;
            WordDoc.Paragraphs.Last.Alignment = WdParagraphAlignment.wdAlignParagraphLeft;
            WordDoc.Paragraphs.Last.Range.Bold = 2;
            WordDoc.Paragraphs.Last.Range.Font.Size = 20;   // font size
            WordDoc.Paragraphs.Last.Range.Text = "实验目的\n";
            WordApp.Selection.TypeParagraph();   // insert a paragraph
            WordDoc.Paragraphs.Last.Range.Bold = 0;
            WordDoc.Paragraphs.Last.Range.Font.Size = 13;   // font size
            WordDoc.Paragraphs.Last.Range.Text = text1 + "\n";

            //WordDoc.Paragraphs.Last.Alignment = WdParagraphAlignment.wdAlignParagraphLeft;
            WordDoc.Paragraphs.Last.Range.Bold = 2;   // bold
            WordDoc.Paragraphs.Last.Range.Font.Size = 20;   // font size
            WordDoc.Paragraphs.Last.Range.Text = "实验环境\n";
            WordApp.Selection.TypeParagraph();   // insert a paragraph
            WordDoc.Paragraphs.Last.Range.Bold = 0;
            WordDoc.Paragraphs.Last.Range.Font.Size = 13;   // font size
            WordApp.Selection.ParagraphFormat.LineSpacing = 15f;   // line spacing
            WordDoc.Paragraphs.Last.Range.Text = richTextBox2.Text.ToString() + "\n";

            WordDoc.Paragraphs.Last.Range.Bold = 2;
            WordDoc.Paragraphs.Last.Range.Font.Size = 20;   // font size
            WordDoc.Paragraphs.Last.Range.Text = "实验原理\n";
            WordApp.Selection.TypeParagraph();   // insert a paragraph
            WordDoc.Paragraphs.Last.Range.Font.Size = 13;   // font size
            WordDoc.Paragraphs.Last.Range.Bold = 0;

            //this.richTextBox3.Focus();
            int a = this.richTextBox1.SelectionStart;
            // whether the inserted picture is an external link
            object linkToFile = false;   // default
            // whether the inserted picture is saved with the Word document
            object saveWithDocument = true;
            object range = WordDoc.Paragraphs.Last.Range;
            if (Clipboard.ContainsFileDropList())
            {
                StringCollection sc = Clipboard.GetFileDropList();
                for (int i = 0; i < sc.Count; i++)
                {
                    string fileName = sc[i];
                    richTextBox1.Text = fileName;
                    Image img = Image.FromFile(fileName);
                    Clipboard.Clear();
                    Bitmap bmp = new Bitmap(img);
                    Clipboard.SetImage(bmp);
                    richTextBox3.Paste();
                    // WordDoc.InlineShapes.AddPicture(fileName);
                    //object range = WordDoc.Paragraphs.Last.Range;
                    //Object range = WordDoc.Paragraphs.Last.Range;
                    // whether the inserted picture is an external link
                    // Object linkToFile = false;   // default
                    // whether the inserted picture is saved with the Word document
                    //Object saveWithDocument = true;   // default
                    //object Anchor = this.richTextBox3.SelectionStart;
                    // object bkObj = "bookmark";
                    // string bk;
                    //if (WordApp.ActiveDocument.Bookmarks.Exists(bk) == true)
                    // {
                    //     WordApp.ActiveDocument.Bookmarks.get_Item(ref bkObj).Select();
                    //     object oRng = WordDoc.Bookmarks.get_Item(ref bkObj).Range;
                    //     object Anchor = WordDoc.Application.Selection.Range;
                    //     WordDoc.InlineShapes.AddPicture(fileName, ref linkToFile, ref saveWithDocument, ref oRng);
                    // }
                }
            }
            WordDoc.Paragraphs.Last.Range.Text = richTextBox3.Text.ToString() + "\n";
            //MessageBox.Show("1");
            //object Anchor = WordDoc.Application.Selection.Range;
            //string path = @"C:\Users\dell3\Desktop\新建文件夹 (18)\登陆界面\0.jpg";
            //Clipboard.Clear();
            //Bitmap bmp = new Bitmap(path);
            //Clipboard.SetImage(bmp);
            //richTextBox3.Paste();
            //Clipboard.Clear();
            //WordDoc.InlineShapes.AddPicture(path);
            //G_str_path = string.Format(   // compute the save path
            //    @"{0}{1}", G_FolderBrowserDialog.SelectedPath, + ".doc");
            // WordDoc.Application.ActiveDocument.InlineShapes.AddPicture(richTextBox3.Paste());
            //= richTextBox3.Paste + "\n";
            //WordDoc.Paragraphs.Last.Range = richTextBox3.Focus() + "\n";   // move the focus and wrap
            //object count = 14;
            object WdLine = WdUnits.wdLine;   // move down one line

            WordDoc.Paragraphs.Last.Range.Bold = 2;
            WordDoc.Paragraphs.Last.Range.Font.Size = 20;   // font size
            WordDoc.Paragraphs.Last.Range.Text = "实验内容与要求\n";
            WordApp.Selection.TypeParagraph();   // insert a paragraph
            WordDoc.Paragraphs.Last.Range.Font.Size = 13;   // font size
            WordDoc.Paragraphs.Last.Range.Bold = 0;
            WordDoc.Paragraphs.Last.Range.Text = richTextBox4.Text.ToString() + "\n";

            WordDoc.Paragraphs.Last.Range.Bold = 2;
            WordDoc.Paragraphs.Last.Range.Font.Size = 20;   // font size
            WordDoc.Paragraphs.Last.Range.Text = "实验过程及结果分析\n";
            WordApp.Selection.TypeParagraph();   // insert a paragraph
            WordDoc.Paragraphs.Last.Range.Font.Size = 13;   // font size
            WordDoc.Paragraphs.Last.Range.Bold = 0;
            WordDoc.Paragraphs.Last.Range.Text = richTextBox5.Text.ToString() + "\n";

            //WordApp.Selection.MoveDown(ref WdLine, ref count, ref Nothing);   // move the focus
            //WordDoc.Paragraphs.Last.Range.Text = "应收获书\n";
            //WordApp.Selection.TypeParagraph();   // insert a paragraph
            //WordDoc.Paragraphs.Last.Range.Text = "应收获确认书\n";

            WordDoc.SaveAs(filename);   // save the file
            WordApp.Quit();             // quit Word
        }
    }
}
```
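
For the actual question, placing the picture right after a given piece of text, a hedged sketch (hypothetical picture path; same usings as the code above): collapse a Range to the end of the paragraph that holds the text, then hand that Range to AddPicture as the anchor:

```
object linkToFile = false;
object saveWithDocument = true;

Range anchor = WordDoc.Paragraphs.Last.Range;            // paragraph holding the text
object direction = WdCollapseDirection.wdCollapseEnd;
anchor.Collapse(ref direction);                          // insertion point just after the text
object range = anchor;
WordDoc.InlineShapes.AddPicture(@"C:\pictures\demo.jpg", // hypothetical path
    ref linkToFile, ref saveWithDocument, ref range);
```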
C# exporting an Excel line chart: setting the X-axis values
```
m_Book.ActiveChart.SetSourceData(m_Sheet.get_Range((object)m_Sheet.Cells[1][num1], (object)m_Sheet.Cells[num2][num3]), Excel.XlRowCol.xlColumns);
m_Book.ActiveChart.SeriesCollection(1).XValues = ?
```
What should the X values be set to? The exported sheet is shown below. The X axis should be the first column, but as the picture shows it is wrong, and that column has turned into a series as well. ![图片说明](https://img-ask.csdn.net/upload/201508/29/1440838180_812919.png) XAXIS is the horizontal axis.
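
A sketch of the usual recipe, reusing the asker's m_Book/m_Sheet/num2 variables and assuming the X labels live in column 1: pass SetSourceData only the data columns, then assign the first column to the series' XValues yourself so it stops being plotted as a series:

```
// Chart the Y columns only, then point the category (X) values at column 1.
Excel.Chart chart = (Excel.Chart)m_Book.ActiveChart;
Excel.Series s = (Excel.Series)chart.SeriesCollection(1);
s.XValues = m_Sheet.get_Range(m_Sheet.Cells[1, 1], m_Sheet.Cells[num2, 1]);
```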
C# Excel output: controlling merged-cell formatting
I want to merge cells in the output wherever the first-column values are the same. My code is below; the compile error says 'object' does not contain 'get_range'.
```
private void button_print_Click(object sender, EventArgs e)   // export to Excel
{
    int index_head = 0;
    int index_tail = 0;
    Excel.Range r;
    Excel.Application excel = new Excel.Application();
    Excel.Workbook book = excel.Workbooks.Add(true);
    Excel.Worksheet sheet = (Excel.Worksheet)book.ActiveSheet;
    for (int i = 0; i < dataGridView_selecteddata.Rows.Count; i++)
    {
        if (i != 0)
        {
            if (dataGridView_selecteddata.Rows[i].Cells[0].Value == dataGridView_selecteddata.Rows[i - 1].Cells[0].Value)
            {
                index_tail++;
            }
            else
            {
                r = sheet.get_Range(sheet.Cells[index_head + 1, 1], sheet.Cells[index_tail + 1, 1]);
                r.MergeCells = true;
                index_head = i;
                index_tail = i;
            }
        }
        for (int j = 0; j < dataGridView_selecteddata.Columns.Count; j++)
        {
            sheet.Cells[i + 1, j + 1] = dataGridView_selecteddata.Rows[i].Cells[j].Value;
        }
    }
    excel.Visible = true;
    book.Save();
    book.SaveAs("D:sqldataoutput", Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
        Excel.XlSaveAsAccessMode.xlNoChange, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);
    book.Close(true, Type.Missing, Type.Missing);
    excel.Quit();
}
```
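
Two things usually bite here. The method is get_Range with a capital R (or, in C# 4, the Range[...] indexer) and it only resolves once the sheet is statically typed as Excel.Worksheet rather than object; and Cells[...].Value is object, so == compares references, which makes the merge branch almost never fire. A sketch under those assumptions, reusing the variables above:

```
// Compare cell values with Equals, and take the Range off a typed Worksheet.
object cur  = dataGridView_selecteddata.Rows[i].Cells[0].Value;
object prev = dataGridView_selecteddata.Rows[i - 1].Cells[0].Value;
if (object.Equals(cur, prev))
{
    index_tail++;
}
else
{
    Excel.Range r = sheet.Range[sheet.Cells[index_head + 1, 1],
                                sheet.Cells[index_tail + 1, 1]];
    r.MergeCells = true;
    index_head = i;
    index_tail = i;
}
```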
C#: how do I write a picture fetched from the database into a Word template?
```
public void InsertPicture(string bookmark, string picturePath, float width, float hight)
{
    object miss = System.Reflection.Missing.Value;
    object oStart = bookmark;
    Object linkToFile = false;        // whether the picture is an external link
    Object saveWithDocument = true;   // whether the picture is saved with the document
    object range = oDoc.Bookmarks.get_Item(ref oStart).Range;   // insertion position
    oDoc.InlineShapes.AddPicture(picturePath, ref linkToFile, ref saveWithDocument, ref range);
    oDoc.Application.ActiveDocument.InlineShapes[2].Width = width;    // picture width
    oDoc.Application.ActiveDocument.InlineShapes[2].Height = hight;   // picture height
}
```
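
A minimal sketch for the database side (hypothetical helper name InsertDbPicture): image columns usually come back as byte[], and InlineShapes.AddPicture only takes a file path, so spool the bytes to a temp file, call the InsertPicture method above, then delete the file (safe once saveWithDocument = true has embedded the image):

```
public void InsertDbPicture(string bookmark, byte[] imageBytes, float width, float height)
{
    // Write the blob to a throwaway file that AddPicture can read.
    string tempPath = System.IO.Path.Combine(
        System.IO.Path.GetTempPath(),
        System.IO.Path.GetRandomFileName() + ".jpg");
    System.IO.File.WriteAllBytes(tempPath, imageBytes);
    try
    {
        InsertPicture(bookmark, tempPath, width, height);   // the method above
    }
    finally
    {
        System.IO.File.Delete(tempPath);   // the image is already embedded in the document
    }
}
```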
Not a big problem, but the code is a bit long; looking for help: a TensorFlow script for generating tfrecord files won't run to completion
The output looks like this: ![图片说明](https://img-ask.csdn.net/upload/201809/01/1535791673_447681.jpg) The full code is below:
```
# number of validation images
_NUM_TEST = 100
# random seed
_RANDOM_SEED = 0
# number of shards
_NUM_SHARDS = 3
# dataset path
DATASET_DIR = "C:/Users/ASUS/TF实战(炼石成金)/8-对谷歌inception-v3模型从头开始训练/slim/images/"
# labels file name
LABELS_FILENAME = r"C:\Users\ASUS\TF实战(炼石成金)\8-对谷歌inception-v3模型从头开始训练\slim\images\labels"

# build the tfrecord file path + name
def _get_dataset_filename(dataset_dir, split_name, shard_id):
    output_filename = 'image_%s_%05d-of-%05d.tfrecord' % (split_name, shard_id, _NUM_SHARDS)
    return os.path.join(dataset_dir, output_filename)

# check whether the tfrecord files already exist
def _dataset_exists(dataset_dir):
    for split_name in ['train', 'test']:
        for shard_id in range(_NUM_SHARDS):
            # tfrecord file path + name
            output_filename = _get_dataset_filename(dataset_dir, split_name, shard_id)
            if not tf.gfile.Exists(output_filename):
                return False
    return True

# collect all files and their classes
def _get_filenames_and_classes(dataset_dir):
    # directories holding the data
    directories = []
    # class names
    class_names = []
    for filename in os.listdir(dataset_dir):
        # os.listdir(dataset_dir) lists every folder or file name under the given path
        # join into a full path
        path = os.path.join(dataset_dir, filename)
        # check whether the path is a directory
        if os.path.isdir(path):
            # add the data directory
            directories.append(path)
            # add the class name
            class_names.append(filename)
    photo_filenames = []
    # walk each class folder
    for directory in directories:
        for filename in os.listdir(directory):
            path = os.path.join(directory, filename)
            # add the image path to the list
            photo_filenames.append(path)
    return photo_filenames, class_names

def int64_feature(values):
    if not isinstance(values, (tuple, list)):
        values = [values]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

def bytes_feature(values):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[values]))

def image_to_tfexample(image_data, image_format, class_id):
    # Abstract base class for protocol messages.
    return tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': bytes_feature(image_data),
        'image/format': bytes_feature(image_format),
        'image/class/label': int64_feature(class_id),
    }))

def write_label_file(labels_to_class_names, dataset_dir, filename=LABELS_FILENAME):
    labels_filename = os.path.join(dataset_dir, filename)
    with tf.gfile.Open(labels_filename, 'w') as f:
        for label in labels_to_class_names:
            class_name = labels_to_class_names[label]
            f.write('%d:%s\n' % (label, class_name))

# convert the data to TFRecord format
def _convert_dataset(split_name, filenames, class_names_to_ids, dataset_dir):
    assert split_name in ['train', 'test']
    # how many examples per shard
    num_per_shard = int(len(filenames) / _NUM_SHARDS)
    with tf.Graph().as_default():
        with tf.Session() as sess:
            for shard_id in range(_NUM_SHARDS):
                # tfrecord file path + name
                output_filename = _get_dataset_filename(dataset_dir, split_name, shard_id)
                with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
                    # first index of this shard
                    start_ndx = shard_id * num_per_shard
                    # last index of this shard
                    end_ndx = min((shard_id+1) * num_per_shard, len(filenames))
                    for i in range(start_ndx, end_ndx):
                        try:
                            sys.stdout.write('\r>> Converting image %d/%d shard %d' % (i+1, len(filenames), shard_id))
                            sys.stdout.flush()
                            # read the image
                            image_data = tf.gfile.FastGFile(filenames[i], 'r').read()
                            # get the class name of the image
                            # os.path.dirname(filenames[i]) gives the absolute path containing filenames[i]
                            # os.path.basename(path) returns the last component of path, here the class name
                            class_name = os.path.basename(os.path.dirname(filenames[i]))
                            # map the class name to its id
                            class_id = class_names_to_ids[class_name]
                            # write the tfrecord example
                            example = image_to_tfexample(image_data, b'jpg', class_id)
                            tfrecord_writer.write(example.SerializeToString())
                        except IOError as e:
                            print("Could not read:", filenames[i])
                            print("Error:", e)
                            print("Skip it\n")
    sys.stdout.write('\n')
    sys.stdout.flush()

if __name__ == '__main__':
    # do the tfrecord files already exist?
    if _dataset_exists(DATASET_DIR):
        print('tfrecord files already exist')
    else:
        # collect all images and classes
        photo_filenames, class_names = _get_filenames_and_classes(DATASET_DIR)
        # turn the classes into a dict such as {'house': 3, 'flower': 1, 'plane': 4, 'guitar': 2, 'animal': 0}
        class_names_to_ids = dict(zip(class_names, range(len(class_names))))
        # split the data into training and test sets
        random.seed(_RANDOM_SEED)
        random.shuffle(photo_filenames)
        training_filenames = photo_filenames[_NUM_TEST:]
        testing_filenames = photo_filenames[:_NUM_TEST]
        # convert the data
        _convert_dataset('train', training_filenames, class_names_to_ids, DATASET_DIR)
        _convert_dataset('test', testing_filenames, class_names_to_ids, DATASET_DIR)
        # write the labels file
        labels_to_class_names = dict(zip(range(len(class_names)), class_names))
        write_label_file(labels_to_class_names, DATASET_DIR)
```
TensorFlow training error: InvalidArgumentError: Incompatible shapes: [15] vs. [15,6]. The label placeholder doesn't match the label data being fed; how do I fix this?
InvalidArgumentError (see above for traceback): Incompatible shapes: [15] vs. [15,6]. The detailed error output is shown below:
```
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.CancelledError'>, Enqueue operation was cancelled
	 [[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input_producer, input_producer/RandomShuffle)]]

Caused by op 'input_producer/input_producer_EnqueueMany', defined at:
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
    app.start()
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 477, in start
    ioloop.IOLoop.instance().start()
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start
    super(ZMQIOLoop, self).start()
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 888, in start
    handler_func(fd_obj, events)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events
    self._handle_recv()
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv
    self._run_callback(callback, msg)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback
    callback(*args, **kwargs)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell
    handler(stream, idents, msg)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request
    user_expressions, allow_stdin)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2802, in run_ast_nodes
    if self.run_code(code, result):
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-19-6fa659dba762>", line 320, in <module>
    batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch)
  File "<ipython-input-19-6fa659dba762>", line 147, in batch_test
    tf_image,tf_label = read_records(record_file,resize_height,resize_width,type='normalization')
  File "<ipython-input-19-6fa659dba762>", line 84, in read_records
    filename_queue = tf.train.string_input_producer([filename])
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\training\input.py", line 232, in string_input_producer
    cancel_op=cancel_op)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\training\input.py", line 164, in input_producer
    enq = q.enqueue_many([input_tensor])
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 367, in enqueue_many
    self._queue_ref, vals, name=scope)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 1556, in _queue_enqueue_many_v2
    name=name)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

CancelledError (see above for traceback): Enqueue operation was cancelled
	 [[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input_producer, input_producer/RandomShuffle)]]

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
   1038     try:
-> 1039       return fn(*args)
   1040     except errors.OpError as e:

H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1020                                  feed_dict, fetch_list, target_list,
-> 1021                                  status, run_metadata)
   1022

H:\aa\Anaconda\anaconda\envs\tensorflow\lib\contextlib.py in __exit__(self, type, value, traceback)
     87             try:
---> 88                 next(self.gen)
     89             except StopIteration:

H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status()
    465           compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466           pywrap_tensorflow.TF_GetCode(status))
    467     finally:

InvalidArgumentError: Incompatible shapes: [15] vs. [15,6]
	 [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]]
	 [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-19-6fa659dba762> in <module>()
    318 range_num = 5
    319
--> 320 batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch)
    321

<ipython-input-19-6fa659dba762> in batch_test(record_file, resize_height, resize_width, n_batch, train_op, loss, acc, range_num, val_batch)
    187                 images_x = np.reshape(images, (-1, 30000))
    188                 labels_y = np.reshape(labels, (-1, 6))
--> 189                 _,err,ac = sess.run([train_op,loss,acc],feed_dict={x:images, y_:labels_y})  # 50% of the neurons at work
    190                 train_loss = train_loss + err
    191                 train_acc = train_acc + ac

H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
    776     try:
    777       result = self._run(None, fetches, feed_dict, options_ptr,
--> 778                          run_metadata_ptr)
    779       if run_metadata:
    780         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    980     if final_fetches or final_targets:
    981       results = self._do_run(handle, final_targets, final_fetches,
--> 982                              feed_dict_string, options, run_metadata)
    983     else:
    984       results = []

H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1030     if handle is None:
   1031       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1032                            target_list, options, run_metadata)
   1033     else:
   1034       return self._do_call(_prun_fn, self._session, handle, feed_dict,

H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
   1050       except KeyError:
   1051         pass
-> 1052       raise type(e)(node_def, op, message)
   1053
   1054   def _extend_graph(self):

InvalidArgumentError: Incompatible shapes: [15] vs. [15,6]
	 [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]]
	 [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op 'Equal', defined at:
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
    app.start()
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 477, in start
    ioloop.IOLoop.instance().start()
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start
    super(ZMQIOLoop, self).start()
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 888, in start
    handler_func(fd_obj, events)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events
    self._handle_recv()
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv
    self._run_callback(callback, msg)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback
    callback(*args, **kwargs)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell
    handler(stream, idents, msg)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request
    user_expressions, allow_stdin)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2802, in run_ast_nodes
    if self.run_code(code, result):
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-19-6fa659dba762>", line 311, in <module>
    correct_prediction = tf.equal(tf.cast(tf.argmax(logits,1),tf.float32), y_)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 672, in equal
    result = _op_def_lib.apply_op("Equal", x=x, y=y, name=name)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Incompatible shapes: [15] vs. [15,6]
	 [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]]
	 [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
```
The printed info for the x and y_ placeholders is:
```
x: Tensor("x-input:0", shape=(?, 100, 100, 3), dtype=float32)
y_:Tensor("y_:0", shape=(?, 6), dtype=float32)
```
The printed info for images and labels is:
```
shape:(15, 100, 100, 3),tpye:float32,labels:
[[ 0.  0.  0.  1.  0.  0.]
 [ 0.  0.  0.  1.  0.  0.]
 [ 0.  0.  0.  1.  0.  0.]
 [ 0.  0.  0.  0.  1.  0.]
 [ 1.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  1.  0.  0.]
 [ 1.  0.  0.  0.  0.  0.]
 [ 1.  0.  0.  0.  0.  0.]
 [ 1.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  1.]
 [ 0.  0.  1.  0.  0.  0.]
 [ 1.  0.  0.  0.  0.  0.]
 [ 1.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  1.  0.]
 [ 0.  0.  0.  0.  1.  0.]]
```
The full code being run is:
```
import tensorflow as tf
import numpy as np
import os
import cv2
import matplotlib.pyplot as plt
import random
import time
from PIL import Image

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

data_path = 'people_pictures_train/record/one_train_demo_people_train.tfrecords'  # path of the tfrecords file
data_path_val = 'people_pictures_train/record/one_test_demo_people_val.tfrecords'  # path of the tfrecords file

print("----------------------------")
tf.reset_default_graph()

def get_example_nums(tf_records_filenames):
    '''
    Count the number of examples in the tfrecords file
    :param tf_records_filenames: tfrecords file path
    :return:
    '''
    nums = 0
    for record in tf.python_io.tf_record_iterator(tf_records_filenames):
        nums += 1
    return nums

def show_image(title, image):
    '''
    Show an image
    :param title: figure title
    :param image: image data
    :return:
    '''
    # plt.figure("show_image")
    # print(image.dtype)
    plt.imshow(image)
    plt.axis('on')    # axes on/off
    plt.title(title)  # figure title
    plt.show()

def get_batch_images(images, labels, batch_size, labels_nums, one_hot=False, shuffle=False, num_threads=1):
    '''
    :param images: images
    :param labels: labels
    :param batch_size:
    :param labels_nums: number of labels
    :param one_hot: whether to convert labels to one_hot form
    :param shuffle: whether to shuffle; usually shuffle=True for training, False for validation
    :return: the images and labels of one batch
    '''
    min_after_dequeue = 200
    capacity = min_after_dequeue + 3 * batch_size  # capacity must be larger than min_after_dequeue
    if shuffle:
        images_batch, labels_batch = tf.train.shuffle_batch([images, labels],
                                                            batch_size=batch_size,
                                                            capacity=capacity,
                                                            min_after_dequeue=min_after_dequeue,
                                                            num_threads=num_threads)
    else:
        images_batch, labels_batch = tf.train.batch([images, labels],
                                                    batch_size=batch_size,
                                                    capacity=capacity,
                                                    num_threads=num_threads)
    if one_hot:
        labels_batch = tf.one_hot(labels_batch, labels_nums, 1, 0)
    return images_batch, labels_batch

def read_records(filename, resize_height, resize_width, type=None):
    '''
    Parse the record file. The source images are RGB, uint8, [0,255]; as training data they
    usually need to be normalized to [0,1].
    :param filename:
    :param resize_height:
    :param resize_width:
    :param type: how to return the image data
        None: uint8-[0,255] cast to float32-[0,255]
        normalization: normalized float32-[0,1]
        standardization: normalized float32-[0,1], then mean-centered
    :return:
    '''
    # create a file queue, unlimited reads
    filename_queue = tf.train.string_input_producer([filename])
    # create a reader from file queue
    reader = tf.TFRecordReader()
    # the reader pulls one serialized example off the file queue
    _, serialized_example = reader.read(filename_queue)
    # get feature from serialized example: parse the serialized example
    features = tf.parse_single_example(
        serialized_example,
        features={
            'image_raw': tf.FixedLenFeature([], tf.string),
            'height': tf.FixedLenFeature([], tf.int64),
            'width': tf.FixedLenFeature([], tf.int64),
            'depth': tf.FixedLenFeature([], tf.int64),
            'labels': tf.FixedLenFeature([], tf.string)
        }
    )
    tf_image = tf.decode_raw(features['image_raw'], tf.uint8)  # raw image data
    tf_height = features['height']
    tf_width = features['width']
    tf_depth = features['depth']
    # tf_label = tf.cast(features['labels'], tf.float32)
    tf_label = tf.decode_raw(features['labels'], tf.float32)

    # PS: when restoring the image, the reshape size must match the shape used when saving, otherwise it errors
    # tf_image = tf.reshape(tf_image, [-1])    # flatten to a row vector
    tf_image = tf.reshape(tf_image, [resize_height, resize_width, 3])  # set the image dimensions
    tf_label = tf.reshape(tf_label, [6])  # set the label dimensions

    # only after restoring the data can resize_images be applied: input uint -> output float32
    # tf_image = tf.image.resize_images(tf_image, [224, 224])

    # [3] dtype handling
    # images are stored as uint8; tensorflow training data must be tf.float32
    if type is None:
        tf_image = tf.cast(tf_image, tf.float32)
    elif type == 'normalization':
        # [1] for normalization use:
        # only normalizes [0,255] when the input data is uint8
        # tf_image = tf.cast(tf_image, dtype=tf.uint8)
        # tf_image = tf.image.convert_image_dtype(tf_image, tf.float32)
        tf_image = tf.cast(tf_image, tf.float32) * (1. / 255.0)  # normalize
    elif type == 'standardization':
        # standardization
        # tf_image = tf.cast(tf_image, dtype=tf.uint8)
        # tf_image = tf.image.per_image_standardization(tf_image)  # standardize (subtract mean, divide by variance)
        # for normalization plus centering, assuming a mean of 0.5, use:
        tf_image = tf.cast(tf_image, tf.float32) * (1. / 255) - 0.5  # center

    # only the image and label are returned here
    # return tf_image, tf_height, tf_width, tf_depth, tf_label
    return tf_image, tf_label

def batch_test(record_file, resize_height, resize_width, n_batch, train_op, loss, acc, range_num, val_batch):
    '''
    :param record_file: record file path
    :param resize_height:
    :param resize_width:
    :return:
    :PS: image_batch and label_batch are usually the network inputs
    '''
    # read the record file
    tf_image, tf_label = read_records(record_file, resize_height, resize_width, type='normalization')
    image_batch, label_batch = get_batch_images(tf_image, tf_label, batch_size=15, labels_nums=6, one_hot=False, shuffle=True)

    a = image_batch.get_shape()
    a2 = a.as_list()
    b = label_batch.get_shape()
    b2 = b.as_list()
    print('image_batch: ' + str(image_batch) + ' label_batch: ' + str(label_batch))
    print('image_batch-len:' + str(len(a2)) + ' label_batch-len: ' + str(len(b2)))

    # validation data
    images_val, labels_val = read_records(data_path_val, 100, 100, type='normalization')
    image_batch_val, label_batch_val = get_batch_images(images_val, labels_val, batch_size=15, labels_nums=6, one_hot=False, shuffle=True)
    # print('image_batch_val: ' + str(image_batch_val) + ' label_batch_val: ' + str(label_batch_val))

    init = tf.global_variables_initializer()
    with tf.Session() as sess:  # start a session
        sess.run(init)
        # train_writer = tf.summary.FileWriter('logs/train', sess.graph)  # write the graph into the logs folder under the current directory, created if missing
        # test_writer = tf.summary.FileWriter('logs/test', sess.graph)    # write the graph into the logs folder under the current directory, created if missing
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        for epoch in range(range_num):
            start_time = time.time()
            train_loss, train_acc = 0, 0
            for i in range(n_batch):
                images, labels = sess.run([image_batch, label_batch])
                print('shape:{},tpye:{},labels:{}'.format(images.shape, images.dtype, labels))
                print('images-len:' + str(len(images)) + ' labels-len: ' + str(len(labels)))
                for i in range(len(images)):
                    show_image("image0", images[i, :, :, :])
                a = np.zeros((len(labels)))
                print(' a: ' + str(a))
                for i in range(len(labels)):
                    for j in range(len(labels[i])):
                        if labels[i][j] > 0:
                            a[i] = j
                print(' a: ' + str(a))
                print('x: ' + str(x) + ' y_:' + str(y_))
                images_x = np.reshape(images, (-1, 30000))
                labels_y = np.reshape(labels, (-1, 6))
                _, err, ac = sess.run([train_op, loss, acc], feed_dict={x: images, y_: labels_y})  # 50% of the neurons at work
                train_loss = train_loss + err
                train_acc = train_acc + ac
            print(" train loss: %f" % (np.sum(train_err) / n_batch))
            print(" train acc: %f" % (np.sum(train_acc) / n_batch))

            val_loss, val_acc = 0, 0
            for i in range(val_batch):
                # fetch the validation images and labels; images_val2 just distinguishes them from images_val
                images_val2, labels_val2 = sess.run([image_batch_val, label_batch_val])
                val_loss, val_acc = sess.run([loss, acc], feed_dict={x: images_val_x, y_: labels_val2})  # test the accuracy, feeding the images and their labels
                val_loss = val_loss + err
                val_acc = val_acc + ac
            print(" validation loss: %f" % (np.sum(val_loss) / val_batch))
            print(" validation acc: %f" % (np.sum(val_acc) / val_batch))
        # stop all threads
        coord.request_stop()
        coord.join(threads)

# batch size
batch_size = 15  # feed 15 images at a time into the network, as a matrix
# how many batches in total
# n_batch = mnist.train.num_examples // batch_size  # integer division
n_batch = get_example_nums(data_path) // batch_size
val_batch = get_example_nums(data_path_val) // batch_size  # number of validation images; the format conversion put all images in one batch
# val_num = get_example_nums(data_path_val)  # number of validation images
# train_num = get_example_nums(data_path)    # number of training images
print("-----------------" + str(n_batch) + " batch------------")

# resize all images to 100*100
w = 100
h = 100
c = 3

# ----------------- build the network ----------------------
# placeholders
#-----------------构建网络---------------------- #占位符 x = tf.placeholder(tf.float32,[None,100,100,3],name='x-input') #图片像素 转换 一维向量,行与批次有关,none 代表行,列是784 y_=tf.placeholder(tf.float32,shape=[None,6],name='y_') def inference(input_tensor, train, regularizer): with tf.variable_scope('layer1-conv1'): conv1_weights = tf.get_variable("weight",[5,5,3,32],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0)) conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding='SAME') relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases)) with tf.name_scope("layer2-pool1"): pool1 = tf.nn.max_pool(relu1, ksize = [1,2,2,1],strides=[1,2,2,1],padding="VALID") with tf.variable_scope("layer3-conv2"): conv2_weights = tf.get_variable("weight",[5,5,32,64],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0)) conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding='SAME') relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases)) with tf.name_scope("layer4-pool2"): pool2 = tf.nn.max_pool(relu2, ksize=[1, 2 , 2, 1], strides=[1, 2, 2, 1], padding='VALID') with tf.variable_scope("layer5-conv3"): conv3_weights = tf.get_variable("weight",[3,3,64,128],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv3_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0)) conv3 = tf.nn.conv2d(pool2, conv3_weights, strides=[1, 1, 1, 1], padding='SAME') relu3 = tf.nn.relu(tf.nn.bias_add(conv3, conv3_biases)) with tf.name_scope("layer6-pool3"): pool3 = tf.nn.max_pool(relu3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') with tf.variable_scope("layer7-conv4"): conv4_weights = tf.get_variable("weight",[3,3,128,128],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv4_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0)) conv4 = tf.nn.conv2d(pool3, conv4_weights, strides=[1, 1, 1, 1], padding='SAME') relu4 = tf.nn.relu(tf.nn.bias_add(conv4, conv4_biases)) with tf.name_scope("layer8-pool4"): pool4 = tf.nn.max_pool(relu4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') nodes = 6*6*128 reshaped = tf.reshape(pool4,[-1,nodes]) with tf.variable_scope('layer9-fc1'): fc1_weights = tf.get_variable("weight", [nodes, 1024], initializer=tf.truncated_normal_initializer(stddev=0.1)) if regularizer != None: tf.add_to_collection('losses', regularizer(fc1_weights)) fc1_biases = tf.get_variable("bias", [1024], initializer=tf.constant_initializer(0.1)) fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_weights) + fc1_biases) if train: fc1 = tf.nn.dropout(fc1, 0.5) with tf.variable_scope('layer10-fc2'): fc2_weights = tf.get_variable("weight", [1024, 512], initializer=tf.truncated_normal_initializer(stddev=0.1)) if regularizer != None: tf.add_to_collection('losses', regularizer(fc2_weights)) fc2_biases = tf.get_variable("bias", [512], initializer=tf.constant_initializer(0.1)) fc2 = tf.nn.relu(tf.matmul(fc1, fc2_weights) + fc2_biases) if train: fc2 = tf.nn.dropout(fc2, 0.5) with tf.variable_scope('layer11-fc3'): fc3_weights = tf.get_variable("weight", [512, 6], initializer=tf.truncated_normal_initializer(stddev=0.1)) if regularizer != None: tf.add_to_collection('losses', regularizer(fc3_weights)) fc3_biases = tf.get_variable("bias", [6], initializer=tf.constant_initializer(0.1)) logit = tf.matmul(fc2, fc3_weights) + fc3_biases return logit 
#--------------------------- end of the network ---------------------------
regularizer = tf.contrib.layers.l2_regularizer(0.0001)
logits = inference(x, False, regularizer)
# (small trick) multiply logits by 1 and name the result, so the output
# tensor can be fetched by name when the saved model is loaded later
b = tf.constant(value=1, dtype=tf.float32)
logits_eval = tf.multiply(logits, b, name='logits_eval')
# loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_)
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits)
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits, 1), tf.float32), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("----------------------------")

if __name__ == '__main__':
    range_num = 5
    batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch)
```
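The `Incompatible shapes: [15] vs. [15,6]` error is raised by the accuracy line above: `tf.argmax(logits, 1)` has shape `[batch]`, while the one-hot placeholder `y_` has shape `[batch, 6]`, so `tf.equal` cannot pair them. A minimal sketch of one fix, assuming the labels stay one-hot, compares class indices on both sides:

```
# compare predicted class indices against the argmax of the one-hot labels;
# both tensors are then shaped [batch], so tf.equal works
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y_, 1))
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```

Wrapping `loss` in `tf.reduce_mean` is also common here, so a scalar rather than a per-example vector is handed to the optimizer.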
Problem with yolo3 darknet.py
我用darknetAB https://github.com/AlexeyAB/darknet 编译gpu版本后生成darknet.py文件 然后我也编译了yolo_cpp_dll.sln文件 生成dll文件 然后运行darknet.py文件 不显示图片 异常退出 ![图片说明](https://img-ask.csdn.net/upload/201911/02/1572688446_628910.png) 百度了这个问题 有人说要换python3.5版本 我也尝试了 但是也是不行 不会显示图片。请问各位大佬到底怎么解决??急!!!谢谢!!! ``` #!python3 """ Python 3 wrapper for identifying objects in images Requires DLL compilation Both the GPU and no-GPU version should be compiled; the no-GPU version should be renamed "yolo_cpp_dll_nogpu.dll". On a GPU system, you can force CPU evaluation by any of: - Set global variable DARKNET_FORCE_CPU to True - Set environment variable CUDA_VISIBLE_DEVICES to -1 - Set environment variable "FORCE_CPU" to "true" To use, either run performDetect() after import, or modify the end of this file. See the docstring of performDetect() for parameters. Directly viewing or returning bounding-boxed images requires scikit-image to be installed (`pip install scikit-image`) Original *nix 2.7: https://github.com/pjreddie/darknet/blob/0f110834f4e18b30d5f101bf8f1724c34b7b83db/python/darknet.py Windows Python 2.7 version: https://github.com/AlexeyAB/darknet/blob/fc496d52bf22a0bb257300d3c79be9cd80e722cb/build/darknet/x64/darknet.py @author: Philip Kahn @date: 20180503 """ #pylint: disable=R, W0401, W0614, W0703 from ctypes import * import math import random import os def sample(probs): s = sum(probs) probs = [a/s for a in probs] r = random.uniform(0, 1) for i in range(len(probs)): r = r - probs[i] if r <= 0: return i return len(probs)-1 def c_array(ctype, values): arr = (ctype*len(values))() arr[:] = values return arr class BOX(Structure): _fields_ = [("x", c_float), ("y", c_float), ("w", c_float), ("h", c_float)] class DETECTION(Structure): _fields_ = [("bbox", BOX), ("classes", c_int), ("prob", POINTER(c_float)), ("mask", POINTER(c_float)), ("objectness", c_float), ("sort_class", c_int)] class IMAGE(Structure): _fields_ = [("w", c_int), ("h", c_int), ("c", c_int), ("data", POINTER(c_float))] class METADATA(Structure): _fields_ = [("classes", c_int), ("names", POINTER(c_char_p))] #lib = CDLL("/home/pjreddie/documents/darknet/libdarknet.so", RTLD_GLOBAL) #lib = CDLL("libdarknet.so", RTLD_GLOBAL) hasGPU = True if os.name == "nt": cwd = os.path.dirname(__file__) os.environ['PATH'] = cwd + ';' + os.environ['PATH'] winGPUdll = os.path.join(cwd, "yolo_cpp_dll.dll") winNoGPUdll = os.path.join(cwd, "yolo_cpp_dll_nogpu.dll") envKeys = list() for k, v in os.environ.items(): envKeys.append(k) try: try: tmp = os.environ["FORCE_CPU"].lower() if tmp in ["1", "true", "yes", "on"]: raise ValueError("ForceCPU") else: print("Flag value '"+tmp+"' not forcing CPU mode") except KeyError: # We never set the flag if 'CUDA_VISIBLE_DEVICES' in envKeys: if int(os.environ['CUDA_VISIBLE_DEVICES']) < 0: raise ValueError("ForceCPU") try: global DARKNET_FORCE_CPU if DARKNET_FORCE_CPU: raise ValueError("ForceCPU") except NameError: pass # print(os.environ.keys()) # print("FORCE_CPU flag undefined, proceeding with GPU") if not os.path.exists(winGPUdll): raise ValueError("NoDLL") lib = CDLL(winGPUdll, RTLD_GLOBAL) except (KeyError, ValueError): hasGPU = False if os.path.exists(winNoGPUdll): lib = CDLL(winNoGPUdll, RTLD_GLOBAL) print("Notice: CPU-only mode") else: # Try the other way, in case no_gpu was # compile but not renamed lib = CDLL(winGPUdll, RTLD_GLOBAL) print("Environment variables indicated a CPU run, but we didn't find `"+winNoGPUdll+"`. 
Trying a GPU run anyway.") else: lib = CDLL("./libdarknet.so", RTLD_GLOBAL) lib.network_width.argtypes = [c_void_p] lib.network_width.restype = c_int lib.network_height.argtypes = [c_void_p] lib.network_height.restype = c_int copy_image_from_bytes = lib.copy_image_from_bytes copy_image_from_bytes.argtypes = [IMAGE,c_char_p] def network_width(net): return lib.network_width(net) def network_height(net): return lib.network_height(net) predict = lib.network_predict_ptr predict.argtypes = [c_void_p, POINTER(c_float)] predict.restype = POINTER(c_float) if hasGPU: set_gpu = lib.cuda_set_device set_gpu.argtypes = [c_int] make_image = lib.make_image make_image.argtypes = [c_int, c_int, c_int] make_image.restype = IMAGE get_network_boxes = lib.get_network_boxes get_network_boxes.argtypes = [c_void_p, c_int, c_int, c_float, c_float, POINTER(c_int), c_int, POINTER(c_int), c_int] get_network_boxes.restype = POINTER(DETECTION) make_network_boxes = lib.make_network_boxes make_network_boxes.argtypes = [c_void_p] make_network_boxes.restype = POINTER(DETECTION) free_detections = lib.free_detections free_detections.argtypes = [POINTER(DETECTION), c_int] free_ptrs = lib.free_ptrs free_ptrs.argtypes = [POINTER(c_void_p), c_int] network_predict = lib.network_predict_ptr network_predict.argtypes = [c_void_p, POINTER(c_float)] reset_rnn = lib.reset_rnn reset_rnn.argtypes = [c_void_p] load_net = lib.load_network load_net.argtypes = [c_char_p, c_char_p, c_int] load_net.restype = c_void_p load_net_custom = lib.load_network_custom load_net_custom.argtypes = [c_char_p, c_char_p, c_int, c_int] load_net_custom.restype = c_void_p do_nms_obj = lib.do_nms_obj do_nms_obj.argtypes = [POINTER(DETECTION), c_int, c_int, c_float] do_nms_sort = lib.do_nms_sort do_nms_sort.argtypes = [POINTER(DETECTION), c_int, c_int, c_float] free_image = lib.free_image free_image.argtypes = [IMAGE] letterbox_image = lib.letterbox_image letterbox_image.argtypes = [IMAGE, c_int, c_int] letterbox_image.restype = IMAGE load_meta = lib.get_metadata lib.get_metadata.argtypes = [c_char_p] lib.get_metadata.restype = METADATA load_image = lib.load_image_color load_image.argtypes = [c_char_p, c_int, c_int] load_image.restype = IMAGE rgbgr_image = lib.rgbgr_image rgbgr_image.argtypes = [IMAGE] predict_image = lib.network_predict_image predict_image.argtypes = [c_void_p, IMAGE] predict_image.restype = POINTER(c_float) predict_image_letterbox = lib.network_predict_image_letterbox predict_image_letterbox.argtypes = [c_void_p, IMAGE] predict_image_letterbox.restype = POINTER(c_float) def array_to_image(arr): import numpy as np # need to return old values to avoid python freeing memory arr = arr.transpose(2,0,1) c = arr.shape[0] h = arr.shape[1] w = arr.shape[2] arr = np.ascontiguousarray(arr.flat, dtype=np.float32) / 255.0 data = arr.ctypes.data_as(POINTER(c_float)) im = IMAGE(w,h,c,data) return im, arr def classify(net, meta, im): out = predict_image(net, im) res = [] for i in range(meta.classes): if altNames is None: nameTag = meta.names[i] else: nameTag = altNames[i] res.append((nameTag, out[i])) res = sorted(res, key=lambda x: -x[1]) return res def detect(net, meta, image, thresh=.5, hier_thresh=.5, nms=.45, debug= False): """ Performs the meat of the detection """ #pylint: disable= C0321 im = load_image(image, 0, 0) if debug: print("Loaded image") ret = detect_image(net, meta, im, thresh, hier_thresh, nms, debug) free_image(im) if debug: print("freed image") return ret def detect_image(net, meta, im, thresh=.5, hier_thresh=.5, nms=.45, debug= False): 
#import cv2 #custom_image_bgr = cv2.imread(image) # use: detect(,,imagePath,) #custom_image = cv2.cvtColor(custom_image_bgr, cv2.COLOR_BGR2RGB) #custom_image = cv2.resize(custom_image,(lib.network_width(net), lib.network_height(net)), interpolation = cv2.INTER_LINEAR) #import scipy.misc #custom_image = scipy.misc.imread(image) #im, arr = array_to_image(custom_image) # you should comment line below: free_image(im) num = c_int(0) if debug: print("Assigned num") pnum = pointer(num) if debug: print("Assigned pnum") predict_image(net, im) letter_box = 0 #predict_image_letterbox(net, im) #letter_box = 1 if debug: print("did prediction") # dets = get_network_boxes(net, custom_image_bgr.shape[1], custom_image_bgr.shape[0], thresh, hier_thresh, None, 0, pnum, letter_box) # OpenCV dets = get_network_boxes(net, im.w, im.h, thresh, hier_thresh, None, 0, pnum, letter_box) if debug: print("Got dets") num = pnum[0] if debug: print("got zeroth index of pnum") if nms: do_nms_sort(dets, num, meta.classes, nms) if debug: print("did sort") res = [] if debug: print("about to range") for j in range(num): if debug: print("Ranging on "+str(j)+" of "+str(num)) if debug: print("Classes: "+str(meta), meta.classes, meta.names) for i in range(meta.classes): if debug: print("Class-ranging on "+str(i)+" of "+str(meta.classes)+"= "+str(dets[j].prob[i])) if dets[j].prob[i] > 0: b = dets[j].bbox if altNames is None: nameTag = meta.names[i] else: nameTag = altNames[i] if debug: print("Got bbox", b) print(nameTag) print(dets[j].prob[i]) print((b.x, b.y, b.w, b.h)) res.append((nameTag, dets[j].prob[i], (b.x, b.y, b.w, b.h))) if debug: print("did range") res = sorted(res, key=lambda x: -x[1]) if debug: print("did sort") free_detections(dets, num) if debug: print("freed detections") return res netMain = None metaMain = None altNames = None def performDetect(imagePath="data/dog.jpg", thresh= 0.25, configPath = "./cfg/yolov3.cfg", weightPath = "yolov3.weights", metaPath= "./cfg/coco.data", showImage= True, makeImageOnly = False, initOnly= False): """ Convenience function to handle the detection and returns of objects. Displaying bounding boxes requires libraries scikit-image and numpy Parameters ---------------- imagePath: str Path to the image to evaluate. Raises ValueError if not found thresh: float (default= 0.25) The detection threshold configPath: str Path to the configuration file. Raises ValueError if not found weightPath: str Path to the weights file. Raises ValueError if not found metaPath: str Path to the data file. Raises ValueError if not found showImage: bool (default= True) Compute (and show) bounding boxes. Changes return. makeImageOnly: bool (default= False) If showImage is True, this won't actually *show* the image, but will create the array and return it. initOnly: bool (default= False) Only initialize globals. Don't actually run a prediction. Returns ---------------------- When showImage is False, list of tuples like ('obj_label', confidence, (bounding_box_x_px, bounding_box_y_px, bounding_box_width_px, bounding_box_height_px)) The X and Y coordinates are from the center of the bounding box. Subtract half the width or height to get the lower corner. Otherwise, a dict with { "detections": as above "image": a numpy array representing an image, compatible with scikit-image "caption": an image caption } """ # Import the global variables. 
This lets us instance Darknet once, then just call performDetect() again without instancing again global metaMain, netMain, altNames #pylint: disable=W0603 assert 0 < thresh < 1, "Threshold should be a float between zero and one (non-inclusive)" if not os.path.exists(configPath): raise ValueError("Invalid config path `"+os.path.abspath(configPath)+"`") if not os.path.exists(weightPath): raise ValueError("Invalid weight path `"+os.path.abspath(weightPath)+"`") if not os.path.exists(metaPath): raise ValueError("Invalid data file path `"+os.path.abspath(metaPath)+"`") if netMain is None: netMain = load_net_custom(configPath.encode("ascii"), weightPath.encode("ascii"), 0, 1) # batch size = 1 if metaMain is None: metaMain = load_meta(metaPath.encode("ascii")) if altNames is None: # In Python 3, the metafile default access craps out on Windows (but not Linux) # Read the names file and create a list to feed to detect try: with open(metaPath) as metaFH: metaContents = metaFH.read() import re match = re.search("names *= *(.*)$", metaContents, re.IGNORECASE | re.MULTILINE) if match: result = match.group(1) else: result = None try: if os.path.exists(result): with open(result) as namesFH: namesList = namesFH.read().strip().split("\n") altNames = [x.strip() for x in namesList] except TypeError: pass except Exception: pass if initOnly: print("Initialized detector") return None if not os.path.exists(imagePath): raise ValueError("Invalid image path `"+os.path.abspath(imagePath)+"`") # Do the detection #detections = detect(netMain, metaMain, imagePath, thresh) # if is used cv2.imread(image) detections = detect(netMain, metaMain, imagePath.encode("ascii"), thresh) if showImage: try: from skimage import io, draw import numpy as np image = io.imread(imagePath) print("*** "+str(len(detections))+" Results, color coded by confidence ***") imcaption = [] for detection in detections: label = detection[0] confidence = detection[1] pstring = label+": "+str(np.rint(100 * confidence))+"%" imcaption.append(pstring) print(pstring) bounds = detection[2] shape = image.shape # x = shape[1] # xExtent = int(x * bounds[2] / 100) # y = shape[0] # yExtent = int(y * bounds[3] / 100) yExtent = int(bounds[3]) xEntent = int(bounds[2]) # Coordinates are around the center xCoord = int(bounds[0] - bounds[2]/2) yCoord = int(bounds[1] - bounds[3]/2) boundingBox = [ [xCoord, yCoord], [xCoord, yCoord + yExtent], [xCoord + xEntent, yCoord + yExtent], [xCoord + xEntent, yCoord] ] # Wiggle it around to make a 3px border rr, cc = draw.polygon_perimeter([x[1] for x in boundingBox], [x[0] for x in boundingBox], shape= shape) rr2, cc2 = draw.polygon_perimeter([x[1] + 1 for x in boundingBox], [x[0] for x in boundingBox], shape= shape) rr3, cc3 = draw.polygon_perimeter([x[1] - 1 for x in boundingBox], [x[0] for x in boundingBox], shape= shape) rr4, cc4 = draw.polygon_perimeter([x[1] for x in boundingBox], [x[0] + 1 for x in boundingBox], shape= shape) rr5, cc5 = draw.polygon_perimeter([x[1] for x in boundingBox], [x[0] - 1 for x in boundingBox], shape= shape) boxColor = (int(255 * (1 - (confidence ** 2))), int(255 * (confidence ** 2)), 0) draw.set_color(image, (rr, cc), boxColor, alpha= 0.8) draw.set_color(image, (rr2, cc2), boxColor, alpha= 0.8) draw.set_color(image, (rr3, cc3), boxColor, alpha= 0.8) draw.set_color(image, (rr4, cc4), boxColor, alpha= 0.8) draw.set_color(image, (rr5, cc5), boxColor, alpha= 0.8) if not makeImageOnly: io.imshow(image) io.show() detections = { "detections": detections, "image": image, "caption": 
"\n<br/>".join(imcaption) } except Exception as e: print("Unable to show image: "+str(e)) return detections if __name__ == "__main__": print(performDetect()) ```
Actor-Critic deep reinforcement learning on the gym CartPole game does not converge; high bounty offered.
小弟最近在自学深度强化学习,看的莫烦大佬的视频。其中有一个用AC算法玩gym库中CartPole的游戏实例,自己写的代码不知为何不能够收敛。考虑到自己自己写的程序中将AC网络写到一个类里去了,尝试过在A网络训练时截断C网络的梯度反向传播防止干扰,但还是不收敛。 小弟小白初学者自己瞎琢磨的,实在找不出原因,高分悬赏,希望大佬们能解惑。代码如下,其中有两个文件,一个是用以运行的主程序,另一个是主程序要调用的类,大佬们跑一下试试。 另外,真心诚意提问,请勿复制粘贴答非所问。 ``` ########主程序:AC_RL_run_this########## import gym from AC_RL_brain import ACNetwork def run_game(): step = 0 for episode in range(100000): episode_reward = 0 observation = env.reset() while True: if episode_reward > 20: env.render() action = RL.choose_action(observation) observation_, reward, done, _ = env.step(action) if done: reward = -20 RL.C_learn(observation, reward, observation_) RL.A_learn(observation, action) episode_reward += reward if done: break observation = observation_ step += 1 print('%d回合总回报:%f' % (episode, episode_reward)) print('game over') env.close() if __name__ == '__main__': env = gym.make('CartPole-v0') env.seed(1) RL = ACNetwork( n_actions=env.action_space.n, n_features=env.observation_space.shape[0], gamma=0.95, A_lr=0.001, C_lr=0.01, ) run_game() ########需要调用的类:AC_RL_brain########## import tensorflow as tf import numpy as np np.random.seed(2) tf.set_random_seed(2) # reproducible class ACNetwork: def __init__( self, n_actions, n_features, gamma, A_lr, C_lr, ): self.n_actions = n_actions self.n_features = n_features self.gamma = gamma self.A_lr = A_lr self.C_lr = C_lr self.td_error_real = 0 self._build_net() self.sess = tf.Session() self.sess.run(tf.global_variables_initializer()) def _build_net(self): # placeholder self.s = tf.placeholder(tf.float32, [1, self.n_features], "state") self.v_ = tf.placeholder(tf.float32, [1, 1], "v_next") self.r = tf.placeholder(tf.float32, None, 'r') self.a = tf.placeholder(tf.int32, None, "act") # A_net l1_A = tf.layers.dense( inputs=self.s, units=20, # number of hidden units activation=tf.nn.relu, kernel_initializer=tf.random_normal_initializer(0., .1), # weights bias_initializer=tf.constant_initializer(0.1), # biases ) self.acts_prob = tf.layers.dense( inputs=l1_A, units=self.n_actions, # output units activation=tf.nn.softmax, # get action probabilities kernel_initializer=tf.random_normal_initializer(0., .1), # weights bias_initializer=tf.constant_initializer(0.1), # biases ) self.log_prob = tf.log(self.acts_prob[0, self.a]) self.exp_v = tf.reduce_mean(self.log_prob * self.td_error_real) # advantage (TD_error) guided loss self.train_op_A = tf.train.AdamOptimizer(self.A_lr).minimize(-self.exp_v) # minimize(-exp_v) = maximize(exp_v) # C_net l1_C = tf.layers.dense( inputs=self.s, units=20, # number of hidden units activation=tf.nn.relu, # None # have to be linear to make sure the convergence of actor. # But linear approximator seems hardly learns the correct Q. 
kernel_initializer=tf.random_normal_initializer(0., .1), # weights bias_initializer=tf.constant_initializer(0.1), # biases ) self.v = tf.layers.dense( inputs=l1_C, units=1, # output units activation=None, kernel_initializer=tf.random_normal_initializer(0., .1), # weights bias_initializer=tf.constant_initializer(0.1), # biases ) self.td_error = self.r + self.gamma * self.v_ - self.v self.loss = tf.square(self.td_error) # TD_error = (r+gamma*V_next) - V_eval self.train_op_C = tf.train.AdamOptimizer(self.C_lr).minimize(self.loss) def choose_action(self, s): s = s[np.newaxis, :] probs = self.sess.run(self.acts_prob, {self.s: s}) # get probabilities for all actions return np.random.choice(np.arange(probs.shape[1]), p=probs.ravel()) # return a int def A_learn(self, s, a): s = s[np.newaxis, :] feed_dict = {self.s: s, self.a: a} _, exp_v = self.sess.run([self.train_op_A, self.exp_v], feed_dict) def C_learn(self, s, r, s_): s, s_ = s[np.newaxis, :], s_[np.newaxis, :] v_ = self.sess.run(self.v, {self.s: s_}) self.td_error_real, _ = self.sess.run([self.td_error, self.train_op_C], {self.s: s, self.v_: v_, self.r: r}) ```
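A likely cause, offered as a reading of the class above rather than a verified diagnosis: when `_build_net()` runs, `self.td_error_real` is still the plain Python number 0, so `exp_v = log_prob * 0` is baked into the graph as a constant and `train_op_A` receives a zero gradient forever; assigning a NumPy value to the attribute afterwards never reaches the graph. The usual repair is to feed the TD error through a placeholder, sketched below:

```
# in _build_net(), drive the actor loss from a placeholder
self.td_error_ph = tf.placeholder(tf.float32, None, "td_error")
self.exp_v = tf.reduce_mean(self.log_prob * self.td_error_ph)
self.train_op_A = tf.train.AdamOptimizer(self.A_lr).minimize(-self.exp_v)

# A_learn then takes the TD error that C_learn computed
def A_learn(self, s, a, td_error):
    s = s[np.newaxis, :]
    feed_dict = {self.s: s, self.a: a, self.td_error_ph: td_error}
    self.sess.run([self.train_op_A, self.exp_v], feed_dict)
```

In the run loop, `C_learn` would return its `td_error` and the caller passes it on: `RL.A_learn(observation, action, td_error)`.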
Every process of a Python 3 multi-process crawler stops running, but the program does not exit?
我写了一个多进程和多线程结合的爬虫(我不知道多进程和多线程怎样结合使用)所以我先说一下**我的思路**: * 首先我爬取的是[某车之家](https://www.autohome.com.cn/)的文章 * 汽车之家有很多种车,比如奥迪,宝马,奔驰,我创建一个进程池pool, 对应每一种车创建一个进程下载它的文章 * 然后,因为每种车下面有很多篇文章,我创建一个线程池,对应每一篇文章,创建一个线程来下载文章 * 创建进程池我使用的是multiprocessing.Pool * 创建线程池使用的是concurrent.futures.ThreadPoolExecutor ## 那么现在问题来了 * 当我刚开始运行我的代码的时候,因为我创建的进程池大小是cpu_count()=8,所以打开任务管理器可以看到8个python进程正在运行 ![图片说明](https://img-ask.csdn.net/upload/201901/26/1548506446_775132.png) * **然后,当代码运行一段时间后,进程池中的8个进程全部停止运行了** ![图片说明](https://img-ask.csdn.net/upload/201901/26/1548506504_930707.png) ![图片说明](https://img-ask.csdn.net/upload/201901/26/1548506544_201575.png) ## 可以看到此时代码并没有运行完毕,而且代码运行卡在这里无论等多久都不会继续运行 * 我观察发现,这些进程在下载某辆车如本田-雅阁的所有文章后,注意是将所有文章下载完毕才会停止运行,而且不再运行 ## 我想知道进程池中的进程为什么会停止运行,而我的函数没有停止?可以确定的是我的爬虫任务并没有全部完成,仅仅完成了一小部分。进程池中的每一个进程在爬取几辆车的所有文章后停止运行,求大佬解答,不甚感激。 ## 代码如下 ``` # coding=utf-8 import requests import os import re import json import time import random import threading import multiprocessing import concurrent.futures from bs4 import BeautifulSoup def change_title(title): rstr = r"[\/\\\:\*\?\"\<\>\|]" return re.sub(rstr, "", title) USER_AGENTS = [ "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)", "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)", "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)", "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)", "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)", "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)", "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)", "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)", "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6", "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1", "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0", "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20", "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52", ] http_ip = list() https_ip = list() with open(r'D:\pycharm\Spider\99mm\useful_ip.txt', 'r') as fp: lines = fp.readlines() for line in lines: ips = eval(line) if str(ips['kind']) == 'HTTP': http_ip.append(ips['proxy']) else: https_ip.append(ips['proxy']) def get_all_cars(main_url, file_path): car_dict = {} html = requests.get(main_url) soup = BeautifulSoup(html.text, "html.parser") catalog = soup.find("div", id="hotcar-1").find_all("div", class_="name") for cata in catalog[-1:]: # suv, 紧凑型车, 中型车 cata_a = cata.find("a") print(cata_a["href"]) print(cata_a.get_text()) car_url = main_url + 
cata_a["href"] car_html = requests.get(car_url) car_soup = BeautifulSoup(car_html.text, "html.parser") # 有4个 class_="tab-content-item" car_letter_boxes = car_soup.find("div", class_="tab-content-item").find_all("div", class_="uibox") for car_letter_box in car_letter_boxes[:]: # 车牌按字母排序 A~Z, 一个字母下有很多车牌, 对每个字母进行处理 car_brand_info = car_letter_box.find("div", class_="uibox-con rank-list rank-list-pic") if car_brand_info: car_brands = car_brand_info.find_all("dl", olr=re.compile("^.*$")) for car_brand in car_brands: # 一个车牌有很多种车型, 对每个车牌进行处理 brand_name = car_brand.find("div").find("a").get_text() print("-car brand-", brand_name) car_dict[cata_a.get_text() + "-" + brand_name] = {} car_brand_path = main_path + "\\" + cata_a.get_text() + "-" + brand_name if not os.path.exists(car_brand_path): os.mkdir(car_brand_path) # os.chdir(car_brand_path) car_name_lists = car_brand.find_all("ul", class_="rank-list-ul") for car_name_list in car_name_lists: car_name_lis = car_name_list.find_all("li", id=re.compile("^.*$")) for car_name_li in car_name_lis: car_a_tag = car_name_li.find("h4").find("a") specific_car_url = "https:" + car_a_tag["href"] car_name = car_a_tag.get_text() print("\t", car_name, "\t", specific_car_url) car_dict[cata_a.get_text() + "-" + brand_name][car_name] = specific_car_url brand_cars_path = car_brand_path + "\\" + car_name if not os.path.exists(brand_cars_path): os.mkdir(brand_cars_path) # os.chdir(brand_cars_path) # 至此, 找到了每一辆车的url, 需要从这个url中找到它对应的一系列文章 # get_each_car_articles(main_url, specific_car_url) else: continue return car_dict def get_each_car_articles(main_url, specific_car_url, file_path, headers, proxies, info): # main_url, specific_car_url, file_path, headers, proxies, info = args # 传入的是每一种车的url, 即specific_car_url article_dict = {} specific_car_html = requests.get(url=specific_car_url, headers=headers, proxies=proxies) specific_car_soup = BeautifulSoup(specific_car_html.text, "html.parser") art_temp = specific_car_soup.find("div", class_="athm-sub-nav__channel athm-js-sticky") if art_temp: art = art_temp.find_all("li") else: print(f"\t\t****article is None, url is {specific_car_url}****") return part_url = art[6].find("a")["href"] specific_car_article_url = main_url + part_url right_pos = specific_car_article_url.rfind("/") specific_car_article_url = specific_car_article_url[:right_pos + 1] specific_car_article_html = requests.get(specific_car_article_url, headers=headers, proxies=proxies) specific_car_article_soup = BeautifulSoup(specific_car_article_html.text, "html.parser") page_info = specific_car_article_soup.find("div", class_="page") page_num = 1 if page_info: pages = page_info.find_all("a", target="_self") page_num = int(pages[-2].get_text()) for i in range(1, page_num + 1): if i == 1: page_url = specific_car_article_url else: page_url = specific_car_article_url[:-4] + str(i) + specific_car_article_url[-3:] # print("\t"*2, f"正在查找第{i}页的文章\t", page_url) page_html = requests.get(page_url, headers=headers, proxies=proxies) page_soup = BeautifulSoup(page_html.text, "html.parser") articles = page_soup.find("div", class_="cont-info").find_all("li") for article in articles: each_article = article.find("h3").find("a") each_article_url = "https:" + each_article["href"] each_article_title = each_article.get_text() article_dict[each_article_title] = each_article_url os.chdir(file_path) with concurrent.futures.ThreadPoolExecutor(max_workers=8) as t_executor: for key, value in article_dict.items(): t_executor.submit(download_each_article, *(value, key,info)) # thread_list = [] # 
for key, value in article_dict.items(): # thread_list.append(threading.Thread(target=download_each_article, args=(value, key,info))) # [thread.start() for thread in thread_list] # [thread.join() for thread in thread_list] def download_each_article(each_article_url, each_article_title, info): headers = { "User-Agent": random.choice(USER_AGENTS), "Referer": "https://www.autohome.com.cn" } proxies = {"proxy": random.choice(http_ip)} # each_article_url, each_article_title, headers, proxies, info = args print(f"\t\t--下载文章-- {info}\t{each_article_title}\t{each_article_url}") article_html = requests.get(each_article_url, headers=headers, proxies=proxies) article_soup = BeautifulSoup(article_html.text, "html.parser") article_content = article_soup.find("div", class_="container article") if article_content: with open(f"{change_title(each_article_title)}.txt", "w+", encoding="utf-8") as f: time_span = article_content.find("div", class_="article-info").find("span", class_="time") time = time_span.get_text() time_dict = {"time": time} f.write(json.dumps(time_dict) + "\n\n") article_content_div = article_content.find("div", id="articleContent") for content in article_content_div.find_all("p"): if content.get_text().strip(): content_dict = {"content": content.get_text()} f.write(json.dumps(content_dict) + "\n") else: try: imgs = content.find_all("a") for i in imgs: img = i.find("img") img_dict = {f"<[image] {img['alt']}> ": "https:" + img["src"]} f.write(json.dumps(img_dict) + "\n") except: continue pages = article_content.find("div", class_="athm-page__num") if pages: for a in pages.find_all("a", target="_self")[1:]: next_page_url = "https://www.autohome.com.cn" + a["href"] pages_html = requests.get(next_page_url, headers=headers, proxies=proxies) pages_soup = BeautifulSoup(pages_html.text, "html.parser") pages_content_div = pages_soup.find("div", class_="container article").find("div", id="articleContent") for content in pages_content_div.find_all("p"): if content.get_text().strip(): content_dict = {"content": content.get_text()} f.write(json.dumps(content_dict) + "\n") else: try: imgs = content.find_all("a") for i in imgs: img = i.find("img") img_dict = {f"<[image] {img['alt']}> ": "https:" + img["src"]} f.write(json.dumps(img_dict) + "\n") except: continue # 下载评论 f.write("\n") article_comment_span = article_content.find("div", "article-tools").find("span", class_="comment") article_comment_url = "https:" + article_comment_span.find("a")["href"] # print(article_comment_url) basic_reply_url = "https://reply.autohome.com.cn/api/comments/show.json?count=50&" \ "page={}&id={}&appid=1&datatype=jsonp&order=0&replyid=0" html = requests.get(article_comment_url, headers=headers, proxies=proxies) html_soup = BeautifulSoup(html.text, "html.parser") article_id = re.search(r"articleid=([\d]*)#", article_comment_url).groups()[0] first_json_dict = json.loads(requests.get(basic_reply_url.format(1, article_id), headers=headers, proxies=proxies).text[1:-1]) page_num = int(first_json_dict["commentcount"]) // 50 + 1 for i in range(1, page_num + 1): json_dict = json.loads(requests.get(basic_reply_url.format(i, article_id)).text[1:-1]) comment_dicts = json_dict["commentlist"] for comment in comment_dicts: comment_dict = {} comment_dict["RMemberId"] = comment["RMemberId"] comment_dict["RMemberName"] = comment["RMemberName"] comment_dict["replydate"] = comment["replydate"] comment_dict["ReplyId"] = comment["ReplyId"] comment_dict["RObjId"] = comment["RObjId"] comment_dict["RTargetReplyId"] = comment["RTargetReplyId"] 
comment_dict["RTargetMemberId"] = comment["RTargetMemberId"] comment_dict["RReplyDate"] = comment["RReplyDate"] comment_dict["RContent"] = comment["RContent"] comment_dict["RFloor"] = comment["RFloor"] f.write(json.dumps(comment_dict) + "\n") print(f"**{info}-{each_article_title} completed") else: print(f"\tPicture article, passed. URL is {each_article_url}") if __name__ == '__main__': main_url = r"https://www.autohome.com.cn" main_path = r"D:\pycharm\python_work\autohome\汽车之家" start_time = time.time() proxies = {'proxy': random.choice(http_ip)} headers = { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) " "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36", "Referer": "https://www.autohome.com.cn" } car_dict = get_all_cars(main_url, main_path) # print(car_dict) # with concurrent.futures.ProcessPoolExecutor(max_workers=8) as p_executor: # for keys, values in car_dict.items(): # for key, value in values.items(): # file_path = main_path + "\\" + str(keys) + "\\" + key # info = f"-{keys}-{key}-" # p_executor.submit(get_each_car_articles, *(main_url, value, file_path, headers, proxies, info)) pool = multiprocessing.Pool() for keys, values in car_dict.items(): print(keys, values) for key, value in values.items(): print("\t", key, value) file_path = main_path + "\\" + str(keys) + "\\" + key info = f"-{keys}-{key}-" pool.apply_async(get_each_car_articles, args=(main_url, value, file_path, headers, proxies, info)) pool.close() pool.join() end_time = time.time() print("##########已完成##########") print(f"spend time {end_time-start_time}") ```
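Two details in the main block above can produce exactly this symptom; both are assumptions rather than a verified diagnosis. First, `pool.apply_async` swallows any exception raised inside a worker unless `.get()` is called on the returned `AsyncResult`, so a crashed task simply goes quiet. Second, every `requests.get` in the workers runs without a `timeout`, so one stalled connection can park a process forever while `pool.join()` keeps waiting. A sketch that surfaces both:

```
# keep the AsyncResult handles instead of firing and forgetting
results = []
for keys, values in car_dict.items():
    for key, value in values.items():
        file_path = main_path + "\\" + str(keys) + "\\" + key
        info = f"-{keys}-{key}-"
        results.append(pool.apply_async(
            get_each_car_articles,
            args=(main_url, value, file_path, headers, proxies, info)))
pool.close()
for r in results:
    try:
        r.get(timeout=1800)  # re-raises any exception the worker hit
    except Exception as e:
        print("worker failed:", e)
pool.join()
```

Adding `timeout=10` (the value is an assumption) to every `requests.get` call is worth doing at the same time; also note that `proxies={"proxy": ...}` is not a key requests recognizes (it expects `"http"`/`"https"`), so those proxies are silently ignored.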
devstack error: generate-subunit fail
各位朋友好:我在centos7环境下安装liberty版本的devstack环境时,执行./stack.sh后 报错如下: 2017-05-08 06:38:40.129 | You are using pip version 7.1.2, however version 9.0.1 is available. 2017-05-08 06:38:40.130 | You should consider upgrading via the 'pip install --upgrade pip' command. 2017-05-08 06:38:40.151 | + exit_trap 2017-05-08 06:38:40.151 | + local r=1 2017-05-08 06:38:40.152 | ++ jobs -p 2017-05-08 06:38:40.152 | + jobs= 2017-05-08 06:38:40.152 | + [[ -n '' ]] 2017-05-08 06:38:40.152 | + kill_spinner 2017-05-08 06:38:40.152 | + '[' '!' -z '' ']' 2017-05-08 06:38:40.152 | + [[ 1 -ne 0 ]] 2017-05-08 06:38:40.152 | + echo 'Error on exit' 2017-05-08 06:38:40.152 | Error on exit 2017-05-08 06:38:40.152 | + generate-subunit 1494225476 44 fail 2017-05-08 06:38:40.203 | Traceback (most recent call last): 2017-05-08 06:38:40.203 | File "/usr/bin/generate-subunit", line 7, in <module> 2017-05-08 06:38:40.203 | from os_testr.generate_subunit import main 2017-05-08 06:38:40.203 | File "/usr/lib/python2.7/site-packages/os_testr/__init__.py", line 19, in <module> 2017-05-08 06:38:40.203 | 'os_testr').version_string() 2017-05-08 06:38:40.203 | File "/usr/lib/python2.7/site-packages/pbr/version.py", line 466, in version_string 2017-05-08 06:38:40.203 | return self.semantic_version().brief_string() 2017-05-08 06:38:40.203 | File "/usr/lib/python2.7/site-packages/pbr/version.py", line 461, in semantic_version 2017-05-08 06:38:40.203 | self._semantic = self._get_version_from_pkg_resources() 2017-05-08 06:38:40.203 | File "/usr/lib/python2.7/site-packages/pbr/version.py", line 438, in _get_version_from_pkg_resources 2017-05-08 06:38:40.203 | import pkg_resources 2017-05-08 06:38:40.203 | File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 72, in <module> 2017-05-08 06:38:40.203 | import packaging.requirements 2017-05-08 06:38:40.203 | ImportError: No module named requirements 。 下面是我的local.conf配置文件 [[local|localrc]] # Define images to be automatically downloaded during the DevStack built process. IMAGE_URLS="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img" # Credentials DATABASE_PASSWORD=123456 ADMIN_PASSWORD=123456 SERVICE_PASSWORD=123456 SERVICE_TOKEN=pass RABBIT_PASSWORD=123456 #FLAT_INTERFACE=eth0 HOST_IP=192.168.192.130 SERVICE_HOST=192.168.192.130 MYSQL_HOST=192.168.192.130 RABBIT_HOST=192.168.192.130 GLANCE_HOSTPORT=192.168.192.130:9292 ## Neutron options Q_USE_SECGROUP=True FLOATING_RANGE=192.168.192.0/24 FIXED_RANGE=10.0.0.0/24 Q_FLOATING_ALLOCATION_POOL=start=192.168.192.202,end=192.168.192.210 PUBLIC_NETWORK_GATEWAY=192.168.192.2 Q_L3_ENABLED=True PUBLIC_INTERFACE=eth0 Q_USE_PROVIDERNET_FOR_PUBLIC=True OVS_PHYSICAL_BRIDGE=br-ex PUBLIC_BRIDGE=br-ex OVS_BRIDGE_MAPPINGS=public:br-ex # Work offline #OFFLINE=True # Reclone each time RECLONE=False # Logging # ------- # By default ``stack.sh`` output only goes to the terminal where it runs. It can # be configured to additionally log to a file by setting ``LOGFILE`` to the full # path of the destination log file. A timestamp will be appended to the given name. LOGFILE=/opt/stack/logs/stack.sh.log VERBOSE=True LOG_COLOR=True SCREEN_LOGDIR=/opt/stack/logs # the number of days by setting ``LOGDAYS``. LOGDAYS=1 # Database Backend MySQL enable_service mysql # RPC Backend RabbitMQ enable_service rabbit # Enable Keystone - OpenStack Identity Service enable_service key # Horizon - OpenStack Dashboard Service enable_service horizon # Enable Swift - Object Storage Service without replication. 
enable_service s-proxy s-object s-container s-account SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5 SWIFT_REPLICAS=1 # Enable Glance - OpenStack Image service enable_service g-api g-reg # Enable Cinder - Block Storage service for OpenStack VOLUME_GROUP="cinder-volumes" enable_service cinder c-api c-vol c-sch c-bak # Enable Heat (orchestration) Service enable_service heat h-api h-api-cfn h-api-cw h-eng # Enable Trove (database) Service enable_service trove tr-api tr-tmgr tr-cond # Enable Sahara (data_processing) Service enable_service sahara # Enable Tempest - The OpenStack Integration Test Suite enable_service tempest # Enabling Neutron (network) Service disable_service n-net enable_service q-svc enable_service q-agt enable_service q-dhcp enable_service q-l3 enable_service q-meta enable_service q-metering enable_service neutron ## Neutron - Load Balancing enable_service q-lbaas ## Neutron - Firewall as a Service enable_service q-fwaas ## Neutron - VPN as a Service enable_service q-vpn # VLAN configuration. #Q_PLUGIN=ml2 #ENABLE_TENANT_VLANS=True # GRE tunnel configuration #Q_PLUGIN=ml2 #ENABLE_TENANT_TUNNELS=True # VXLAN tunnel configuration Q_PLUGIN=ml2 Q_ML2_TENANT_NETWORK_TYPE=vxlan # Enable Ceilometer - Metering Service (metering + alarming) enable_service ceilometer-acompute ceilometer-acentral ceilometer-collector ceilometer-api enable_service ceilometer-alarm-notify ceilometer-alarm-eval enable_service ceilometer-anotification ## Enable NoVNC enable_service n-novnc n-cauth # Enable the Ceilometer devstack plugin enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer.git # Branches KEYSTONE_BRANCH=stable/liberty NOVA_BRANCH=stable/liberty NEUTRON_BRANCH=stable/liberty SWIFT_BRANCH=stable/liberty GLANCE_BRANCH=stable/liberty CINDER_BRANCH=stable/liberty HEAT_BRANCH=stable/liberty TROVE_BRANCH=stable/liberty HORIZON_BRANCH=stable/liberty SAHARA_BRANCH=stable/liberty CEILOMETER_BRANCH=stable/liberty TROVE_BRANCH=stable/liberty # Select Keystone's token format # Choose from 'UUID', 'PKI', or 'PKIZ' # INSERT THIS LINE... KEYSTONE_TOKEN_FORMAT=${KEYSTONE_TOKEN_FORMAT:-UUID} KEYSTONE_TOKEN_FORMAT=$(echo ${KEYSTONE_TOKEN_FORMAT} | tr '[:upper:]' '[:lower:]') [[post-config|$NOVA_CONF]] [DEFAULT] # Ceilometer notification driver instance_usage_audit=True instance_usage_audit_period=hour notify_on_state_change=vm_and_task_state notification_driver=nova.openstack.common.notifier.rpc_notifier notification_driver=ceilometer.compute.nova_notifier 请教各位朋友,如何解决,十分感谢。
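The traceback bottoms out in `pkg_resources` failing on `import packaging.requirements`, which on CentOS 7 usually points at a stale system `setuptools`/`packaging` pair rather than anything in the local.conf above. A common workaround, offered as a sketch (whether liberty-era DevStack tolerates the newest pip is an assumption worth testing):

```
sudo pip install --upgrade pip setuptools packaging
./unstack.sh
./stack.sh
```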
What should I do? Python keeps raising an object error.
Python keeps reporting "'str' object has no attribute 'find_element_by_id'": what should I do?
```
import colorsys
import urllib, os, uuid, re, time
from PIL import Image
from selenium.webdriver.common.action_chains import ActionChains
from selenium import webdriver
from StringIO import StringIO

wzj = "res"

def openBrowser():  # 001_2
    # global wzj
    wzj = webdriver.Firefox()  # open the Firefox browser
    # http://bj.gsxt.gov.cn/sydq/loginSydqAction!sydq.dhtml
    wzj.get('http://bj.gsxt.gov.cn/sydq/loginSydqAction!sydq.dhtml')
    image1_url = wzj.find_elements_by_class_name('gt_cut_bg_slice')[0].get_attribute('style')  # get the image's style
    image1_url = image1_url[23:-38]  # URL of image 1
    image2_url = wzj.find_elements_by_class_name('gt_cut_fullbg_slice')[0].get_attribute('style')
    image2_url = image2_url[23:-38]  # URL of image 2
    return [image1_url, image2_url]

def create(locapath, fileName):  # 001_1
    filePath = locapath + '/' + fileName
    if not os.path.exists(filePath):
        file = open(filePath, 'a+')  # file written to when downloading from the site
        file.close()
    return filePath

def downloadImg():  # 001
    list = openBrowser()
    for i in range(2):  # range builds an integer sequence; range(2) yields 0 and 1
        fileName = str(i) + '_test.jpg'
        urllib.urlretrieve(list[i], create('C:/Users/admin/Desktop/', fileName))  # download locally
        time.sleep(3)

def inputbyid():  # 001_3
    # type the search keyword
    # :text: Unicode, the text to type
    # :element_id: id of the input box element
    # keyword_qycx
    text = u"百度"
    element_id = "keyword_qycx"
    input_el = wzj.find_element_by_id(element_id)
    input_el.clear()
    input_el.send_keys(text)
    time.sleep(3.5)

def clickbyid():  # 001_4
    # click the search button
    # :element_id: id of the search button element
    # popup-submit
    element_id01 = "popup-submit"
    wzj.find_element_by_id(element_id01).click()
    time.sleep(3.5)
```
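Reading the script above, this looks like a scoping problem (stated as an assumption): inside `openBrowser()` the line `#global wzj` is commented out, so `wzj = webdriver.Firefox()` binds a *local* name and the module-level `wzj` keeps the string `"res"`; when `inputbyid()` later calls `wzj.find_element_by_id(...)`, it is calling a method on that string. A minimal sketch of the fix:

```
from selenium import webdriver

wzj = None  # module-level driver handle, filled in by openBrowser()

def openBrowser():
    global wzj                 # without this, the assignment below is local only
    wzj = webdriver.Firefox()  # now the module-level wzj is a WebDriver, not "res"
    wzj.get('http://bj.gsxt.gov.cn/sydq/loginSydqAction!sydq.dhtml')

def inputbyid():
    input_el = wzj.find_element_by_id("keyword_qycx")
    input_el.clear()
    input_el.send_keys(u"百度")
```

Returning the driver from `openBrowser()` and passing it into the other functions works just as well and avoids globals.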
Module import error: (cannot import name 'atof' from 'string') [Python]
代码: import os,sys sys.path.append('/nfs3group/chlgrp/datasets/Animals_with_Attributes/code/') from numpy import * from platt import * import pickle, bz2 def nameonly(x): return x.split('\t')[1] def loadstr(openname,converter=str): return [converter(c.strip()) for c in open(openname).readlines()] def bzUnpickle(openname): return pickle.load(bz2.BZ2File(openname)) feature_pattern = '/D:/shuxingfenlei/AwA-features/Animals_with_Attributes/Features/hist/%s-%s.txt' labels_pattern = '/D:/shuxingfenlei/AwA2-features/Animals_with_Attributes2/Features/ResNet101/%s-AwA2-labels.txt' all_features = ['cq','lss','phog','sift','surf','rgsift'] attribute_matrix = 2*loadstr('/shuxingfenlei/AwA2-data/Animals_with_Attributes2/predicate-matrix-binary.txt',dtype=float)-1 classnames = loadstr('/shuxingfenlei/AwA2-data/Animals_with_Attributes2/classes.txt',nameonly) attributenames = loadstr('/shuxingfenlei/AwA2-data/Animals_with_Attributes2/predicates.txt',nameonly) def create_data(all_classes,attribute_id): featurehist={} for feature in all_features: featurehist[feature]=[] labels=[] for classname in all_classes: class_id = classnames.index(classname) class_size = 0 for feature in all_features: featurefilename = feature_pattern % (classname,feature) print ('# ',featurefilename) histopen = bzUnpickle(featurefilename) featurehist[feature].extend( histopen ) labelfilename = labels_pattern % classname print ('# ',labelfilename) print ('#') labels.extend( bzUnpickle(labelfilename)[:,attribute_id] ) for feature in all_features: featurehist[feature]=array(featurehist[feature]).T # shogun likes its data matrices shaped FEATURES x SAMPLES labels = array(labels) return featurehist,labels def train_attribute(attribute_id, C, split=0): from shogun import Classifier,Features,Kernel,Distance attribute_id = int(attribute_id) print ("# attribute ",attributenames[attribute_id]) C = float(C) print ("# C ", C) if split == 0: train_classes=loadstr('/nfs3group/chlgrp/datasets/Animals_with_Attributes/trainclasses.txt') test_classes=loadstr('/nfs3group/chlgrp/datasets/Animals_with_Attributes/testclasses.txt') else: classnames = loadstr('/nfs3group/chlgrp/datasets/Animals_with_Attributes/classnames.txt') startid= (split-1)*10 stopid = split*10 test_classes = classnames[startid:stopid] train_classes = classnames[0:startid]+classnames[stopid:] Xtrn,Ltrn = create_data(train_classes,attribute_id) Xtst,Ltst = create_data(test_classes,attribute_id) if min(Ltrn) == max(Ltrn): # only 1 class Lprior = mean(Ltrn) prediction = sign(Lprior)*ones(len(Ltst)) probabilities = 0.1+0.8*0.5*(Lprior+1.)*ones(len(Ltst)) # fallback return prediction,probabilities,Ltst #sg('loglevel', 'WARN') widths={} for feature in all_features: traindata = array(Xtrn[feature][:,::50],float) # used to be 5*offset sg('set_distance', 'CHISQUARE', 'REAL') sg('clean_features', 'TRAIN') sg('set_features', 'TRAIN', traindata) sg('init_distance', 'TRAIN') DM=sg('get_distance_matrix') widths[feature] = median(DM.flatten()) del DM s = Classifier.LibSVM() #sg('new_svm', 'LIBSVM') Lplatt_trn = concatenate([Ltrn[i::10] for i in range(9)]) # 90% for training Lplatt_val = Ltrn[9::10] # remaining 10% for platt scaling feats_trn = Features.CombinedFeatures() feats_val = Features.CombinedFeatures() for feature in all_features: Xplatt_trn = concatenate([Xtrn[feature][:,i::10] for i in range(9)], axis=1) feats_trn.append_feature_obj( Features.RealFeatures(ascontiguousarray(Xplatt_trn)) ) #sg('add_features', 'TRAIN', Xplatt_trn) Xplatt_val = Xtrn[feature][:,9::10] 
feats_val.append_feature_obj( Features.RealFeatures(ascontiguousarray(Xplatt_val)) ) #sg('add_features', 'TEST', Xplatt_val) del Xplatt_trn,Xplatt_val,Xtrn[feature] labels_trn = Features.Labels(Lplatt_trn) #sg('set_labels', 'TRAIN', Lplatt_trn) kernel = Kernel.CombinedKernel() #sg('set_kernel', 'COMBINED', 5000) for featureset in all_features: kernel.append_kernel( Kernel.Chi2Kernel( 5000, widths[featureset]/5. ) ) #sg('add_kernel', 1., 'CHI2', 'REAL', 10, widths[featureset]/5. ) kernel.init(feats_trn,feats_trn) K=kernel.get_kernel_matrix() K.tofile('/scratch/chl/cvfold%d_C%g_%02d-trn.kernel' % (split, C, attribute_id)) del K s.set_max_train_time(600*60.) #sg('svm_max_train_time', 600*60.) # one hour should be plenty s.set_C(C,C) #sg('c', C) s.set_kernel(kernel) s.set_labels(labels_trn) #sg('init_kernel', 'TRAIN') try: s.train() #sg('train_classifier') except (RuntimeWarning,RuntimeError): # can't train, e.g. all samples have the same labels Lprior = mean(Ltrn) prediction = sign(Lprior) * ones(len(Ltst)) probabilities = 0.1+0.8*0.5*(Lprior+1.) * ones(len(Ltst)) savetxt('./DAP/cvfold%d_C%g_%02d.txt' % (split, C, attribute_id), prediction) savetxt('./DAP/cvfold%d_C%g_%02d.prob' % (split, C, attribute_id), probabilities) savetxt('./DAP/cvfold%d_C%g_%02d.labels' % (split, C, attribute_id), Ltst) return prediction,probabilities,Ltst bias = s.get_bias() alphas = s.get_alphas() #[bias, alphas]=sg('get_svm') #print bias,alphas kernel.init(feats_trn,feats_val) K=kernel.get_kernel_matrix() K.tofile('/scratch/chl/cvfold%d_C%g_%02d-val.kernel' % (split, C, attribute_id)) del K #sg('init_kernel', 'TEST') try: prediction=s.classify().get_labels() #prediction=sg('classify') platt_params = SigmoidTrain(prediction, Lplatt_val) probabilities = SigmoidPredict(prediction, platt_params) savetxt('./DAP/cvfold%d_C%g_%02d-val.txt' % (split, C, attribute_id), prediction) savetxt('./DAP/cvfold%d_C%g_%02d-val.prob' % (split, C, attribute_id), probabilities) savetxt('./DAP/cvfold%d_C%g_%02d-val.labels' % (split, C, attribute_id), Lplatt_val) savetxt('./DAP/cvfold%d_C%g_%02d-val.platt' % (split, C, attribute_id), platt_params) #print '#train-perf ',attribute_id,C,mean((prediction*Lplatt_val)>0),mean(Lplatt_val>0) #print '#platt-perf ',attribute_id,C,mean((sign(probabilities-0.5)*Lplatt_val)>0),mean(Lplatt_val>0) except RuntimeError: Lprior = mean(Ltrn) prediction = sign(Lprior)*ones(len(Ltst)) probabilities = 0.1+0.8*0.5*(Lprior+1.)*ones(len(Ltst)) print (sys.stderr, "#Error during testing. Using constant platt scaling") platt_params=[1.,0.] 
# ----------------------------- now apply to test classes ------------------ feats_tst = Features.CombinedFeatures() #sg('clean_features', 'TEST') for feature in all_features: feats_tst.append_feature_obj( Features.RealFeatures(ascontiguousarray(Xtst[feature])) ) del Xtst[feature] kernel.init(feats_trn,feats_tst) K=kernel.get_kernel_matrix() K.tofile('/scratch/chl/cvfold%d_C%g_%02d-tst.kernel' % (split, C, attribute_id)) del K #sg('init_kernel', 'TEST') prediction=s.classify().get_labels() #prediction=sg('classify') probabilities = SigmoidPredict(prediction, platt_params) savetxt('./DAP/cvfold%d_C%g_%02d.txt' % (split, C, attribute_id), prediction) savetxt('./DAP/cvfold%d_C%g_%02d.prob' % (split, C, attribute_id), probabilities) savetxt('./DAP/cvfold%d_C%g_%02d.labels' % (split, C, attribute_id), Ltst) return prediction,probabilities,Ltst if __name__ == '__main__': import sys try: attribute_id = int(sys.argv[1]) except IndexError: print ("Must specify attribute ID!") raise SystemExit try: split = int(sys.argv[2]) except IndexError: split = 0 try: C = float(sys.argv[3]) except IndexError: C = 10. pred,prob,Ltst = train_attribute(attribute_id,C,split) print ("Done.", attribute_id, C, split)
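The `cannot import name 'atof' from 'string'` message is a Python 2/3 gap: `string.atof` was removed in Python 3, where the builtin `float()` replaces it. The offending import is not visible in the file above, so it presumably sits in one of the star-imported modules (`platt.py` is the likely one, but that is an assumption). A minimal sketch of the substitution:

```
# Python 2 only:
#     from string import atof
# Python 3 replacement that keeps existing atof(...) call sites working:
atof = float

print(atof("3.14"))  # 3.14
```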
How can the TensorFlow-based pix2pix code produce an output resolution different from the input resolution?
Question: after making my own paired data, how can the input and output resolutions differ? For example, in each pair the input is 256×256 while the output and target are both 200×256. Which parameters need to be modified?
Paper reference: "Image-to-Image Translation with Conditional Adversarial Networks"
Code reference: https://blog.csdn.net/MOU_IT/article/details/80802407?utm_source=blogxgwz0

# coding=utf-8
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
import numpy as np
import glob
import random
import collections
import math
import time

# https://github.com/affinelayer/pix2pix-tensorflow
train_input_dir = "D:/Project/pix2pix-tensorflow-master/facades/train/"       # training-set input
train_output_dir = "D:/Project/pix2pix-tensorflow-master/facades/train_out/"  # training-set output
test_input_dir = "D:/Project/pix2pix-tensorflow-master/facades/val/"          # test-set input
test_output_dir = "D:/Project/pix2pix-tensorflow-master/facades/test_out/"    # test-set output
checkpoint = "D:/Project/pix2pix-tensorflow-master/facades/train_out/"        # checkpoint directory

seed = None
max_steps = None     # number of training steps (0 to disable)
max_epochs = 200     # number of training epochs
progress_freq = 50   # display progress every progress_freq steps
trace_freq = 0       # trace execution every trace_freq steps
display_freq = 50    # write current training images every display_freq steps
save_freq = 500      # save model every save_freq steps, 0 to disable

separable_conv = False    # use separable convolutions in the generator
aspect_ratio = 1          # aspect ratio of output images (width/height)
batch_size = 1            # number of images in batch
which_direction = "BtoA"  # choices=["AtoB", "BtoA"]
ngf = 64                  # number of generator filters in first conv layer
ndf = 64                  # number of discriminator filters in first conv layer
scale_size = 286          # scale images to this size before cropping to 256x256
flip = True               # flip images horizontally
no_flip = True            # don't flip images horizontally
lr = 0.0002               # initial learning rate for adam
beta1 = 0.5               # momentum term of adam
l1_weight = 100.0         # weight on L1 term for generator gradient
gan_weight = 1.0          # weight on GAN term for generator gradient
output_filetype = "png"   # output image format

EPS = 1e-12       # tiny constant so the log losses never see exactly 0
CROP_SIZE = 256   # crop size

# named tuples holding the loaded dataset and the built model
Examples = collections.namedtuple("Examples", "paths, inputs, targets, count, steps_per_epoch")
Model = collections.namedtuple("Model",
                               "outputs, predict_real, predict_fake, discrim_loss, discrim_grads_and_vars, "
                               "gen_loss_GAN, gen_loss_L1, gen_grads_and_vars, train")

# image preprocessing: [0, 1] => [-1, 1]
def preprocess(image):
    with tf.name_scope("preprocess"):
        return image * 2 - 1

# image postprocessing: [-1, 1] => [0, 1]
def deprocess(image):
    with tf.name_scope("deprocess"):
        return (image + 1) / 2

# discriminator convolution; batch_input is [batch, 256, 256, 6]
def discrim_conv(batch_input, out_channels, stride):
    # [batch, 256, 256, 6] => [batch, 258, 258, 6]
    padded_input = tf.pad(batch_input, [[0, 0], [1, 1], [1, 1], [0, 0]], mode="CONSTANT")
    # [0,0]: batch dim not padded
    # [1,1]: one column of zero padding on each side of the width
    # [1,1]: one row of zero padding above and below the height
    # [0,0]: channel dim not padded
    return tf.layers.conv2d(padded_input, out_channels, kernel_size=4, strides=(stride, stride),
                            padding="valid", kernel_initializer=tf.random_normal_initializer(0, 0.02))

# generator convolution: 4x4 kernel, stride 2, output is half the input size
def gen_conv(batch_input, out_channels):
    # [batch, in_height, in_width, in_channels] => [batch, out_height, out_width, out_channels]
    initializer = tf.random_normal_initializer(0, 0.02)
    if separable_conv:
        return tf.layers.separable_conv2d(batch_input, out_channels, kernel_size=4, strides=(2, 2),
                                          padding="same", depthwise_initializer=initializer,
                                          pointwise_initializer=initializer)
    else:
        return tf.layers.conv2d(batch_input, out_channels, kernel_size=4, strides=(2, 2),
                                padding="same", kernel_initializer=initializer)

# generator deconvolution
def gen_deconv(batch_input, out_channels):
    # [batch, in_height, in_width, in_channels] => [batch, out_height, out_width, out_channels]
    initializer = tf.random_normal_initializer(0, 0.02)
    if separable_conv:
        _b, h, w, _c = batch_input.shape
        resized_input = tf.image.resize_images(batch_input, [h * 2, w * 2],
                                               method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
        return tf.layers.separable_conv2d(resized_input, out_channels, kernel_size=4, strides=(1, 1),
                                          padding="same", depthwise_initializer=initializer,
                                          pointwise_initializer=initializer)
    else:
        return tf.layers.conv2d_transpose(batch_input, out_channels, kernel_size=4, strides=(2, 2),
                                          padding="same", kernel_initializer=initializer)

# leaky ReLU activation
def lrelu(x, a):
    with tf.name_scope("lrelu"):
        # adding these together creates the leak part and linear part
        # then cancels them out by subtracting/adding an absolute value term
        # leak: a*x/2 - a*abs(x)/2
        # linear: x/2 + abs(x)/2
        # this block looks like it has 2 inputs on the graph unless we do this
        x = tf.identity(x)
        return (0.5 * (1 + a)) * x + (0.5 * (1 - a)) * tf.abs(x)

# batch normalization
def batchnorm(inputs):
    return tf.layers.batch_normalization(inputs, axis=3, epsilon=1e-5, momentum=0.1, training=True,
                                         gamma_initializer=tf.random_normal_initializer(1.0, 0.02))

# check image dimensions
def check_image(image):
    assertion = tf.assert_equal(tf.shape(image)[-1], 3, message="image must have 3 color channels")
    with tf.control_dependencies([assertion]):
        image = tf.identity(image)
    if image.get_shape().ndims not in (3, 4):
        raise ValueError("image must be either 3 or 4 dimensions")
    # make the last dimension 3 so that you can unstack the colors
    shape = list(image.get_shape())
    shape[-1] = 3
    image.set_shape(shape)
    return image

# strip the extension and return the bare file name
def get_name(path):
    # os.path.basename() returns the last component of path (empty if path ends with / or \)
    # os.path.splitext() splits the name from the extension, returning an (fname, fextension) tuple
    name, _ = os.path.splitext(os.path.basename(path))
    return name

# load the dataset: read files -> decode -> normalize -> split into input and target
# -> map pixels to [-1, 1] -> reshape
def load_examples(input_dir):
    if input_dir is None or not os.path.exists(input_dir):
        raise Exception("input_dir does not exist")
    # glob returns every file matching the pattern as a list
    input_paths = glob.glob(os.path.join(input_dir, "*.jpg"))
    decode = tf.image.decode_jpeg
    if len(input_paths) == 0:
        input_paths = glob.glob(os.path.join(input_dir, "*.png"))
        decode = tf.image.decode_png
    if len(input_paths) == 0:
        raise Exception("input_dir contains no image files")
    # sort numerically when the file names are numbers, otherwise lexicographically
    if all(get_name(path).isdigit() for path in input_paths):
        input_paths = sorted(input_paths, key=lambda path: int(get_name(path)))
    else:
        input_paths = sorted(input_paths)
    sess = tf.Session()
    with tf.name_scope("load_images"):
        # pack all file names into an internal tf queue; shuffle=True in training mode
        path_queue = tf.train.string_input_producer(input_paths, shuffle=True)
        # read() returns a file name (key) and that file's contents (value), one file per call
        reader = tf.WholeFileReader()
        paths, contents = reader.read(path_queue)
        # decode the file and normalize the image to [0, 1]
        raw_input = decode(contents)
        raw_input = tf.image.convert_image_dtype(raw_input, dtype=tf.float32)
        # assert that the image has 3 channels
        assertion = tf.assert_equal(tf.shape(raw_input)[2], 3, message="image does not have 3 channels")
        # control_dependencies only takes effect when its body creates an op; tf.identity
        # adds a new node to the graph, so the dependency on the assertion is enforced
        with tf.control_dependencies([assertion]):
            raw_input = tf.identity(raw_input)
        raw_input.set_shape([None, None, 3])
        # pixel values go from [0,1] to [-1,1]; the paired image is split down the middle
        width = tf.shape(raw_input)[1]  # [height, width, channels]
        a_images = preprocess(raw_input[:, :width // 2, :])  # 256*256*3
        b_images = preprocess(raw_input[:, width // 2:, :])  # 256*256*3
    # which_direction is "BtoA" here
    if which_direction == "AtoB":
        inputs, targets = [a_images, b_images]
    elif which_direction == "BtoA":
        inputs, targets = [b_images, a_images]
    else:
        raise Exception("invalid direction")
    # synchronize seed for image operations so that we do the same operations to both
    # input and output images
    seed = random.randint(0, 2 ** 31 - 1)
    # preprocessing: flip and reshape
    with tf.name_scope("input_images"):
        input_images = transform(inputs)
    with tf.name_scope("target_images"):
        target_images = transform(targets)
    # batch the input and target images
    paths_batch, inputs_batch, targets_batch = tf.train.batch(
        [paths, input_images, target_images], batch_size=batch_size)
    steps_per_epoch = int(math.ceil(len(input_paths) / batch_size))
    return Examples(
        paths=paths_batch,                # batch of file names
        inputs=inputs_batch,              # batch of input images
        targets=targets_batch,            # batch of target images
        count=len(input_paths),           # dataset size
        steps_per_epoch=steps_per_epoch,  # number of batches per epoch
    )

# image preprocessing: flip, rescale, crop
def transform(image):
    r = image
    if flip:
        r = tf.image.random_flip_left_right(r, seed=seed)
    # area produces a nice downscaling, but does nearest neighbor for upscaling
    # assume we're going to be doing downscaling here
    r = tf.image.resize_images(r, [scale_size, scale_size], method=tf.image.ResizeMethod.AREA)
    offset = tf.cast(tf.floor(tf.random_uniform([2], 0, scale_size - CROP_SIZE + 1, seed=seed)),
                     dtype=tf.int32)
    if scale_size > CROP_SIZE:
        r = tf.image.crop_to_bounding_box(r, offset[0], offset[1], CROP_SIZE, CROP_SIZE)
    elif scale_size < CROP_SIZE:
        raise Exception("scale size cannot be less than crop size")
    return r

# create the generator, an encoder-decoder variant; input and output are 256*256*3 in [-1,1]
def create_generator(generator_inputs, generator_outputs_channels):
    layers = []
    # encoder_1: [batch, 256, 256, in_channels] => [batch, 128, 128, ngf]
    with tf.variable_scope("encoder_1"):
        output = gen_conv(generator_inputs, ngf)  # ngf filters in the first conv layer, default 64
        layers.append(output)
    layer_specs = [
        ngf * 2,  # encoder_2: [batch, 128, 128, ngf] => [batch, 64, 64, ngf * 2]
        ngf * 4,  # encoder_3: [batch, 64, 64, ngf * 2] => [batch, 32, 32, ngf * 4]
        ngf * 8,  # encoder_4: [batch, 32, 32, ngf * 4] => [batch, 16, 16, ngf * 8]
        ngf * 8,  # encoder_5: [batch, 16, 16, ngf * 8] => [batch, 8, 8, ngf * 8]
        ngf * 8,  # encoder_6: [batch, 8, 8, ngf * 8] => [batch, 4, 4, ngf * 8]
        ngf * 8,  # encoder_7: [batch, 4, 4, ngf * 8] => [batch, 2, 2, ngf * 8]
        ngf * 8,  # encoder_8: [batch, 2, 2, ngf * 8] => [batch, 1, 1, ngf * 8]
    ]
    # convolutional encoder
    for out_channels in layer_specs:
        with tf.variable_scope("encoder_%d" % (len(layers) + 1)):
            # apply the activation to the previous layer
            rectified = lrelu(layers[-1], 0.2)
            # [batch, in_height, in_width, in_channels] => [batch, in_height/2, in_width/2, out_channels]
            convolved = gen_conv(rectified, out_channels)
            output = batchnorm(convolved)
            layers.append(output)
    layer_specs = [
        (ngf * 8, 0.5),  # decoder_8: [batch, 1, 1, ngf * 8] => [batch, 2, 2, ngf * 8 * 2]
        (ngf * 8, 0.5),  # decoder_7: [batch, 2, 2, ngf * 8 * 2] => [batch, 4, 4, ngf * 8 * 2]
        (ngf * 8, 0.5),  # decoder_6: [batch, 4, 4, ngf * 8 * 2] => [batch, 8, 8, ngf * 8 * 2]
        (ngf * 8, 0.0),  # decoder_5: [batch, 8, 8, ngf * 8 * 2] => [batch, 16, 16, ngf * 8 * 2]
        (ngf * 4, 0.0),  # decoder_4: [batch, 16, 16, ngf * 8 * 2] => [batch, 32, 32, ngf * 4 * 2]
        (ngf * 2, 0.0),  # decoder_3: [batch, 32, 32, ngf * 4 * 2] => [batch, 64, 64, ngf * 2 * 2]
        (ngf, 0.0),      # decoder_2: [batch, 64, 64, ngf * 2 * 2] => [batch, 128, 128, ngf * 2]
    ]
    # convolutional decoder
    num_encoder_layers = len(layers)  # 8
    for decoder_layer, (out_channels, dropout) in enumerate(layer_specs):
        skip_layer = num_encoder_layers - decoder_layer - 1
        with tf.variable_scope("decoder_%d" % (skip_layer + 1)):
            if decoder_layer == 0:
                # first decoder layer doesn't have skip connections
                # since it is directly connected to the skip_layer
                input = layers[-1]
            else:
                input = tf.concat([layers[-1], layers[skip_layer]], axis=3)
            rectified = tf.nn.relu(input)
            # [batch, in_height, in_width, in_channels] => [batch, in_height*2, in_width*2, out_channels]
            output = gen_deconv(rectified, out_channels)
            output = batchnorm(output)
            if dropout > 0.0:
                output = tf.nn.dropout(output, keep_prob=1 - dropout)
            layers.append(output)
    # decoder_1: [batch, 128, 128, ngf * 2] => [batch, 256, 256, generator_outputs_channels]
    with tf.variable_scope("decoder_1"):
        input = tf.concat([layers[-1], layers[0]], axis=3)
        rectified = tf.nn.relu(input)
        output = gen_deconv(rectified, generator_outputs_channels)
        output = tf.tanh(output)
        layers.append(output)
    return layers[-1]

# create the discriminator; inputs are the generated and real images, two [batch,256,256,3]
# tensors in [-1,1]; output is [batch,30,30,1] with probability values
def create_discriminator(discrim_inputs, discrim_targets):
    n_layers = 3
    layers = []
    # 2x [batch, height, width, in_channels] => [batch, height, width, in_channels * 2]
    input = tf.concat([discrim_inputs, discrim_targets], axis=3)
    # layer_1: [batch, 256, 256, in_channels * 2] => [batch, 128, 128, ndf]
    with tf.variable_scope("layer_1"):
        convolved = discrim_conv(input, ndf, stride=2)
        rectified = lrelu(convolved, 0.2)
        layers.append(rectified)
    # layer_2: [batch, 128, 128, ndf] => [batch, 64, 64, ndf * 2]
    # layer_3: [batch, 64, 64, ndf * 2] => [batch, 32, 32, ndf * 4]
    # layer_4: [batch, 32, 32, ndf * 4] => [batch, 31, 31, ndf * 8]
    for i in range(n_layers):
        with tf.variable_scope("layer_%d" % (len(layers) + 1)):
            out_channels = ndf * min(2 ** (i + 1), 8)
            stride = 1 if i == n_layers - 1 else 2  # last layer here has stride 1
            convolved = discrim_conv(layers[-1], out_channels, stride=stride)
            normalized = batchnorm(convolved)
            rectified = lrelu(normalized, 0.2)
            layers.append(rectified)
    # layer_5: [batch, 31, 31, ndf * 8] => [batch, 30, 30, 1]
    with tf.variable_scope("layer_%d" % (len(layers) + 1)):
        convolved = discrim_conv(rectified, out_channels=1, stride=1)
        output = tf.sigmoid(convolved)
        layers.append(output)
    return layers[-1]

# create the pix2pix model; inputs and targets are [batch_size, height, width, channels]
def create_model(inputs, targets):
    with tf.variable_scope("generator"):
        out_channels = int(targets.get_shape()[-1])
        outputs = create_generator(inputs, out_channels)
    # create two copies of discriminator, one for real pairs and one for fake pairs
    # they share the same underlying variables
    with tf.name_scope("real_discriminator"):
        with tf.variable_scope("discriminator"):
            # 2x [batch, height, width, channels] => [batch, 30, 30, 1]
            predict_real = create_discriminator(inputs, targets)  # conditioning image and real image
    with tf.name_scope("fake_discriminator"):
        with tf.variable_scope("discriminator", reuse=True):
            # 2x [batch, height, width, channels] => [batch, 30, 30, 1]
            predict_fake = create_discriminator(inputs, outputs)  # conditioning image and generated image
    # discriminator loss; the discriminator wants V(G,D) as large as possible
    with tf.name_scope("discriminator_loss"):
        # minimizing -tf.log will try to get inputs to 1
        # predict_real => 1
        # predict_fake => 0
        discrim_loss = tf.reduce_mean(-(tf.log(predict_real + EPS) + tf.log(1 - predict_fake + EPS)))
    # generator loss; the generator wants V(G,D) as small as possible
    with tf.name_scope("generator_loss"):
        # predict_fake => 1
        # abs(targets - outputs) => 0
        gen_loss_GAN = tf.reduce_mean(-tf.log(predict_fake + EPS))
        gen_loss_L1 = tf.reduce_mean(tf.abs(targets - outputs))
        gen_loss = gen_loss_GAN * gan_weight + gen_loss_L1 * l1_weight
    # discriminator training
    with tf.name_scope("discriminator_train"):
        # parameters the discriminator optimizes
        discrim_tvars = [var for var in tf.trainable_variables() if var.name.startswith("discriminator")]
        discrim_optim = tf.train.AdamOptimizer(lr, beta1)
        # gradients of the loss w.r.t. those parameters
        discrim_grads_and_vars = discrim_optim.compute_gradients(discrim_loss, var_list=discrim_tvars)
        # apply the gradients; returns an op
        discrim_train = discrim_optim.apply_gradients(discrim_grads_and_vars)
    # generator training
    with tf.name_scope("generator_train"):
        with tf.control_dependencies([discrim_train]):
            # parameters the generator optimizes
            gen_tvars = [var for var in tf.trainable_variables() if var.name.startswith("generator")]
            gen_optim = tf.train.AdamOptimizer(lr, beta1)
            gen_grads_and_vars = gen_optim.compute_gradients(gen_loss, var_list=gen_tvars)
            gen_train = gen_optim.apply_gradients(gen_grads_and_vars)
    # tf.train.ExponentialMovingAverage keeps a shadow variable for every tracked variable,
    # initialized to the variable's initial value and updated as
    #   shadow_variable = decay * shadow_variable + (1 - decay) * variable
    # which makes the reported losses more robust
    ema = tf.train.ExponentialMovingAverage(decay=0.99)
    update_losses = ema.apply([discrim_loss, gen_loss_GAN, gen_loss_L1])
    global_step = tf.train.get_or_create_global_step()
    incr_global_step = tf.assign(global_step, global_step + 1)
    return Model(
        predict_real=predict_real,  # probabilities for (input, real) pairs, [batch,30,30,1]
        predict_fake=predict_fake,  # probabilities for (input, generated) pairs, [batch,30,30,1]
        discrim_loss=ema.average(discrim_loss),           # discriminator loss
        discrim_grads_and_vars=discrim_grads_and_vars,    # discriminator grads and vars
        gen_loss_GAN=ema.average(gen_loss_GAN),           # generator GAN loss
        gen_loss_L1=ema.average(gen_loss_L1),             # generator L1 loss
        gen_grads_and_vars=gen_grads_and_vars,            # generator grads and vars
        outputs=outputs,                                  # images produced by the generator
        train=tf.group(update_losses, incr_global_step, gen_train),  # grouped ops to run
    )

# save images
def save_images(output_dir, fetches, step=None):
    image_dir = os.path.join(output_dir, "images")
    if not os.path.exists(image_dir):
        os.makedirs(image_dir)
    filesets = []
    for i, in_path in enumerate(fetches["paths"]):
        name, _ = os.path.splitext(os.path.basename(in_path.decode("utf8")))
        fileset = {"name": name, "step": step}
        for kind in ["inputs", "outputs", "targets"]:
            filename = name + "-" + kind + ".png"
            if step is not None:
                filename = "%08d-%s" % (step, filename)
            fileset[kind] = filename
            out_path = os.path.join(image_dir, filename)
            contents = fetches[kind][i]
            with open(out_path, "wb") as f:
                f.write(contents)
        filesets.append(fileset)
    return filesets

# write the results to an HTML page
def append_index(output_dir, filesets, step=False):
    index_path = os.path.join(output_dir, "index.html")
    if os.path.exists(index_path):
        index = open(index_path, "a")
    else:
        index = open(index_path, "w")
        index.write("<html><body><table><tr>")
        if step:
            index.write("<th>step</th>")
        index.write("<th>name</th><th>input</th><th>output</th><th>target</th></tr>")
    for fileset in filesets:
        index.write("<tr>")
        if step:
            index.write("<td>%d</td>" % fileset["step"])
        index.write("<td>%s</td>" % fileset["name"])
        for kind in ["inputs", "outputs", "targets"]:
            index.write("<td><img src='images/%s'></td>" % fileset[kind])
        index.write("</tr>")
    return index_path

# resize the image and convert [0,1] floats to [0,255] uint8
def convert(image):
    if aspect_ratio != 1.0:
        # upscale to correct aspect ratio
        size = [CROP_SIZE, int(round(CROP_SIZE * aspect_ratio))]
        image = tf.image.resize_images(image, size=size, method=tf.image.ResizeMethod.BICUBIC)
    return tf.image.convert_image_dtype(image, dtype=tf.uint8, saturate=True)

# training entry point
def train():
    # seed the random number generators
    global seed
    if seed is None:
        seed = random.randint(0, 2 ** 31 - 1)
    tf.set_random_seed(seed)
    np.random.seed(seed)
    random.seed(seed)
    # create the output directory
    if not os.path.exists(train_output_dir):
        os.makedirs(train_output_dir)
    # load the dataset; inputs and targets are scaled to [-1,1]
    examples = load_examples(train_input_dir)
    print("load successful ! examples count = %d" % examples.count)
    # create the model; inputs and targets are [batch_size, height, width, channels]
    model = create_model(examples.inputs, examples.targets)
    print("create model successful!")
    # [-1, 1] => [0, 1]
    inputs = deprocess(examples.inputs)
    targets = deprocess(examples.targets)
    outputs = deprocess(model.outputs)
    # [0,1] floats => [0,255] RGB
    with tf.name_scope("convert_inputs"):
        converted_inputs = convert(inputs)
    with tf.name_scope("convert_targets"):
        converted_targets = convert(targets)
    with tf.name_scope("convert_outputs"):
        converted_outputs = convert(outputs)
    # encode the images so they can be saved
    with tf.name_scope("encode_images"):
        display_fetches = {
            "paths": examples.paths,
            # tf.map_fn applies the function to every element of the batch
            "inputs": tf.map_fn(tf.image.encode_png, converted_inputs, dtype=tf.string, name="input_pngs"),
            "targets": tf.map_fn(tf.image.encode_png, converted_targets, dtype=tf.string, name="target_pngs"),
            "outputs": tf.map_fn(tf.image.encode_png, converted_outputs, dtype=tf.string, name="output_pngs"),
        }
    with tf.name_scope("parameter_count"):
        parameter_count = tf.reduce_sum([tf.reduce_prod(tf.shape(v)) for v in tf.trainable_variables()])
    # keep only the most recent checkpoints
    saver = tf.train.Saver(max_to_keep=20)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        print("parameter_count =", sess.run(parameter_count))
        if max_epochs is not None:
            max_steps = examples.steps_per_epoch * max_epochs  # e.g. 400 steps/epoch x 200 epochs = 80000
        # data is read from files, so the input-pipeline threads must be started;
        # start_queue_runners fills the queues so dequeue ops can fetch samples
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        print("begin training......")
        print("max_steps:", max_steps)
        start = time.time()
        for step in range(max_steps):
            def should(freq):
                return freq > 0 and ((step + 1) % freq == 0 or step == max_steps - 1)
            print("step:", step)
            # dictionary of all ops to run
            fetches = {
                "train": model.train
            }
            # progress_freq is 50: compute the three losses every 50 steps
            if should(progress_freq):
                fetches["discrim_loss"] = model.discrim_loss
                fetches["gen_loss_GAN"] = model.gen_loss_GAN
                fetches["gen_loss_L1"] = model.gen_loss_L1
            # display_freq is 50: fetch input/target/output images every 50 steps
            if should(display_freq):
                fetches["display"] = display_fetches
            # run the collected ops
            results = sess.run(fetches)
            # display_freq is 50: save input/target/output images every 50 steps
            if should(display_freq):
                print("saving display images")
                filesets = save_images(train_output_dir, results["display"], step=step)
                append_index(train_output_dir, filesets, step=True)
            # progress_freq is 50: print the three losses every 50 steps
            if should(progress_freq):
                # global_step will have the correct step count if we resume from a checkpoint
                train_epoch = math.ceil(step / examples.steps_per_epoch)
                train_step = (step - 1) % examples.steps_per_epoch + 1
                rate = (step + 1) * batch_size / (time.time() - start)
                remaining = (max_steps - step) * batch_size / rate
                print("progress epoch %d step %d image/sec %0.1f remaining %dm" % (
                    train_epoch, train_step, rate, remaining / 60))
                print("discrim_loss", results["discrim_loss"])
                print("gen_loss_GAN", results["gen_loss_GAN"])
                print("gen_loss_L1", results["gen_loss_L1"])
            # save_freq is 500: save the model every 500 steps
            if should(save_freq):
                print("saving model")
                saver.save(sess, os.path.join(train_output_dir, "model"), global_step=step)

# testing entry point
def test():
    # seed the random number generators
    global seed
    if seed is None:
        seed = random.randint(0, 2 ** 31 - 1)
    tf.set_random_seed(seed)
    np.random.seed(seed)
    random.seed(seed)
    # create the output directory
    if not os.path.exists(test_output_dir):
        os.makedirs(test_output_dir)
    if checkpoint is None:
        raise Exception("checkpoint required for test mode")
    # disable these features in test mode (global so transform() actually sees them)
    global scale_size, flip
    scale_size = CROP_SIZE
    flip = False
    # load the dataset
    examples = load_examples(test_input_dir)
    print("load successful ! examples count = %d" % examples.count)
    # create the model; inputs and targets are [batch_size, height, width, channels]
    model = create_model(examples.inputs, examples.targets)
    print("create model successful!")
    # [-1, 1] => [0, 1]
    inputs = deprocess(examples.inputs)
    targets = deprocess(examples.targets)
    outputs = deprocess(model.outputs)
    # [0,1] floats => [0,255] RGB
    with tf.name_scope("convert_inputs"):
        converted_inputs = convert(inputs)
    with tf.name_scope("convert_targets"):
        converted_targets = convert(targets)
    with tf.name_scope("convert_outputs"):
        converted_outputs = convert(outputs)
    # encode the images so they can be saved
    with tf.name_scope("encode_images"):
        display_fetches = {
            "paths": examples.paths,
            "inputs": tf.map_fn(tf.image.encode_png, converted_inputs, dtype=tf.string, name="input_pngs"),
            "targets": tf.map_fn(tf.image.encode_png, converted_targets, dtype=tf.string, name="target_pngs"),
            "outputs": tf.map_fn(tf.image.encode_png, converted_outputs, dtype=tf.string, name="output_pngs"),
        }
    sess = tf.InteractiveSession()
    saver = tf.train.Saver(max_to_keep=1)
    ckpt = tf.train.get_checkpoint_state(checkpoint)
    saver.restore(sess, ckpt.model_checkpoint_path)
    start = time.time()
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    for step in range(examples.count):
        results = sess.run(display_fetches)
        filesets = save_images(test_output_dir, results)
        for i, f in enumerate(filesets):
            print("evaluated image", f["name"])
        index_path = append_index(test_output_dir, filesets)
    print("wrote index at", index_path)
    print("rate", (time.time() - start) / examples.count)

if __name__ == '__main__':
    train()
    # test()
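
One way to read the question against the code: the U-Net generator halves the spatial size eight times (256 down to 1) and mirrors it back up through skip connections, so the tensors flowing through training effectively have to stay square at CROP_SIZE. A 200×256 target cannot be pushed through the encoder/decoder without reworking layer_specs and the crop in transform(). The lighter-weight route, and the one the script itself provides a hook for, is to train at 256×256 on stretched targets and undo the stretch only when images are saved, via the aspect_ratio parameter used by convert(). A minimal sketch, assuming the 200-pixel side is the width and that the pairing step has access to the target as a float image in [0, 1]; make_target_square is a hypothetical helper, not part of the original script:

# sketch only, TensorFlow 1.x, reusing names defined in the script above
def make_target_square(target):
    # target: [256, 200, 3]; stretch to [CROP_SIZE, CROP_SIZE] so the
    # pair can flow through the generator unchanged
    return tf.image.resize_images(target, [CROP_SIZE, CROP_SIZE],
                                  method=tf.image.ResizeMethod.AREA)

# at save time, convert() resizes outputs to
# [CROP_SIZE, round(CROP_SIZE * aspect_ratio)] = [256, 200]
aspect_ratio = 200.0 / 256.0   # width / height

With that, nothing in create_generator, create_discriminator, or the losses changes; only the data pairing and the save path know about the non-square size. Actually training on 200×256 tensors would be a much larger change.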
What format does this CNN expect its dataset in?
I'm new to CNNs and recently picked up a project from GitHub for practice. The problem is that its dataset is not public, so I have to build my own, but I can't work out from the code how the dataset should be constructed. The code is below; the relevant part should be the DAC_Dataset class.

# imports are omitted from the original post; the code below needs roughly:
import argparse
import os
import sys
import xml.etree.ElementTree
from os import listdir
from random import shuffle

import cv2
import numpy as np
import tensorflow as tf
# plus the tensorpack helpers it references (RNGDataFlow, ModelDesc, InputDesc,
# Conv2D, BatchNorm, MaxPooling, GlobalAvgPooling, argscope, logger, varmanip,
# remap_variables, ...) and get_dorefa from the DoReFa-Net example; helpers such
# as intersection(), hexFromInt(), generateLayers(), generateConfig(),
# genereateHLSparams() and constants such as square_size, height_width,
# IMAGE_WIDTH, IMAGE_HEIGHT, down_sample_factor, classes, BITW/BITA/BITG,
# NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, MONITOR, REAL_IMAGE, DEMO_DATASET,
# FACTOR_SCALE_BITS and BATCH_SIZE are defined elsewhere in the repo.

class DAC_Dataset(RNGDataFlow):
    def __init__(self, dataset_dir, train, all_classes):
        self.images = []
        if all_classes == 1:
            for directory in listdir(dataset_dir):
                for file in listdir(dataset_dir + '/' + directory):
                    if '.jpg' in file:
                        for c in classes:
                            if c[0] in directory:
                                label = c[1]
                                break
                        self.images.append([dataset_dir + '/' + directory + '/' + file, label])
        else:
            for file in listdir(dataset_dir):
                if '.jpg' in file:
                    self.images.append([dataset_dir + '/' + file, 0])
        shuffle(self.images)
        if train == 0:
            self.images = self.images[0:1000]

    def get_data(self):
        for image in self.images:
            xml_name = image[0].replace('jpg', 'xml')
            im = cv2.imread(image[0], cv2.IMREAD_COLOR)
            im = cv2.resize(im, (square_size, square_size))
            im = im.reshape((square_size, square_size, 3))
            meta = None
            if os.path.isfile(image[0].replace('jpg', 'xml')):
                meta = xml.etree.ElementTree.parse(xml_name).getroot()
            label = np.array(image[1])
            bndbox = {}
            bndbox['xmin'] = 0
            bndbox['xmax'] = 0
            bndbox['ymin'] = 0
            bndbox['ymax'] = 0
            if meta is not None:
                obj = meta.find('object')
                if obj is not None:
                    box = obj.find('bndbox')
                    if box is not None:
                        bndbox['xmin'] = int(box.find('xmin').text)
                        bndbox['xmax'] = int(box.find('xmax').text)
                        bndbox['ymin'] = int(box.find('ymin').text)
                        bndbox['ymax'] = int(box.find('ymax').text)
            bndbox['xmin'] = int(bndbox['xmin'] * (square_size / IMAGE_WIDTH))
            bndbox['xmax'] = int(bndbox['xmax'] * (square_size / IMAGE_WIDTH))
            bndbox['ymin'] = int(bndbox['ymin'] * (square_size / IMAGE_HEIGHT))
            bndbox['ymax'] = int(bndbox['ymax'] * (square_size / IMAGE_HEIGHT))
            iou = np.zeros((height_width, height_width))
            for h in range(0, height_width):
                for w in range(0, height_width):
                    rect = {}
                    rect['xmin'] = int(w * down_sample_factor)
                    rect['xmax'] = int((w + 1) * down_sample_factor)
                    rect['ymin'] = int(h * down_sample_factor)
                    rect['ymax'] = int((h + 1) * down_sample_factor)
                    if DEMO_DATASET == 0:
                        if intersection(rect, bndbox) == 0.0:
                            iou[h, w] = 0.0
                        else:
                            iou[h, w] = 1.0
                    else:
                        if intersection(rect, bndbox) < 0.5:
                            iou[h, w] = 0.0
                        else:
                            iou[h, w] = 1.0
                    # if iou[h,w] > 0:
                    #     cv2.rectangle(im, (int(rect['xmin']), int(rect['ymin'])),
                    #                   (int(rect['xmax']), int(rect['ymax'])), (0, 0, iou[h, w] * 255), 1)
            iou = iou.reshape((height_width, height_width, 1))
            valid = np.zeros((height_width, height_width, 4), dtype='float32')
            relative_bndboxes = np.zeros((height_width, height_width, 4), dtype='float32')
            for h in range(0, height_width):
                for w in range(0, height_width):
                    if iou[h, w] > 0.0:
                        valid[h, w, 0] = 1.0
                        valid[h, w, 1] = 1.0
                        valid[h, w, 2] = 1.0
                        valid[h, w, 3] = 1.0
                        relative_bndboxes[h, w, 0] = bndbox['xmin'] - w * down_sample_factor
                        relative_bndboxes[h, w, 1] = bndbox['ymin'] - h * down_sample_factor
                        relative_bndboxes[h, w, 2] = bndbox['xmax'] - w * down_sample_factor
                        relative_bndboxes[h, w, 3] = bndbox['ymax'] - h * down_sample_factor
                    else:
                        relative_bndboxes[h, w] = np.zeros(4)
            # cv2.rectangle(im, (bndbox['xmin'], bndbox['ymin']), (bndbox['xmax'], bndbox['ymax']), (255, 0, 0), 1)
            # cv2.imshow('image', im)
            # cv2.waitKey(1000)
            yield [im, label, iou, valid, relative_bndboxes]

    def size(self):
        return len(self.images)


class Model(ModelDesc):
    def _get_inputs(self):
        return [InputDesc(tf.float32, [None, square_size, square_size, 3], 'input'),
                InputDesc(tf.int32, [None], 'label'),
                InputDesc(tf.float32, [None, height_width, height_width, 1], 'ious'),
                InputDesc(tf.float32, [None, height_width, height_width, 4], 'valids'),
                InputDesc(tf.float32, [None, height_width, height_width, 4], 'bndboxes')]

    def _build_graph(self, inputs):
        image, label, ious, valids, bndboxes = inputs
        image = tf.round(image)
        fw, fa, fg = get_dorefa(BITW, BITA, BITG)
        old_get_variable = tf.get_variable

        def monitor(x, name):
            if MONITOR == 1:
                return tf.Print(x, [x], message='\n\n' + name + ': ', summarize=1000, name=name)
            else:
                return x

        def new_get_variable(v):
            name = v.op.name
            if not name.endswith('W') or 'conv1' in name or 'conv_obj' in name or 'conv_box' in name:
                return v
            else:
                logger.info("Quantizing weight {}".format(v.op.name))
                if MONITOR == 1:
                    return tf.Print(fw(v), [fw(v)], message='\n\n' + v.name + ', Quantized weights are:',
                                    summarize=100)
                else:
                    return fw(v)

        def activate(x):
            if BITA == 32:
                return tf.nn.relu(x)
            else:
                return fa(tf.nn.relu(x))

        def bn_activate(name, x):
            x = BatchNorm(name, x)
            x = monitor(x, name + '_noact_out')
            return activate(x)

        def halffire(name, x, num_squeeze_filters, num_expand_3x3_filters, skip):
            out_squeeze = Conv2D('squeeze_conv_' + name, x, out_channel=num_squeeze_filters,
                                 kernel_shape=1, stride=1, padding='SAME')
            out_squeeze = bn_activate('bn_squeeze_' + name, out_squeeze)
            out_expand_3x3 = Conv2D('expand_3x3_conv_' + name, out_squeeze,
                                    out_channel=num_expand_3x3_filters, kernel_shape=3, stride=1,
                                    padding='SAME')
            out_expand_3x3 = bn_activate('bn_expand_3x3_' + name, out_expand_3x3)
            if skip == 0:
                return out_expand_3x3
            else:
                return tf.add(x, out_expand_3x3)

        def halffire_noact(name, x, num_squeeze_filters, num_expand_3x3_filters):
            out_squeeze = Conv2D('squeeze_conv_' + name, x, out_channel=num_squeeze_filters,
                                 kernel_shape=1, stride=1, padding='SAME')
            out_squeeze = bn_activate('bn_squeeze_' + name, out_squeeze)
            out_expand_3x3 = Conv2D('expand_3x3_conv_' + name, out_squeeze,
                                    out_channel=num_expand_3x3_filters, kernel_shape=3, stride=1,
                                    padding='SAME')
            return out_expand_3x3

        with remap_variables(new_get_variable), \
                argscope([Conv2D, FullyConnected], use_bias=False, nl=tf.identity), \
                argscope(BatchNorm, decay=0.9, epsilon=1e-4):
            image = monitor(image, 'image_out')
            l = Conv2D('conv1', image, out_channel=32, kernel_shape=3, stride=2, padding='SAME')
            l = bn_activate('bn1', l)
            l = monitor(l, 'conv1_out')
            l = MaxPooling('pool1', l, shape=3, stride=2, padding='SAME')
            l = monitor(l, 'pool1_out')
            l = halffire('fire1', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire1_out')
            l = MaxPooling('pool2', l, shape=3, stride=2, padding='SAME')
            l = monitor(l, 'pool2_out')
            l = halffire('fire2', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire2_out')
            l = MaxPooling('pool3', l, shape=3, stride=2, padding='SAME')
            l = monitor(l, 'pool3_out')
            l = halffire('fire3', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire3_out')
            l = halffire('fire4', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire4_out')
            l = halffire('fire5', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire5_out')
            l = halffire('fire6', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire6_out')
            l = halffire('fire7', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire7_out')

            # Classification
            classify = Conv2D('conv_class', l, out_channel=12, kernel_shape=1, stride=1, padding='SAME')
            classify = bn_activate('bn_class', classify)
            classify = monitor(classify, 'conv_class_out')
            logits = GlobalAvgPooling('pool_class', classify)
            class_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=label)
            class_loss = tf.reduce_mean(class_loss, name='cross_entropy_loss')
            wrong = prediction_incorrect(logits, label, 1, name='wrong-top1')
            add_moving_summary(tf.reduce_mean(wrong, name='train-error-top1'))

            # Object Detection
            l = tf.concat([l, classify], axis=3)
            objdetect = Conv2D('conv_obj', l, out_channel=1, kernel_shape=1, stride=1, padding='SAME')
            objdetect = tf.identity(objdetect, name='objdetect_out')
            objdetect_loss = tf.losses.hinge_loss(labels=ious, logits=objdetect)
            bndbox = Conv2D('conv_box', l, out_channel=4, kernel_shape=1, stride=1, padding='SAME')
            bndbox = tf.identity(bndbox, name='bndbox_out')
            bndbox = tf.multiply(bndbox, valids, name='mult0')
            bndbox_loss = tf.losses.mean_squared_error(labels=bndboxes, predictions=bndbox)

            # weight decay on all W of fc layers
            # reg_cost = regularize_cost('(fire7|conv_obj|conv_box).*/W', l2_regularizer(1e-5),
            #                            name='regularize_cost')
            # cost = class_loss*objdetect_loss*bndbox_loss
            # cost = class_loss + objdetect_loss + bndbox_loss + reg_cost
            cost = class_loss + 10 * objdetect_loss + bndbox_loss
            add_moving_summary(class_loss, objdetect_loss, bndbox_loss, cost)
            self.cost = cost
        tf.get_variable = old_get_variable

    def _get_optimizer(self):
        lr = tf.get_variable('learning_rate', initializer=1e-2, trainable=False)
        opt = tf.train.AdamOptimizer(lr, epsilon=1e-5)
        # lr = tf.get_variable('learning_rate', initializer=1e-1, trainable=False)
        # opt = tf.train.MomentumOptimizer(lr, momentum=0.9)
        return opt


def get_data(dataset_dir, train):
    if DEMO_DATASET == 0:
        all_classes = 1
    else:
        all_classes = 0
    ds = DAC_Dataset(dataset_dir, train, all_classes)
    ds = BatchData(ds, BATCH_SIZE, remainder=False)
    ds = PrefetchDataZMQ(ds, nr_proc=8, hwm=6)
    return ds


def get_config():
    logger.auto_set_dir()
    data_train = get_data(args.data, 1)
    data_test = get_data(args.data, 0)
    if DEMO_DATASET == 0:
        return TrainConfig(
            dataflow=data_train,
            callbacks=[
                ModelSaver(max_to_keep=10),
                HumanHyperParamSetter('learning_rate'),
                ScheduledHyperParamSetter('learning_rate', [(40, 0.001), (60, 0.0001), (90, 0.00001)]),
                InferenceRunner(data_test, [ScalarStats('cross_entropy_loss'),
                                            ClassificationError('wrong-top1', 'val-error-top1')])
            ],
            model=Model(),
            max_epoch=150
        )
    else:
        return TrainConfig(
            dataflow=data_train,
            callbacks=[
                ModelSaver(max_to_keep=10),
                HumanHyperParamSetter('learning_rate'),
                ScheduledHyperParamSetter('learning_rate', [(100, 0.001), (200, 0.0001), (250, 0.00001)])
            ],
            model=Model(),
            max_epoch=300
        )


def run_image(model, sess_init, image_dir):
    print('Running image!')
    output_names = ['objdetect_out', 'bndbox_out']
    pred_config = PredictConfig(
        model=model,
        session_init=sess_init,
        input_names=['input'],
        output_names=output_names
    )
    predictor = OfflinePredictor(pred_config)
    images = []
    metas = []
    for file in listdir(image_dir):
        if '.jpg' in file:
            images.append(file)
        if '.xml' in file:
            metas.append(file)
    images.sort()
    metas.sort()
    THRESHOLD = 0
    index = 0
    for image in images:
        meta = xml.etree.ElementTree.parse(image_dir + '/' + metas[index]).getroot()
        true_bndbox = {}
        true_bndbox['xmin'] = 0
        true_bndbox['xmax'] = 0
        true_bndbox['ymin'] = 0
        true_bndbox['ymax'] = 0
        if meta is not None:
            obj = meta.find('object')
            if obj is not None:
                box = obj.find('bndbox')
                if box is not None:
                    true_bndbox['xmin'] = int(box.find('xmin').text)
                    true_bndbox['xmax'] = int(box.find('xmax').text)
                    true_bndbox['ymin'] = int(box.find('ymin').text)
                    true_bndbox['ymax'] = int(box.find('ymax').text)
        index += 1
        im = cv2.imread(image_dir + '/' + image, cv2.IMREAD_COLOR)
        im = cv2.resize(im, (square_size, square_size))
        im = im.reshape((1, square_size, square_size, 3))
        outputs = predictor([im])
        im = cv2.imread(image_dir + '/' + image, cv2.IMREAD_COLOR)
        objdetect = outputs[0]
        bndboxes = outputs[1]
        max_pred = -100
        max_h = -1
        max_w = -1
        for h in range(0, objdetect.shape[1]):
            for w in range(0, objdetect.shape[2]):
                if objdetect[0, h, w] > max_pred:
                    max_pred = objdetect[0, h, w]
                    max_h = h
                    max_w = w
        sum_labels = 0
        bndbox = {}
        bndbox['xmin'] = 0
        bndbox['ymin'] = 0
        bndbox['xmax'] = 0
        bndbox['ymax'] = 0
        for h in range(0, objdetect.shape[1]):
            for w in range(0, objdetect.shape[2]):
                if (objdetect[0, h, w] > THRESHOLD
                        and (h == max_h - 1 or h == max_h or h == max_h + 1)
                        and (w == max_w - 1 or w == max_w or w == max_w + 1)) \
                        or (h == max_h and w == max_w):
                    sum_labels += 1
                    bndbox['xmin'] += int(bndboxes[0, h, w, 0] + w * down_sample_factor)
                    bndbox['ymin'] += int(bndboxes[0, h, w, 1] + h * down_sample_factor)
                    bndbox['xmax'] += int(bndboxes[0, h, w, 2] + w * down_sample_factor)
                    bndbox['ymax'] += int(bndboxes[0, h, w, 3] + h * down_sample_factor)
                    temp_xmin = int((bndboxes[0, h, w, 0] + w * down_sample_factor) * (IMAGE_WIDTH / square_size))
                    temp_ymin = int((bndboxes[0, h, w, 1] + h * down_sample_factor) * (IMAGE_HEIGHT / square_size))
                    temp_xmax = int((bndboxes[0, h, w, 2] + w * down_sample_factor) * (IMAGE_WIDTH / square_size))
                    temp_ymax = int((bndboxes[0, h, w, 3] + h * down_sample_factor) * (IMAGE_HEIGHT / square_size))
                    cv2.rectangle(im, (temp_xmin, temp_ymin), (temp_xmax, temp_ymax), (255, 0, 0), 1)
        bndbox['xmin'] = int(bndbox['xmin'] * (1 / sum_labels))
        bndbox['ymin'] = int(bndbox['ymin'] * (1 / sum_labels))
        bndbox['xmax'] = int(bndbox['xmax'] * (1 / sum_labels))
        bndbox['ymax'] = int(bndbox['ymax'] * (1 / sum_labels))
        bndbox['xmin'] = int(bndbox['xmin'] * (IMAGE_WIDTH / square_size))
        bndbox['ymin'] = int(bndbox['ymin'] * (IMAGE_HEIGHT / square_size))
        bndbox['xmax'] = int(bndbox['xmax'] * (IMAGE_WIDTH / square_size))
        bndbox['ymax'] = int(bndbox['ymax'] * (IMAGE_HEIGHT / square_size))
        bndbox2 = {}
        bndbox2['xmin'] = int(bndboxes[0, max_h, max_w, 0] + max_w * down_sample_factor)
        bndbox2['ymin'] = int(bndboxes[0, max_h, max_w, 1] + max_h * down_sample_factor)
        bndbox2['xmax'] = int(bndboxes[0, max_h, max_w, 2] + max_w * down_sample_factor)
        bndbox2['ymax'] = int(bndboxes[0, max_h, max_w, 3] + max_h * down_sample_factor)
        bndbox2['xmin'] = int(bndbox2['xmin'] * (IMAGE_WIDTH / square_size))
        bndbox2['ymin'] = int(bndbox2['ymin'] * (IMAGE_HEIGHT / square_size))
        bndbox2['xmax'] = int(bndbox2['xmax'] * (IMAGE_WIDTH / square_size))
        bndbox2['ymax'] = int(bndbox2['ymax'] * (IMAGE_HEIGHT / square_size))
        print('----------------------------------------')
        print(str(max_h * 14 + max_w))
        print('xmin: ' + str(bndbox2['xmin']))
        print('xmax: ' + str(bndbox2['xmax']))
        print('ymin: ' + str(bndbox2['ymin']))
        print('ymax: ' + str(bndbox2['ymax']))
        cv2.rectangle(im,
                      (int(max_w * down_sample_factor * (IMAGE_WIDTH / square_size)),
                       int(max_h * down_sample_factor * (IMAGE_HEIGHT / square_size))),
                      (int((max_w + 1) * down_sample_factor * (IMAGE_WIDTH / square_size)),
                       int((max_h + 1) * down_sample_factor * (IMAGE_HEIGHT / square_size))),
                      (0, 0, 255), 1)
        cv2.rectangle(im, (true_bndbox['xmin'], true_bndbox['ymin']),
                      (true_bndbox['xmax'], true_bndbox['ymax']), (255, 0, 0), 2)
        cv2.rectangle(im, (bndbox2['xmin'], bndbox2['ymin']),
                      (bndbox2['xmax'], bndbox2['ymax']), (0, 255, 0), 2)
        cv2.imshow('image', im)
        cv2.imwrite('images_log/' + image, im)
        cv2.waitKey(800)


def run_single_image(model, sess_init, image):
    print('Running single image!')
    if MONITOR == 1:
        monitor_names = ['conv_class_out', 'image_out', 'conv1_out', 'pool1_out', 'fire1_out',
                         'pool2_out', 'pool3_out', 'fire5_out', 'fire6_out', 'fire7_out']
    else:
        monitor_names = []
    output_names = ['objdetect_out', 'bndbox_out']
    output_names.extend(monitor_names)
    pred_config = PredictConfig(
        model=model,
        session_init=sess_init,
        input_names=['input'],
        output_names=output_names
    )
    predictor = OfflinePredictor(pred_config)
    if REAL_IMAGE == 1:
        im = cv2.imread(image, cv2.IMREAD_COLOR)
        im = cv2.resize(im, (square_size, square_size))
        cv2.imwrite('test_image.png', im)
        im = im.reshape((1, square_size, square_size, 3))
    else:
        im = np.zeros((1, square_size, square_size, 3))
        k = 0
        for h in range(0, square_size):
            for w in range(0, square_size):
                for c in range(0, 3):
                    # im[0][h][w][c] = 0
                    im[0][h][w][c] = k % 256
                    k += 1
    outputs = predictor([im])
    objdetect = outputs[0]
    bndboxes = outputs[1]
    max_pred = -100
    max_h = -1
    max_w = -1
    for h in range(0, objdetect.shape[1]):
        for w in range(0, objdetect.shape[2]):
            if objdetect[0, h, w] > max_pred:
                max_pred = objdetect[0, h, w]
                max_h = h
                max_w = w
    bndbox2 = {}
    bndbox2['xmin'] = int(bndboxes[0, max_h, max_w, 0] + max_w * down_sample_factor)
    bndbox2['ymin'] = int(bndboxes[0, max_h, max_w, 1] + max_h * down_sample_factor)
    bndbox2['xmax'] = int(bndboxes[0, max_h, max_w, 2] + max_w * down_sample_factor)
    bndbox2['ymax'] = int(bndboxes[0, max_h, max_w, 3] + max_h * down_sample_factor)
    bndbox2['xmin'] = int(bndbox2['xmin'] * (640 / square_size))
    bndbox2['ymin'] = int(bndbox2['ymin'] * (360 / square_size))
    bndbox2['xmax'] = int(bndbox2['xmax'] * (640 / square_size))
    bndbox2['ymax'] = int(bndbox2['ymax'] * (360 / square_size))
    # im = cv2.imread(image, cv2.IMREAD_COLOR)
    # cv2.rectangle(im, (bndbox2['xmin'], bndbox2['ymin']), (bndbox2['xmax'], bndbox2['ymax']), (0, 255, 0), 2)
    # cv2.imshow('image', im)
    # cv2.waitKey(2000)
    print('max_h: ' + str(max_h))
    print('max_w: ' + str(max_w))
    print('objdetect: ' + str(objdetect))
    print('bndboxes: ' + str(bndboxes[0, max_h, max_w]))
    index = 2
    for o in monitor_names:
        print(o + ', shape: ' + str(outputs[index].shape))
        if 'image' not in o:
            print(str(outputs[index]))
        if len(outputs[index].shape) == 4:
            file_name = o.split('/')[-1]
            print('Writing file... ' + file_name)
            if not os.path.exists('./log'):
                os.makedirs('./log')
            with open('./log/' + file_name + '.log', 'w') as f:
                for sample in range(0, outputs[index].shape[0]):
                    for h in range(0, outputs[index].shape[1]):
                        for w in range(0, outputs[index].shape[2]):
                            res = ''
                            for c in range(0, outputs[index].shape[3]):
                                if 'image' in file_name:
                                    res = hexFromInt(int(outputs[index][sample, h, w, c]), 8) + '_' + res
                                elif 'noact' in file_name:
                                    temp = (2 ** FACTOR_SCALE_BITS) * outputs[index][sample, h, w, c]
                                    res = hexFromInt(int(temp), 32) + '_' + res
                                else:
                                    res = hexFromInt(int(outputs[index][sample, h, w, c]), BITA) + '_' + res
                            f.write('0x' + res + '\n')
        index += 1


def dump_weights(meta, model, output):
    fw, fa, fg = get_dorefa(BITW, BITA, BITG)
    with tf.Graph().as_default() as G:
        tf.train.import_meta_graph(meta)
        init = get_model_loader(model)
        sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
        sess.run(tf.global_variables_initializer())
        init.init(sess)
        with sess.as_default():
            if output:
                if output.endswith('npy') or output.endswith('npz'):
                    varmanip.dump_session_params(output)
                else:
                    var = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
                    var.extend(tf.get_collection(tf.GraphKeys.MODEL_VARIABLES))
                    var_dict = {}
                    for v in var:
                        name = varmanip.get_savename_from_varname(v.name)
                        var_dict[name] = v
                    logger.info("Variables to dump:")
                    logger.info(", ".join(var_dict.keys()))
                    saver = tf.train.Saver(var_list=var_dict, write_version=tf.train.SaverDef.V2)
                    saver.save(sess, output, write_meta_graph=False)
            network_model = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
            network_model.extend(tf.get_collection(tf.GraphKeys.MODEL_VARIABLES))
            target_frequency = 200000000
            target_FMpS = 300
            non_quantized_layers = ['conv1/Conv2D', 'conv_obj/Conv2D', 'conv_box/Conv2D']
            json_out, layers_list, max_cycles = generateLayers(sess, BITA, BITW, non_quantized_layers,
                                                               target_frequency, target_FMpS)
            achieved_FMpS = target_frequency / max_cycles
            if DEMO_DATASET == 0:
                generateConfig(layers_list, 'halfsqueezenet-config.h')
                genereateHLSparams(layers_list, network_model, 'halfsqueezenet-params.h', fw)
            else:
                generateConfig(layers_list, 'halfsqueezenet-config_demo.h')
                genereateHLSparams(layers_list, network_model, 'halfsqueezenet-params_demo.h', fw)
            print('|---------------------------------------------------------|')
            print('target_FMpS: ' + str(target_FMpS))
            print('achieved_FMpS: ' + str(achieved_FMpS))


if __name__ == '__main__':
    print('Start')
    parser = argparse.ArgumentParser()
    parser.add_argument('dump2_train1_test0', help='dump(2), train(1) or test(0)')
    parser.add_argument('--model', help='model file')
    parser.add_argument('--meta', help='metagraph file')
    parser.add_argument('--output', help='output for dumping')
    parser.add_argument('--gpu', help='the physical ids of GPUs to use')
    parser.add_argument('--data', help='DAC dataset dir')
    parser.add_argument('--run', help='directory of images to test')
    parser.add_argument('--weights', help='weights file')
    args = parser.parse_args()
    print('Using GPU ' + str(args.gpu))
    if args.gpu:
        os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
    print(str(args.dump2_train1_test0))
    if args.dump2_train1_test0 == '1':
        if args.data == None:
            print('Provide DAC dataset path with --data')
            sys.exit()
        config = get_config()
        if args.model:
            config.session_init = SaverRestore(args.model)
        SimpleTrainer(config).train()
    elif args.dump2_train1_test0 == '0':
        if args.run == None:
            print('Provide images with --run ')
            sys.exit()
        if args.weights == None:
            print('Provide weights file (.npy) for testing!')
            sys.exit()
        assert args.weights.endswith('.npy')
        run_image(Model(), DictRestore(np.load(args.weights, encoding='latin1').item()), args.run)
    elif args.dump2_train1_test0 == '2':
        if args.meta == None:
            print('Provide meta file (.meta) for dumping')
            sys.exit()
        if args.model == None:
            print('Provide model file (.data-00000-of-00001) for dumping')
            sys.exit()
        dump_weights(args.meta, args.model, args.output)
    elif args.dump2_train1_test0 == '3':
        if args.run == None:
            print('Provide image with --run ')
            sys.exit()
        if args.weights == None:
            print('Provide weights file (.npy) for testing!')
            sys.exit()
        assert args.weights.endswith('.npy')
        run_single_image(Model(), DictRestore(np.load(args.weights, encoding='latin1').item()), args.run)
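
The expected dataset format can largely be read off DAC_Dataset itself. What follows is an inference from the code, not official documentation, and the folder and file names are made-up examples. With all_classes == 1, the loader expects one folder per class under dataset_dir, with the folder name containing a keyword from the global classes list (each entry maps a keyword to an integer label). Every image is a .jpg, and its annotation is a Pascal-VOC-style .xml with the same basename, holding one object/bndbox with pixel coordinates in the original IMAGE_WIDTH×IMAGE_HEIGHT frame (get_data() rescales them to square_size). Images with no .xml still load; they just get an all-zero bounding box.

dataset_dir/
    boat1/              # folder name must contain a keyword from `classes`
        000001.jpg      # image; resized to square_size x square_size on load
        000001.xml      # same basename; parsed for object/bndbox
        ...

A minimal writer that produces annotations in exactly the shape get_data() parses (only object/bndbox/xmin..ymax are read; everything else is ignored):

import xml.etree.ElementTree as ET

def write_annotation(path, xmin, ymin, xmax, ymax):
    # builds <annotation><object><bndbox><xmin>... so that
    # meta.find('object').find('bndbox').find('xmin') succeeds
    root = ET.Element('annotation')
    obj = ET.SubElement(root, 'object')
    box = ET.SubElement(obj, 'bndbox')
    for tag, value in (('xmin', xmin), ('xmax', xmax), ('ymin', ymin), ('ymax', ymax)):
        ET.SubElement(box, tag).text = str(value)
    ET.ElementTree(root).write(path)

write_annotation('dataset/boat1/000001.xml', 120, 80, 360, 240)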