How to get the references of a DLL in C# (20-point bounty)

How do I get the references contained in a DLL file? That is, given a DLL file name, output the assemblies it references, the same kind of feature .NET Reflector provides.

6 answers

A DLL does not really contain "references". References are configured in the project, and the compiler uses them to resolve symbols; there are no references as such in the compiled target assembly.

Reflector does not do what you describe either ("given a DLL file name, output the references inside it"). On the contrary, when it runs into a symbol it cannot resolve, it pops up a dialog just the same and asks you to load the assembly by hand.

BITxingxing: I'd like to know how it parses out what is in the references. Could someone more experienced point me in the right direction?
Replied almost 4 years ago
BITxingxing: After you add a DLL in Reflector you can view its References. That is exactly what I want to implement. Any ideas on how to approach it?
Replied almost 4 years ago

What exactly do you mean by the references inside a DLL?

BITxingxing: I mean which references this DLL uses, the references behind the using namespace XXX statements.
Replied almost 4 years ago

[Screenshot]

This is the feature I'm after, everyone: reading exactly what is listed in the DLL's references.
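For reference, a compiled managed assembly's manifest does record the assemblies it was built against (the AssemblyRef table), and that appears to be what Reflector lists under its References node. Below is a minimal sketch of reading that list with plain reflection; it assumes the target is a managed assembly, and the file name MyLibrary.dll and the command-line handling are only placeholders:

```csharp
using System;
using System.Reflection;

class ListReferences
{
    static void Main(string[] args)
    {
        // Hypothetical input: the path of the managed DLL to inspect.
        string path = args.Length > 0 ? args[0] : "MyLibrary.dll";

        // Load the assembly so its manifest can be read.
        Assembly asm = Assembly.LoadFrom(path);

        // GetReferencedAssemblies() returns the AssemblyRef entries recorded at
        // compile time (references the compiler found unused are not emitted).
        foreach (AssemblyName reference in asm.GetReferencedAssemblies())
        {
            Console.WriteLine(reference.FullName);
        }
    }
}
```

Note that LoadFrom pulls the file into the current process; on the .NET Framework, Assembly.ReflectionOnlyLoadFrom can be used instead when the goal is inspection only.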

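If loading the DLL into the process is not an option (for example when its own dependencies are missing, which is when Reflector starts prompting), the same AssemblyRef table can be read straight from the file's metadata. Here is a sketch using the System.Reflection.Metadata reader, assuming a managed assembly and that the package is available (it ships with modern .NET and is on NuGet for the .NET Framework); the file name is again just an example:

```csharp
using System;
using System.IO;
using System.Reflection.Metadata;
using System.Reflection.PortableExecutable;

class ListReferencesFromMetadata
{
    static void Main(string[] args)
    {
        // Hypothetical input: the DLL whose AssemblyRef table we want to dump.
        string path = args.Length > 0 ? args[0] : "MyLibrary.dll";

        using (FileStream stream = File.OpenRead(path))
        using (PEReader pe = new PEReader(stream))
        {
            MetadataReader md = pe.GetMetadataReader();

            // Every handle here is one row of the manifest's AssemblyRef table,
            // i.e. the list Reflector shows under the References node.
            foreach (AssemblyReferenceHandle handle in md.AssemblyReferences)
            {
                AssemblyReference reference = md.GetAssemblyReference(handle);
                Console.WriteLine($"{md.GetString(reference.Name)}, Version={reference.Version}");
            }
        }
    }
}
```

A native (unmanaged) DLL has no such table; its imports would have to be read from the PE import directory instead, so this only covers the managed case the question appears to be about.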
