How do I write a DB2 query that uses WHERE, GROUP BY, and ORDER BY together?!!

Hoping the gurus here can advise: I've hit a problem writing a query against a DB2 database.

This is the SQL statement I wrote:

```sql
select * from db2admin.sys_orgsystem as o where o.orgdept like '1' group by o.orgparentguid;
```

Here is the error it reports:

SQL0119N An expression starting with "ORGGUID" specified in the SELECT clause, HAVING clause, or ORDER BY clause is not specified in the GROUP BY clause, or it is in the SELECT clause, HAVING clause, or ORDER BY clause with a column function and no GROUP BY clause is specified. SQLSTATE=42803

Explanation:

The SELECT statement has one of the following errors:

  • the identified expression and a column function are contained in the SELECT clause, HAVING clause, or ORDER BY clause, but there is no GROUP BY clause
  • the identified expression is contained in the SELECT clause, HAVING clause, or ORDER BY clause, but not in the GROUP BY clause.

The identified expression is the expression beginning with "<expression-start>". The expression may be a single column name.

If the NODENUMBER or PARTITION function is specified in the HAVING clause, all partitioning key columns of the underlying table are considered to be in the HAVING clause.

The statement cannot be processed.

3 Answers

GROUP BY has to include every non-aggregated column in the SELECT list, so `select *` is definitely wrong...

lm121342074
lm121342074 There are too many columns, so I didn't paste them all. If I replace * with the actual columns, they all have to be added after GROUP BY too, but then the result isn't what I want.
replied over 2 years ago
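To make the first answer concrete, here is a sketch with hypothetical column names (orgname and the counts are assumptions; substitute your real columns) showing the two shapes that satisfy the rule:

```sql
-- Option 1: every selected column also appears in GROUP BY
SELECT o.orgparentguid, o.orgname
FROM db2admin.sys_orgsystem AS o
WHERE o.orgdept LIKE '1'
GROUP BY o.orgparentguid, o.orgname;

-- Option 2: group on one column and wrap the rest in aggregate functions
SELECT o.orgparentguid, MAX(o.orgname) AS orgname, COUNT(*) AS cnt
FROM db2admin.sys_orgsystem AS o
WHERE o.orgdept LIKE '1'
GROUP BY o.orgparentguid;
```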

Could you first select just a little information, so GROUP BY doesn't need many columns, and then use that result to fetch the rest of what you need? In other words, use a subquery. A bit more work, though.

lm121342074
lm121342074 OK, I'll give that a try, thanks.
replied over 2 years ago
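The subquery approach suggested above could look something like this. A sketch only: it assumes orgguid is a unique key of sys_orgsystem and picks one representative row per orgparentguid, then joins back to get all the other columns without listing them in GROUP BY:

```sql
SELECT o.*
FROM db2admin.sys_orgsystem AS o
JOIN (
    -- one representative row (the smallest orgguid) per orgparentguid
    SELECT orgparentguid, MIN(orgguid) AS orgguid
    FROM db2admin.sys_orgsystem
    WHERE orgdept LIKE '1'
    GROUP BY orgparentguid
) AS g ON o.orgguid = g.orgguid;
```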

Here's the trick:
select ... from ... where ... group by ... having ... order by ...
Just remember that and you'll always know the clause order, whether it's DB2, Oracle, or MySQL.
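Putting the clauses together in exactly that order, against the asker's table (the count and the HAVING condition are just for illustration):

```sql
SELECT o.orgparentguid, COUNT(*) AS cnt
FROM db2admin.sys_orgsystem AS o
WHERE o.orgdept LIKE '1'     -- filter rows first
GROUP BY o.orgparentguid     -- then group
HAVING COUNT(*) > 1          -- then filter groups
ORDER BY cnt DESC;           -- finally sort the result
```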

Other related questions
How do I convert this Oracle SQL to DB2?
```sql
select a.x xs, coalesce(b.y, 0) y
  from (select to_char(to_date(?, 'yyyy-MM-dd') + (level - 1), 'yyyy-MM-dd') x
          from sysibm.sysdummy1
       connect by trunc(to_date(?, 'yyyy-MM-dd') + level - 1) <= trunc(to_date(?, 'yyyy-MM-dd'))) a,
       (select t.x, count(1) y
          from (select to_char(a.x, 'yyyy-MM-dd') x
                  from (select h.createdate x
                          from wfl_taskinfo_his h
                         where h.tradeno in ('000040','000080')
                           and to_char(h.createdate, 'yyyy-MM-dd') >= ?
                           and to_char(h.createdate, 'yyyy-MM-dd') <= ?) a) t
         group by t.x) b
 where b.x(+) = a.x
 order by a.x asc
```

The `connect by level` in here is not supported in DB2. How can I convert it?
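For reference, the usual DB2 substitute for CONNECT BY LEVEL row generation (absent Oracle compatibility mode) is a recursive common table expression. A sketch of just the date-series part, with the parameter markers replaced by example dates; it is not a drop-in rewrite of the whole query:

```sql
WITH dates (d) AS (
    SELECT DATE('2015-01-01') FROM sysibm.sysdummy1
    UNION ALL
    SELECT d + 1 DAY FROM dates
    WHERE d < DATE('2015-01-31')   -- upper bound of the series
)
SELECT VARCHAR_FORMAT(TIMESTAMP(d), 'YYYY-MM-DD') AS x
FROM dates;
```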
Oracle's rownum vs. DB2's row_number() over()
```sql
select rownum, m.*
  from (select '汇总' as MENUNAMES, count(*) as NUMCOUNT
          from PEVENTRECORD
         where (EVENT='CLICK' OR EVENT='VIEW')
           and TO_CHAR(TRANSTIME,'yyyy-MM-dd') >= '2015-02-02'
           and TO_CHAR(TRANSTIME,'yyyy-MM-dd') <= '2015-05-11'
        union
        select EVENTKEY as MENUNAMES, count(*) as NUMCOUNT
          from PEVENTRECORD
         where (EVENT='CLICK' OR EVENT='VIEW')
           and TO_CHAR(TRANSTIME,'yyyy-MM-dd') >= '2015-02-02'
           and TO_CHAR(TRANSTIME,'yyyy-MM-dd') <= '2015-05-11'
         group by EVENTKEY
         order by NUMCOUNT) m
```

How do I change this SQL into DB2 SQL, replacing rownum with row_number() over()? Thanks, experts!
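A possible DB2 shape, sketched with the union branch and date filters trimmed for brevity. ROW_NUMBER() takes an explicit OVER clause, which also makes the numbering deterministic (rownum's order is not guaranteed without one):

```sql
SELECT ROW_NUMBER() OVER (ORDER BY m.NUMCOUNT) AS rn,
       m.MENUNAMES, m.NUMCOUNT
FROM (
    SELECT EVENTKEY AS MENUNAMES, COUNT(*) AS NUMCOUNT
    FROM PEVENTRECORD
    WHERE EVENT IN ('CLICK', 'VIEW')
    GROUP BY EVENTKEY
) m
ORDER BY rn;
```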
Database gurus, please help me optimize these SQL statements
My database knowledge is shallow: I can write queries but I don't know how to tune them. The two statements below run very slowly on DB2. If any database guru could help optimize them, I'd be very grateful.

```sql
-- Are the document numbers consecutive?
select * from (
  select double(replace(c.doc_nbr, 'QZ', '')) as nbr, c.doc_id, b.pip_nbr, c.inv_code, '1' as t
    from op_bill a, op_bill_detail b, SS_CITIC_BILL_NBR c
   where a.BILL_FILE_ID = b.BILL_FILE_ID and b.pk_id = c.pk_id
     and a.pvdr_id = 6252
     and to_char(c.sts_date,'yyyyMMdd') = '20150819'
     and doc_nbr <> '' and c.doc_id = 108
  union all
  select double(replace(c.doc_nbr, 'QZ', '')) as nbr, c.doc_id, b.pip_nbr, c.inv_code, '0' as t
    from op_bill a, op_bill_detail b, SS_CITIC_WAST_NBR c
   where a.BILL_FILE_ID = b.BILL_FILE_ID and b.pk_id = c.pk_id
     and a.pvdr_id = 6252
     and to_char(c.sts_date,'yyyyMMdd') = '20150819'
     and doc_nbr <> '' and c.doc_id = 108
) order by nbr asc

-- Are any document numbers duplicated?
select t.inv_code, t.doc_id, t.doc_nbr, count(*)
from (
  select a.inv_code, a.doc_id, a.doc_nbr, b.pvdr_id
    from SS_CITIC_BILL_NBR a, op_bill b, op_bill_detail d
   where a.bill_file_id=b.bill_file_id and a.pk_id=d.pk_id
     and b.pvdr_id=6252 and to_char(a.sts_date,'yyyyMMdd')<='20150819' and a.doc_nbr<>''
  union all
  select a.inv_code, a.doc_id, a.doc_nbr, b.pvdr_id
    from SS_CITIC_WAST_NBR a, op_bill b, op_bill_detail d
   where a.bill_file_id=b.bill_file_id and a.pk_id=d.pk_id
     and b.pvdr_id=6252 and to_char(a.sts_date,'yyyyMMdd')<='20150819' and a.doc_nbr<>''
) t
where t.doc_nbr in (
  select doc_nbr
    from SS_CITIC_BILL_NBR s, op_bill b, op_bill_detail d
   where s.bill_file_id=b.bill_file_id and s.pk_id=d.pk_id
     and b.pvdr_id=t.pvdr_id and to_char(s.sts_date,'yyyyMMdd')='20150819'
     and s.doc_nbr<>'' and t.doc_nbr=s.doc_nbr and t.inv_code=s.inv_code
  union all
  select doc_nbr
    from SS_CITIC_WAST_NBR s, op_bill b, op_bill_detail d
   where s.bill_file_id=b.bill_file_id and s.pk_id=d.pk_id
     and b.pvdr_id=t.pvdr_id and to_char(s.sts_date,'yyyyMMdd')='20150819'
     and s.doc_nbr<>'' and t.doc_nbr=s.doc_nbr and t.inv_code=s.inv_code
)
group by t.doc_nbr, t.doc_id, t.inv_code
having count(*) > 1
```
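Not a full tuning pass, but one pattern stands out: `to_char(c.sts_date,'yyyyMMdd') = '20150819'` wraps the column in a function, so no index on sts_date can be used. A sketch of the sargable rewrite, assuming sts_date is a DATE or TIMESTAMP column with an index:

```sql
-- before (not indexable): to_char(c.sts_date,'yyyyMMdd') = '20150819'
-- after (indexable): a half-open range on the raw column
WHERE c.sts_date >= DATE('2015-08-19')
  AND c.sts_date <  DATE('2015-08-20')
```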
Can someone explain this ORDER BY problem to me?
I'm using Access and ran into a problem: the ORDER BY doesn't seem to take effect. Can anyone explain why?

```sql
SELECT announce_master.counter,
       (SELECT TOP 1 m_group.gname
          FROM m_user, m_group, group_info
         WHERE announce_master.uid = m_user.uid
           AND m_user.uid = group_info.uid
           AND m_group.gid = group_info.gid
           AND group_info.uid = m_user.uid
           AND m_group.gid <> '00000000000000000000000000000000'
           AND m_group.gid <> '11111111111111111111111111111111'
         ORDER BY m_group.viewid) as gname
FROM ((announce_master
  LEFT OUTER JOIN m_user ON announce_master.uid = m_user.uid)
  LEFT OUTER JOIN announce_category ON announce_master.category_id = announce_category.category_id)
WHERE announce_master.counter IN (28,9,10,11,12,29,30)
ORDER BY 2 ASC
```
JAVA: JPA native SQL query, aliased column comes back with no value; newbie asking the gurus for help
The SQL query, where `realPrice` is an alias:

```
@Query(value = "select *,case when db.order_type_code = 'Db2bOrder' then d.RealTotal else df.TotalMoney end realPrice from db2border_bill_detail db " +
        "left join db2border d on d.fID =db.source_id " +
        "left join db2brefund df on df.OrderID = db.source_id where db.group_id = ?1 and db.db2border_bill_id = ?2", nativeQuery = true)
List<Db2BorderBillDetail> findAllByGroupIdAndDb2BorderBillId(String groupId, Long Id);
```

With the corresponding field added to the Db2BorderBillDetail entity like this, the query works and realPrice comes back with a value, but saving fails with "Unknown column 'db2borderb0_.realPrice' in 'field list'":

```
@Transient()
private BigDecimal realPrice;

public BigDecimal getRealPrice() {
    return realPrice;
}

public void setRealPrice(BigDecimal realPrice) {
    this.realPrice = realPrice;
}
```

Changed like this, saving works, but the value from the SQL query is null:

```
private BigDecimal realPrice;

@Transient()
public BigDecimal getRealPrice() {
    return realPrice;
}

public void setRealPrice(BigDecimal realPrice) {
    this.realPrice = realPrice;
}
```

Gurus, how can I make both the query and the save work?
Using regexp_substr to split comma-separated values stored in table rows
```
with t as
 (select wm_concat(cbbmmc) cbbmmc,
         wm_concat(cbbmid) cbbmid,
         substr(cjrq, 6, 2) month
    from DCDB_DJLB_DB a
   where a.lclx = 'lzps'
     and zt = '1'
     and cbbmmc is not null
     and substr(cjrq, 0, 4) = 2019
   group by substr(cjrq, 6, 2)
   order by month desc)
select regexp_substr(cbbmmc, '[^,]+', 1, rownum) cbbmmc,
       regexp_substr(cbbmid, '[^,]+', 1, rownum) cbbmid,
       month
  from t
connect by rownum <= length(regexp_replace(cbbmmc, '[^,]+')) + 1
```

The result of t is: ![图片说明](https://img-ask.csdn.net/upload/201909/16/1568621154_319523.png) After running this query, why does the second row not get split at all, coming back empty instead? The query result is shown here: ![图片说明](https://img-ask.csdn.net/upload/201909/16/1568621299_129891.png) Please take a look, gurus.
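One known pitfall here: with multiple source rows, a plain `connect by rownum <= ...` doesn't restart the hierarchy for each row, so only the first row splits fully. A commonly used Oracle workaround (a sketch, untested against this data) uses `level` and ties each step back to its own source row:

```sql
-- t is the same CTE as in the question
select regexp_substr(cbbmmc, '[^,]+', 1, level) cbbmmc,
       regexp_substr(cbbmid, '[^,]+', 1, level) cbbmid,
       month
  from t
connect by level <= regexp_count(cbbmmc, ',') + 1
       and prior month = month              -- restart the split for each source row
       and prior sys_guid() is not null     -- defeat cycle detection
```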
DB2 reports SQLCODE=-440, SQLSTATE=42884, DRIVER=4.12.55
```sql
SELECT DISTINCT A.stockcode AS stockcode,
       B.VHCMODEL AS VHCMODEL,
       count(A.stockcode) AS STOCKCODESUM,
       sum(decode(A.STATUS, '3', 1, 0)) AS FACHEJIHUAYINGXIANGTAISHU,
       sum(decode(A.STATUS, '2', 1, 0)) AS DIAOBOYINGXIANGTAISHU,
       sum(decode(A.STATUS, '1', 1, 0)) AS DANGRIJIHUATAISHU,
       sum(decode(A.STATUS, '0', 1, 0)) AS WEIWANCHENGTAISHU,
       case
         when decode(count(A.stockcode), 0, 0.00, decode(sum(decode(A.STATUS, '1', 1, 0)))) >= 1
           then round(decode(count(A.stockcode), 0, 0.00,
                      decode(sum(decode(A.STATUS, '1', 1))) /
                      (count(A.stockcode) - sum(decode(A.STATUS, '3', 1, 0)) - sum(decode(A.STATUS, '2', 1, 0))), 4)) * 100
         when decode(count(A.stockcode), 0, 0.00, decode(sum(decode(A.STATUS, '1', 1, 0)))) = 0
           then 0.00
       end AS DANGRIJIHUAWANCHENGLV
FROM VHC_WASHTASK_TB A, VHC_LEDJER_TB B
WHERE PLAN_NO = '20161115' AND A.VINNO = B.VINNO
GROUP BY A.stockcode, B.VHCMODEL
ORDER BY A.STOCKCODE
```

This is my SQL; the decode in the last column is where the problem is. Please advise how to fix it.
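SQLCODE -440 with SQLSTATE 42884 means no routine matches the call signature, and `decode(<expr>)` with a single argument, as in `decode(sum(decode(A.STATUS,'1',1,0)))`, is not a valid DECODE call (it needs at least three arguments). Those inner wrappers can simply be dropped, since the SUM already is the value wanted. A sketch of the last column only, assuming the intent is the daily-plan count divided by the adjusted total:

```sql
CASE
  WHEN SUM(DECODE(A.STATUS, '1', 1, 0)) >= 1 THEN
       ROUND( SUM(DECODE(A.STATUS, '1', 1, 0))
            / ( COUNT(A.stockcode)
              - SUM(DECODE(A.STATUS, '3', 1, 0))
              - SUM(DECODE(A.STATUS, '2', 1, 0)) ), 4 ) * 100
  WHEN SUM(DECODE(A.STATUS, '1', 1, 0)) = 0 THEN 0.00
END AS DANGRIJIHUAWANCHENGLV
```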
MySQL query optimization, waiting online
```sql
SELECT tt.int_interpreting_data_id, tt.dt_report_date, tn.str_norm_number, tnt.str_norm_type_name,
       tt.int_whether, tt.str_value, tt.str_detection_result, tt.str_remark, tn.str_name,
       tn.int_response_type, tt.str_device_name, tt.str_ip, tt.db_name, tn.str_code
FROM t_norm tn
LEFT JOIN (
    SELECT tid1.str_code, tid1.int_interpreting_data_id, tid1.dt_report_date, tid1.str_norm_type_name,
           tid1.int_whether, tid1.str_value, tid1.str_detection_result, tid1.str_remark,
           tid1.str_device_name, tid1.str_ip, tid1.db_name, tid1.str_norm_number
    FROM t_interpreting_data tid1,
         (SELECT str_code, int_interpreting_data_id, str_norm_type_name, int_whether, str_value,
                 str_detection_result, str_remark, str_device_name, str_ip, db_name, str_norm_number,
                 max(dt_report_date) max_date
            FROM t_interpreting_data
           WHERE 1=1
             AND str_property_id='Windows 7_852'
             AND STR_TO_DATE(dt_report_date,'%Y-%m-%d') = '2015-06-05'
           GROUP BY str_norm_number, db_name) tid2
    WHERE tid1.dt_report_date = tid2.max_date
    GROUP BY tid1.str_norm_number, tid1.db_name
    ORDER BY tid1.dt_report_date DESC
) tt ON tn.str_norm_number = tt.str_norm_number
LEFT JOIN t_norm_type tnt ON tn.int_norm_type_id = tnt.int_norm_type_id
WHERE tn.str_code IN (SELECT str_code FROM t_interpreting_data
                       WHERE str_property_id='Windows 7_852' GROUP BY str_code)
ORDER BY tt.str_norm_number DESC
```

1. Use the t_norm table as the baseline.
2. Find the latest record of each type in t_interpreting_data, grouped by db_name at the same time.
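Not a rewrite of the whole query, but one easy win: `STR_TO_DATE(dt_report_date,'%Y-%m-%d') = '2015-06-05'` applies a function to the column, so an index on dt_report_date can't be used. A sketch of the sargable form, assuming dt_report_date is stored as 'YYYY-MM-DD hh:mm:ss' text or DATETIME so that a plain range compares correctly:

```sql
-- before (not indexable): STR_TO_DATE(dt_report_date,'%Y-%m-%d') = '2015-06-05'
-- after (indexable): a half-open range covering the whole day
WHERE dt_report_date >= '2015-06-05'
  AND dt_report_date <  '2015-06-06'
```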
Why does N'' work when I run the statement in the database directly, but throw an error from C#?
![图片说明](https://img-ask.csdn.net/upload/201906/18/1560856555_286003.png)![图片说明](https://img-ask.csdn.net/upload/201906/18/1560856568_48315.png)

Here is part of the stored procedure (it's too long, so I'm only pasting a portion). It's the stored procedure behind U8's customer account balance report:

USE [UFDATA_150_2019]
GO
/****** Object: StoredProcedure [dbo].[gl_assReport] Script Date: 2019-06-18 16:09:51 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[gl_assReport]
 @tblname NVARCHAR(60) ,
 @iBeginPeriod INT , -- period (month)
 @iEndPeriod INT , -- period (month)
 @strass NVARCHAR(2000) , -- auxiliary items, amount 9
 @whereSql NVARCHAR(2000) , -- filter condition
 @wheresqlcode NVARCHAR(2000) , -- account filter condition
 @bcdc INT , -- direction: -1 debit, 1 credit, 0
 @bVouch BIT, -- posting flag 11
 @citem_class NVARCHAR(10), -- project category
 @bDisplayCreditLine bit =0, -- credit limit
 @sAuth NVARCHAR(4000), -- permission string
 @isshowzeroaccumulation bit=1,
 @detailedObject nvarchar(100)='', -- cross item
 @subtotalitem nvarchar(100)='', -- subtotal item
 @collectitem nvarchar(100)='', -- summary item
 @isPhone tinyint=0 -- @isPhone: 1 = mobile report summarized by period
AS
DECLARE @isUseMultiCurrency AS BIT -- whether multi-currency accounting is enabled
SELECT @isUseMultiCurrency=case when UPPER(cvalue)=N'TRUE' then 1 else 0 end from accinformation where cname=N'isUseMultiCurrency' and csysid=N'GL'
-- statistics item
-- declare @staticsticsItem as nvarchar(100)
-- select @staticsticsItem=enumcode from v_aa_enum where EnumName =@statistics and EnumType ='GL_code'
-- if @staticsticsItem=N'部门'
-- begin
-- set @strass=replace(@strass,'department','')
-- set @strass ='department,' +@strass
--end
-- else if @staticsticsItem=N'项目'
--begin
-- set @strass=replace(@strass,'item','')
-- set @strass ='item,' +@strass
--end
-- else if @staticsticsItem=N'客户'
-- begin
-- set @strass=replace(@strass,'customer','')
-- set @strass ='customer,' +@strass
-- end
-- else if @staticsticsItem=N'供应商'
-- begin
--set @strass=replace(@strass,'vendor','')
--set @strass ='vendor,' +@strass
--end
--
-- detail object
-- declare @detailedObjectItem as nvarchar(100)
-- declare @tempStr as nvarchar(100)
-- select @detailedObjectItem=enumcode from v_aa_enum where EnumName =@detailedObject and EnumType ='gl_code'
--print
@detailedObjectItem --set @tempStr=substring(@strass,CHARINDEX(',',@strass)+1,LEN(@strass)) --if @detailedObjectItem=N'供应商' -- set @strass=replace(@strass,@tempStr,'vendor') --else if @detailedObjectItem=N'部门' -- set @strass=replace(@strass,@tempStr,'department') --else if @detailedObjectItem=N'项目' -- set @strass=replace(@strass,@tempStr,'department') --else if @detailedObjectItem=N'客户' -- set @strass=replace(@strass,@tempStr,'customer') --else if @detailedObjectItem=N'科目' -- set @strass=replace(@strass,@tempStr,'code') -- --end --默认币种 declare @accid as char(3) set @accid=(select SUBSTRING(DB_NAME(),8,3)) declare @RMB as nvarchar(50) set @RMB=isnull( (select cCurName from ufsystem..UA_Account where cAcc_Id=@accid),N'人民币') --启用日期 declare @dStartDate as datetime set @dStartDate=(select cvalue from accinformation where csysid ='gl' and cname = 'dGLStartDate') declare @AccBeginPeriod as int declare @AccPeriod as int declare @startYear as int if exists(select 1 from ufsystem..ua_period where dbegin <=@dStartDate and dend >= @dStartDate and cacc_id=@accid) select @AccPeriod=iid,@startYear=iyear from ufsystem..ua_period where dbegin <=@dStartDate and dend >= @dStartDate and cacc_id=@accid else select top 1 @AccPeriod=iid,@startYear=iyear from ufsystem..ua_period where cacc_id=@accid order by iyear,iid set @AccBeginPeriod=@AccPeriod set @AccPeriod=convert(int,str(@startYear,4)+(case when @AccPeriod>9 then str(@AccPeriod,2) else '0'+str(@AccPeriod,1) end) ) print @AccPeriod --启用期间 --无起始日期 if right(str(@iBeginPeriod),2)='00' set @iBeginPeriod=@iBeginPeriod+1 declare @regPeriod as int--用来决定取accsum表的期间 IF @bVouch = 1 --如果包含未记账,求最大记账期间 Begin set @regPeriod=(select isnull(max(iyPeriod),convert(int,str(@startYear,4))*100) from gl_accvouch where ibook=1 and iPeriod <13 and iPeriod >0 ) End ELSE set @regPeriod=@iBeginPeriod set @regPeriod=case when @regPeriod=convert(int,str(@startYear,4))*100 then @AccPeriod else @regPeriod end set @regPeriod=case when @regPeriod>@iBeginPeriod 
then @iBeginPeriod else @regPeriod end --最大使用凭证的期间 可能为 00 if right(str(@regPeriod,9),2)='00' set @regPeriod=@regPeriod+1 --查询起始日期在第二个年度以后 set @regPeriod=case when @iBeginPeriod/100>@regPeriod/100 then @iBeginPeriod/100*100+1 else @regPeriod end print '@regPeriod:'+cast(@regPeriod as nvarchar) DECLARE @withIndex nvarchar(100) --强制使用索引提高效率 set @withIndex=' ' if exists(SELECT * FROM sysindexes WHERE name = N'idx_GL_accvouch_iYPeriod_ibook_iflag_code' and id=object_id('gl_accvouch')) set @withIndex=' with (index(idx_GL_accvouch_iYPeriod_ibook_iflag_code))' --DECLARE @tsqltemptable NVARCHAR(60) DECLARE @sql NVARCHAR(4000) DECLARE @tblnameA NVARCHAR(60) DECLARE @tblnameB NVARCHAR(60) DECLARE @tblnameM NVARCHAR(60) declare @tmpCode nvarchar(60) SET @tblnameA = @tblname + 'a' SET @tblnameB = @tblname + 'b' SET @tblnameM = @tblname + 'M' set @tmpCode=@tblname +'code' set @sql='if not exists(select 1 from tempdb..sysobjects where name='''+@tmpcode+''' and xtype=''u'') select * into tempdb..'+@tmpcode+' from code ' exec(@sql) IF CHARINDEX('tempdb..', @tblnameA) > 0 BEGIN SET @tblnameA = SUBSTRING(@tblnameA, LEN('tempdb..') + 1, LEN(@tblnameA)) END --SET @tsqltemptable = 'tempdb..gl_sqltemptable' SET @sql = ' if exists( select * from tempdb..sysobjects where id=object_id(''' + 'tempdb..' + @tblnameA + ''') and type=''u'') drop table tempdb..' + @tblnameA EXEC(@sql) SET @sql = ' if exists( select * from tempdb..sysobjects where id=object_id(''' + 'tempdb..' + @tblnameB + ''') and type=''u'') drop table tempdb..' + @tblnameB EXEC(@sql) SET @sql = ' if exists( select * from tempdb..sysobjects where id=object_id(''' + 'tempdb..' + @tblnameM + ''') and type=''u'') drop table tempdb..' + @tblnameM EXEC(@sql) set @sql=' if exists( select * from tempdb..sysobjects where id=object_id(''' + 'tempdb..' + @tblname + ''') and type=''u'') drop table tempdb..' 
+ @tblname EXEC(@sql) DECLARE @OldSelectSql AS NVARCHAR(2000) DECLARE @selectSql AS NVARCHAR(2000) DECLARE @selectSqlA AS NVARCHAR(2000) DECLARE @selectSqlAllA AS NVARCHAR(2000) --declare @whereSql as nvarchar(2000) DECLARE @oldGroupbySql AS NVARCHAR(2000) DECLARE @groupbySql AS NVARCHAR(2000) DECLARE @joinonSql AS NVARCHAR(2000) DECLARE @oldjoinonSql AS NVARCHAR(2000) DECLARE @ibanlanceSql AS NVARCHAR(200) DECLARE @ass AS NVARCHAR(50) DECLARE @asscodebegin AS NVARCHAR(50) DECLARE @asscodeend AS NVARCHAR(50) --declare @wheresqlcode as nvarchar(2000) SET @selectSql = ' select ' SET @selectSqlA = ' select ' SET @selectSqlAllA = '' --SET @whereSql = ' where 1=1 ' SET @groupbySql = ' group by ' SET @joinonSql = ' ' set @oldjoinonSql= '' set @whereSql=' where 1=1 ' + @whereSql --set @wheresqlcode='' declare @leftjoinonSql as nvarchar(2000) set @leftjoinonSql =' ' declare @assCount int --显示项个数 set @assCount=0 --declare @subTotalSql nvarchar(2000) --小计更新 declare @subtotalSelectSql nvarchar(2000) --小计字段cdept_id|cdept_id,cperson_id --set @subTotalSql='' set @subtotalSelectSql='' DECLARE sqlcursor CURSOR FOR SELECT ass,asscodeBegin,asscodeEnd FROM f_split(@strass,',') OPEN sqlcursor FETCH NEXT FROM sqlcursor INTO @ass,@asscodebegin,@asscodeend ; WHILE @@FETCH_STATUS = 0 BEGIN set @assCount=@assCount+1 /* SET @subTotalSql = @subTotalSql + CASE WHEN @ass = 'code' THEN ' a.ccode=Null,' WHEN @ass = 'customer' THEN ' a.ccus_id=Null,' WHEN @ass = 'department' THEN ' a.cdept_id=Null,' WHEN @ass = 'person' THEN 'a.cperson_id=Null,' WHEN @ass = 'vendor' THEN 'a.csup_id=Null,' WHEN @ass = ' itemclass ' THEN 'a.citem_class=Null,' WHEN @ass = 'item' THEN 'a.citem_id=Null,' WHEN @ass = 'citem' THEN 'citemccode=Null,' WHEN @ass = 'ccustomer' THEN ' ccccode=Null,' WHEN @ass = 'cvendor' THEN ' cvccode=Null,' WHEN @ass = 'dcustomer' THEN ' cdccode=Null,' WHEN @ass = 'dvendor' THEN ' cdccode=Null,' WHEN @ass = 'groupcode' THEN ' cgroupcode=Null,' WHEN @ass = '' or @ass='ibanlance' THEN '' 
ELSE 'a.'+@ass+'=Null,' END */ SET @selectSql = @selectSql + CASE WHEN @ass = 'code' THEN ' a.ccode,' WHEN @ass = 'customer' THEN ' a.ccus_id,' WHEN @ass = 'department' THEN ' a.cdept_id,' WHEN @ass = 'person' THEN 'a.cperson_id,' WHEN @ass = 'vendor' THEN 'a.csup_id,' WHEN @ass = ' itemclass ' THEN 'a.citem_class,' WHEN @ass = 'item' THEN 'a.citem_id,' WHEN @ass = 'citem' THEN 'citemccode,' WHEN @ass = 'ccustomer' THEN ' ccccode,' WHEN @ass = 'cvendor' THEN ' cvccode,' WHEN @ass = 'dcustomer' THEN ' cdccode,' WHEN @ass = 'dvendor' THEN ' cdccode,' WHEN @ass = 'groupcode' THEN ' cgroupcode,' WHEN @ass = '' or @ass='ibanlance' THEN '' ELSE 'a.'+@ass+',' END --自定义项小计:@subtotalitem=N'cdefine10' if charindex(','+@ass+',',','+@subtotalitem+',')>0 begin set @subtotalSelectSql = @subtotalSelectSql + replace(@selectSql,'select ','') + '|' print '@subtotalSelectSql:'+@subtotalSelectSql end SET @selectSqlAllA = @selectSqlAllA + CASE WHEN @ass = 'code' THEN ' a.ccode,' WHEN @ass = 'customer' THEN ' a.ccus_id,' WHEN @ass = 'department' THEN ' a.cdept_id,' WHEN @ass = 'person' THEN 'a.cperson_id,' WHEN @ass = 'vendor' THEN 'a.csup_id,' WHEN @ass = ' itemclass ' THEN 'a.citem_class,' WHEN @ass = 'item' THEN 'a.citem_id,' WHEN @ass = 'citem' THEN 'a.citemccode,' WHEN @ass = 'ccustomer' THEN ' a.ccccode,' WHEN @ass = 'cvendor' THEN ' a.cvccode,' WHEN @ass = 'dcustomer' THEN ' a.cdccode,' WHEN @ass = 'dvendor' THEN ' a.cdccode,' WHEN @ass = 'groupcode' THEN ' a.cgroupcode,' WHEN @ass = '' or @ass='ibanlance' THEN '' ELSE 'a.'+@ass+',' END SET @selectSqlA = @selectSqlA + CASE WHEN @ass = 'code' THEN ' a.ccode,' WHEN @ass = 'customer' THEN ' a.ccus_id,' WHEN @ass = 'department' THEN ' a.cdept_id,' WHEN @ass = 'person' THEN 'a.cperson_id,' WHEN @ass = 'vendor' THEN 'a.csup_id,' WHEN @ass = ' itemclass ' THEN 'a.citem_class,' WHEN @ass = 'item' THEN 'a.citem_id,' WHEN @ass = 'citem' THEN (case when @citem_class='ch' then 'cinvccode as citemccode,' else 'citemccode,' end) WHEN @ass = 
'ccustomer' THEN ' ccccode,' WHEN @ass = 'cvendor' THEN ' cvccode,' WHEN @ass = 'dcustomer' THEN ' cdccode,' WHEN @ass = 'dvendor' THEN ' cdccode,' WHEN @ass = 'groupcode' THEN ' cgroupcode,' WHEN @ass = '' or @ass='ibanlance' THEN '' ELSE 'a.'+@ass+',' END --set @whereSql=@whereSql + --case --when @ass='ccode' then (case when @asscodebegin='' then '' else ' and ccode>=''' + @asscodebegin +'''' end) --+ (case when @asscodeend='' then '' else ' and ccode<=''' + @asscodeend +'''' end ) + ' ' --when @ass='cus' then (case when @asscodebegin='' then '' else ' and ccus_id>=''' + @asscodebegin + '''' end) --+ (case when @asscodeend='' then '' else ' and ccus_id<=''' + @asscodeend +'''' end ) + ' and ccus_id is not null' --when @ass='dept' then (case when @asscodebegin='' then '' else ' and cdept_id>=''' + @asscodebegin + '''' end ) --+ (case when @asscodeend='' then '' else ' and cdept_id<=''' + @asscodeend end ) + ' and cdept_id is not null' --when @ass='person' then (case when @asscodebegin='' then '' else ' and cperson_id>=''' + @asscodebegin +'''' end ) --+ (case when @asscodeend='' then '' else ' and cperson_id<=' + @asscodeend+'''' end ) + ' and cperson_id is not null' --when @ass='sup' then (case when @asscodebegin='' then '' else ' and csup_id>=' + @asscodebegin+'''' end ) --+ (case when @asscodeend='' then '' else ' and csup_id<=''' + @asscodeend+'''' end ) + ' and csup_id is not null' --when @ass=' itemclass ' then (case when @asscodebegin='' then '' else ' and citem_class=''' + @asscodebegin+'''' end ) --+' and citem_class is not null' --when @ass='itemid' then (case when @asscodebegin='' then '' else ' and citem_id>=''' + @asscodebegin+'''' end ) --+ (case when @asscodeend='' then '' else ' and citem_id<=' + @asscodeend+'''' end ) + ' and citem_id is not null' --else '' --end SET @groupbySql = @groupbySql + CASE WHEN @ass = 'code' THEN ' a.ccode,' WHEN @ass = 'customer' THEN ' a.ccus_id,' WHEN @ass = 'department' THEN ' a.cdept_id,' WHEN @ass = 'person' THEN 
'a.cperson_id,' WHEN @ass = 'vendor' THEN 'a.csup_id,' WHEN @ass = ' itemclass ' THEN 'a.citem_class,' WHEN @ass = 'item' THEN 'a.citem_id,' WHEN @ass = 'citem' THEN (case when @citem_class='ch' then 'cinvccode,' else 'citemccode,' end) WHEN @ass = 'ccustomer' THEN ' customer.ccccode,' WHEN @ass = 'cvendor' THEN ' vendor.cvccode,' WHEN @ass = 'dcustomer' THEN ' customer.cdccode,' WHEN @ass = 'dvendor' THEN ' vendor.cdccode,' WHEN @ass = 'groupcode' THEN ' GF_VgroupStruct.cgroupcode,' WHEN @ass = '' or @ass='ibanlance' THEN '' ELSE 'a.'+@ass+',' END SET @whereSql = @whereSql + CASE WHEN @ass = 'customer' THEN ' and not a.ccus_id is Null ' WHEN @ass = 'department' THEN ' and not a.cdept_id is Null ' WHEN @ass = 'person' THEN ' and not a.cperson_id is Null ' WHEN @ass = 'vendor' THEN ' and not a.csup_id is Null ' WHEN @ass = ' itemclass ' THEN ' and not a.citem_class is Null ' WHEN @ass = 'item' or @ass = 'citem' THEN ' and not a.citem_id is Null ' WHEN @ass = 'ccustomer' THEN ' and not customer.ccccode is Null ' WHEN @ass = 'cvendor' THEN ' and not vendor.cvccode is Null ' WHEN @ass = 'dcustomer' THEN ' and not customer.cdccode is Null' WHEN @ass = 'dvendor' THEN ' and not vendor.cdccode is Null' WHEN @ass = '' or @ass='ibanlance' or @ass='code' THEN '' ELSE ' ' END /* SET @wheresqlcode = @wheresqlcode + CASE WHEN @ass = 'customer' THEN ' and code.bcus=1 ' WHEN @ass = 'department' THEN ' and code.bdept=1 and code.bperson=0 ' WHEN @ass = 'person' THEN ' and code.bperson=1 and code.bdept=0 ' WHEN @ass = 'vendor' THEN ' and code.bsup=1 ' WHEN @ass = 'item' THEN ' and code.bitem=1 ' WHEN @ass = 'ccustomer' THEN ' and code.bcus=1 ' WHEN @ass = 'cvendor' THEN ' and code.bsup=1 ' WHEN @ass = 'dcustomer' THEN ' and code.bcus=1 ' WHEN @ass = 'dvendor' THEN ' and code.bsup=1 ' WHEN @ass like 'cdefine%' THEN ' and code.b'+@ass+'=1 ' ELSE '' END --个人,部门同时存在,条件冲突,去掉部门条件 if CHARINDEX('and code.bdept=1 and code.bperson=0', @wheresqlcode)>0 and CHARINDEX('and code.bperson=1 and 
code.bdept=0', @wheresqlcode)>0 SET @wheresqlcode=Replace(@wheresqlcode,'and code.bdept=1 and code.bperson=0','') */ SET @joinonSql = @joinonSql + CASE WHEN @ass = 'code' THEN ' isnull(a.ccode,'''') = isnull(b.ccode,'''') and' WHEN @ass = 'customer' THEN ' isnull(a.ccus_id,'''') = isnull(b.ccus_id,'''') and' WHEN @ass = 'department' THEN ' isnull(a.cdept_id,'''') = isnull(b.cdept_id,'''') and' WHEN @ass = 'person' THEN ' isnull(a.cperson_id,'''') = isnull(b.cperson_id,'''') and' WHEN @ass = 'vendor' THEN ' isnull(a.csup_id,'''') = isnull(b.csup_id,'''') and' WHEN @ass = ' itemclass ' THEN ' isnull(a.citem_class,'''') = isnull(b.citem_class,'''') and' WHEN @ass = 'item' THEN ' isnull(a.citem_id,'''') = isnull(b.citem_id,'''') and' WHEN @ass = 'citem' THEN ' isnull(a.citemccode,'''') = isnull(b.citemccode,'''') and' WHEN @ass = 'groupcode' THEN ' isnull(a.cgroupcode,'''') = isnull(b.cgroupcode,'''') and' WHEN @ass = 'ccustomer' THEN ' isnull(a.ccccode,'''') = isnull(b.ccccode,'''') and' WHEN @ass = 'cvendor' THEN ' isnull(a.cvccode,'''') = isnull(b.cvccode,'''') and' WHEN @ass = 'dcustomer' or @ass = 'dvendor' THEN ' isnull(a.cdccode,'''') = isnull(b.cdccode,'''') and' WHEN @ass = 'cname' THEN ' isnull(a.cname,'''') = isnull(b.cname,'''') and' WHEN @ass = '' or @ass='ibanlance' THEN '' ELSE ' a.' + @ass + '=b.' 
+ @ass + ' and' END SET @leftjoinonSql = @leftjoinonSql + CASE WHEN @ass = 'ccustomer' and charindex('left join customer on A.ccus_id=customer.ccuscode',@leftjoinonSql)=0 THEN ' left join customer on A.ccus_id=customer.ccuscode' WHEN @ass = 'cvendor' and charindex('left join vendor on A.csup_id=vendor.cvencode',@leftjoinonSql)=0 THEN ' left join vendor on A.csup_id=vendor.cvencode' WHEN @ass = 'dcustomer' and charindex('left join customer on A.ccus_id=customer.ccuscode',@leftjoinonSql)=0 THEN ' left join customer on A.ccus_id=customer.ccuscode ' WHEN @ass = 'dvendor' and charindex('left join vendor on A.csup_id=vendor.cvencode',@leftjoinonSql)=0 THEN ' left join vendor on A.csup_id=vendor.cvencode ' WHEN @ass = 'citem' THEN ' left join '+(case when @citem_class='ch' then 'inventory f on a.citem_id=f.cinvcode' else 'fitemss'+@citem_class+' f on A.citem_id=f.citemcode ' end) WHEN @ass = 'groupcode' THEN ' left join GF_VgroupStruct on A.cdept_id=GF_VgroupStruct.cComCode ' ELSE '' END SET @ibanlanceSql = CASE WHEN @ass = 'ibanlance' THEN ( CASE WHEN @asscodebegin = '' THEN '' ELSE ' and abs(me)>=' + @asscodebegin + '' END ) + ( CASE WHEN @asscodeend = '' THEN '' ELSE ' and abs(me)<=' + @asscodeend + '' END ) + ' ' ELSE '' END FETCH NEXT FROM sqlcursor INTO @ass,@asscodebegin,@asscodeend ; END CLOSE sqlcursor DEALLOCATE sqlcursor print 'selectSql:-'+@selectSql
SQL multi-table join question
table: log_create_player

player_id | uid | created_time
1 | 11 | 121221
2 | 22 | 121212

table: log_login

player_id | action_time
1 | 22323
1 | 33434

table: payment

pay_user_id | pay_amount | exchage
22 | 33 | 0.333

I want to get:

player_id | max(action_time) | times | doll
1 | 33434 | 2 | 9.999

The SQL statement:

```
DB::connection($this->db_name)->select("select lcp.player_id,lcp.player_name,from_unixtime(created_time),from_unixtime(max(action_time)),count(1) as times, sum(o.pay_amount*o.exchange) from log_create_player lcp join log_login ll on lcp.player_id=ll.player_id left join `{$this->db_payment}`.pay_order o on lcp.uid = o.pay_user_id where created_time between {$start_time} and {$end_time} and o.get_payment = 1 group by player_id having max(action_time)<{$interval}");
```

Why does the final result come back empty????
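A likely cause of the empty result: `o.get_payment = 1` sits in the WHERE clause, which turns the LEFT JOIN into an inner join (players with no matching pay_order row have a NULL o.get_payment and are filtered out). A sketch of the fix: move that condition into the ON clause (table and column names as in the question, with the PHP placeholder variables kept):

```sql
select lcp.player_id,
       from_unixtime(lcp.created_time),
       from_unixtime(max(ll.action_time)),
       count(1) as times,
       sum(o.pay_amount * o.exchange) as doll
from log_create_player lcp
join log_login ll on lcp.player_id = ll.player_id
left join pay_order o
       on lcp.uid = o.pay_user_id
      and o.get_payment = 1            -- moved here: keeps players with no payments
where lcp.created_time between {$start_time} and {$end_time}
group by lcp.player_id
having max(ll.action_time) < {$interval};
```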
coreseek: searching for Chinese terms returns no results, but English words work fine
Indexing runs fine:

```
[root@abc testpack]# /usr/local/coreseek/bin/indexer -c etc/sphinx.conf --all
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
indexing index 'test1'...
WARNING: Attribute count is 0: switching to none docinfo
collected 5 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 5 docs, 186 bytes
total 0.064 sec, 2870 bytes/sec, 77.16 docs/sec
total 2 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 6 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
```

Searching for Chinese returns no results:

```
[root@abc testpack]# /usr/local/coreseek/bin/search -c etc/sphinx.conf '水火不容'
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
index 'test1': query '水火不容 ': returned 0 matches of 0 total in 0.000 sec

words:
1. '水火': 0 documents, 0 hits
2. '不容': 0 documents, 0 hits
```

Searching for an English word does return results:

```
[root@abc testpack]# /usr/local/coreseek/bin/search -c etc/sphinx.conf 'apple'
Coreseek Fulltext 4.1 [ Sphinx 2.0.2-dev (r2922)]
Copyright (c) 2007-2011, Beijing Choice Software Technologies Inc (http://www.coreseek.com)

using config file 'etc/sphinx.conf'...
index 'test1': query 'apple ': returned 1 matches of 1 total in 0.001 sec

displaying matches:
1. document=5, weight=2780
        id=5
        title=apple
        content=apple,banana

words:
1. 'apple': 1 documents, 2 hits
```

This is the database:

```
mysql> select * from tt;
+----+--------------+-----------------+
| id | title        | content         |
+----+--------------+-----------------+
|  1 | 西水         | 水水            |
|  2 | 水火不容     | 水火不容        |
|  3 | 水啊啊       | 啊水货          |
|  4 | 东南西水     | 啊西西哈哈      |
|  5 | apple        | apple,banana    |
+----+--------------+-----------------+
5 rows in set (0.00 sec)
```

Below is the config file:

# # Sphinx configuration file sample # # WARNING! While this sample file mentions all available options, # it contains (very) short helper descriptions only. Please refer to # doc/sphinx.html for details.
日均350000亿接入量,腾讯TubeMQ性能超过Kafka
整理 | 夕颜出品 | AI科技大本营(ID:rgznai100)【导读】近日,腾讯开源动作不断,相继开源了分布式消息中间件TubeMQ,基于最主流的 OpenJDK8开发的Tencent Kona JDK,分布式HTAP数据库 TBase,企业级容器平台TKEStack,以及高性能图计算框架Plato。短短一周之内,腾讯开源了五大重点项目。其中,TubeMQ是腾讯大数据平台部门应用的核心组件,...
8年经验面试官详解 Java 面试秘诀
作者 |胡书敏 责编 | 刘静 出品 | CSDN(ID:CSDNnews) 本人目前在一家知名外企担任架构师,而且最近八年来,在多家外企和互联网公司担任Java技术面试官,前后累计面试了有两三百位候选人。在本文里,就将结合本人的面试经验,针对Java初学者、Java初级开发和Java开发,给出若干准备简历和准备面试的建议。 Java程序员准备和投递简历的实...
面试官如何考察你的思维方式?
1.两种思维方式在求职面试中,经常会考察这种问题:北京有多少量特斯拉汽车?某胡同口的煎饼摊一年能卖出多少个煎饼?深圳有多少个产品经理?一辆公交车里能装下多少个乒乓球?一个正常成年人有多少根头发?这类估算问题,被称为费米问题,是以科学家费米命名的。为什么面试会问这种问题呢?这类问题能把两类人清楚地区分出来。一类是具有文科思维的人,擅长赞叹和模糊想象,它主要依靠的是人的第一反应和直觉,比如小孩...
so easy! 10行代码写个"狗屁不通"文章生成器
前几天,GitHub 有个开源项目特别火,只要输入标题就可以生成一篇长长的文章。 背后实现代码一定很复杂吧,里面一定有很多高深莫测的机器学习等复杂算法 不过,当我看了源代码之后 这程序不到50行 尽管我有多年的Python经验,但我竟然一时也没有看懂 当然啦,原作者也说了,这个代码也是在无聊中诞生的,平时撸码是不写中文变量名的, 中文...
知乎高赞:中国有什么拿得出手的开源软件产品?(整理自本人原创回答)
知乎高赞:中国有什么拿得出手的开源软件产品? 在知乎上,有个问题问“中国有什么拿得出手的开源软件产品(在 GitHub 等社区受欢迎度较好的)?” 事实上,还不少呢~ 本人于2019.7.6进行了较为全面的回答,对这些受欢迎的 Github 开源项目分类整理如下: 分布式计算、云平台相关工具类 1.SkyWalking,作者吴晟、刘浩杨 等等 仓库地址: apache/skywalking 更...
MySQL数据库总结
一、数据库简介 数据库(Database,DB)是按照数据结构来组织,存储和管理数据的仓库。 典型特征:数据的结构化、数据间的共享、减少数据的冗余度,数据的独立性。 关系型数据库:使用关系模型把数据组织到数据表(table)中。现实世界可以用数据来描述。 主流的关系型数据库产品:Oracle(Oracle)、DB2(IBM)、SQL Server(MS)、MySQL(Oracle)。 数据表:数...
相关热词 c# 图片上传 c# gdi 占用内存 c#中遍历字典 c#控制台模拟dos c# 斜率 最小二乘法 c#进程延迟 c# mysql完整项目 c# grid 总行数 c# web浏览器插件 c# xml 生成xsd
立即提问