How do I replace every occurrence of a certain number in an array with another number in Python?


4 answers

import numpy as np

arr = np.array([1, 2, 3, 4, 3, 2, 3, 6, 2, 5])
num = 2    # the number you want to replace
NUM = 10   # the value to replace it with
index = (arr == num)   # boolean mask marking every position equal to num
arr[index] = NUM       # assign the new value at all masked positions in place
print(arr)
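
For comparison, a minimal sketch of the same replacement using np.where, which builds a new array instead of modifying arr in place (the variable names just mirror the answer above):

import numpy as np

arr = np.array([1, 2, 3, 4, 3, 2, 3, 6, 2, 5])
num, NUM = 2, 10
# np.where picks NUM wherever the condition holds and keeps the original
# element elsewhere, so arr itself is left untouched
result = np.where(arr == num, NUM, arr)
print(result)   # [ 1 10  3  4  3 10  3  6 10  5]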


# -*- coding: UTF-8 -*-
arr = [1, 2, 3, 4, 4, 5, 6, 7, 4, 3, 1, 2, 2]
find = 2           # the value to look for
replacewith = -1   # the value to substitute
# walk the list by index and overwrite every matching element in place
for i in range(len(arr)):
    if arr[i] == find:
        arr[i] = replacewith
print(arr)

[1, -1, 3, 4, 4, 5, 6, 7, 4, 3, 1, -1, -1]
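
The same replacement is often written as a list comprehension that builds a new list rather than editing in place; a minimal sketch (names follow the answer above):

arr = [1, 2, 3, 4, 4, 5, 6, 7, 4, 3, 1, 2, 2]
find, replacewith = 2, -1
# swap in replacewith wherever the element equals find, keep everything else
arr = [replacewith if x == find else x for x in arr]
print(arr)   # [1, -1, 3, 4, 4, 5, 6, 7, 4, 3, 1, -1, -1]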

arr = [1, 2, 3, 4, 4, 5, 6, 7, 4, 3, 1, 2, 2]
f = 2
replacewith = -1
for i in range(len(arr)):
    if arr[i] == f:
        arr[i] = replacewith
print(arr)

lst = [4, 5, 6, 7, 8, 9]   # named lst to avoid shadowing the built-in list type
num = 6        # the number you want to replace
new_num = 10   # the new number to put in its place
# list.index returns the position of the FIRST match only,
# so this replaces a single occurrence, not all of them
index = lst.index(num)
lst[index] = new_num

print(lst)
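
Because list.index only finds the first match, the answer above changes one element at most. A minimal sketch of one way to repeat the lookup until every occurrence is replaced (the sample list here is made up for illustration):

lst = [4, 6, 5, 6, 7, 6]
num = 6
new_num = 10
# keep replacing the first remaining match while num is still present
while num in lst:
    lst[lst.index(num)] = new_num
print(lst)   # [4, 10, 5, 10, 7, 10]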

index.write("<td>%s</td>" % fileset["name"]) for kind in ["inputs", "outputs", "targets"]: index.write("<td><img src='images/%s'></td>" % fileset[kind]) index.write("</tr>") return index_path # 转变图像的尺寸、并且将[0,1]--->[0,255] def convert(image): if aspect_ratio != 1.0: # upscale to correct aspect ratio size = [CROP_SIZE, int(round(CROP_SIZE * aspect_ratio))] image = tf.image.resize_images(image, size=size, method=tf.image.ResizeMethod.BICUBIC) # 将数据的类型转换为8位无符号整型 return tf.image.convert_image_dtype(image, dtype=tf.uint8, saturate=True) # 主函数 def train(): # 设置随机数种子的值 global seed if seed is None: seed = random.randint(0, 2 ** 31 - 1) tf.set_random_seed(seed) np.random.seed(seed) random.seed(seed) # 创建目录 if not os.path.exists(train_output_dir): os.makedirs(train_output_dir) # 加载数据集,得到输入数据和目标数据并把范围变为 :[-1,1] examples = load_examples(train_input_dir) print("load successful ! examples count = %d" % examples.count) # 创建模型,inputs和targets是:[batch_size, height, width, channels] # 返回值: model = create_model(examples.inputs, examples.targets) print("create model successful!") # 图像处理[-1, 1] => [0, 1] inputs = deprocess(examples.inputs) targets = deprocess(examples.targets) outputs = deprocess(model.outputs) # 把[0,1]的像素点转为RGB值:[0,255] with tf.name_scope("convert_inputs"): converted_inputs = convert(inputs) with tf.name_scope("convert_targets"): converted_targets = convert(targets) with tf.name_scope("convert_outputs"): converted_outputs = convert(outputs) # 对图像进行编码以便于保存 with tf.name_scope("encode_images"): display_fetches = { "paths": examples.paths, # tf.map_fn接受一个函数对象和集合,用函数对集合中每个元素分别处理 "inputs": tf.map_fn(tf.image.encode_png, converted_inputs, dtype=tf.string, name="input_pngs"), "targets": tf.map_fn(tf.image.encode_png, converted_targets, dtype=tf.string, name="target_pngs"), "outputs": tf.map_fn(tf.image.encode_png, converted_outputs, dtype=tf.string, name="output_pngs"), } with tf.name_scope("parameter_count"): parameter_count = tf.reduce_sum([tf.reduce_prod(tf.shape(v)) for v in tf.trainable_variables()]) # 只保存最新一个checkpoint saver = tf.train.Saver(max_to_keep=20) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) print("parameter_count =", sess.run(parameter_count)) if max_epochs is not None: max_steps = examples.steps_per_epoch * max_epochs # 400X200=80000 # 因为是从文件中读取数据,所以需要启动start_queue_runners() # 这个函数将会启动输入管道的线程,填充样本到队列中,以便出队操作可以从队列中拿到样本。 coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) # 运行训练集 print("begin trainning......") print("max_steps:", max_steps) start = time.time() for step in range(max_steps): def should(freq): return freq > 0 and ((step + 1) % freq == 0 or step == max_steps - 1) print("step:", step) # 定义一个需要run的所有操作的字典 fetches = { "train": model.train } # progress_freq为 50,每50次计算一次三个损失,显示进度 if should(progress_freq): fetches["discrim_loss"] = model.discrim_loss fetches["gen_loss_GAN"] = model.gen_loss_GAN fetches["gen_loss_L1"] = model.gen_loss_L1 # display_freq为 50,每50次保存一次输入、目标、输出的图像 if should(display_freq): fetches["display"] = display_fetches # 运行各种操作, results = sess.run(fetches) # display_freq为 50,每50次保存输入、目标、输出的图像 if should(display_freq): print("saving display images") filesets = save_images(train_output_dir, results["display"], step=step) append_index(train_output_dir, filesets, step=True) # progress_freq为 50,每50次打印一次三种损失的大小,显示进度 if should(progress_freq): # global_step will have the correct step count if we resume from a checkpoint train_epoch = math.ceil(step / examples.steps_per_epoch) train_step = (step - 1) % 
examples.steps_per_epoch + 1 rate = (step + 1) * batch_size / (time.time() - start) remaining = (max_steps - step) * batch_size / rate print("progress epoch %d step %d image/sec %0.1f remaining %dm" % ( train_epoch, train_step, rate, remaining / 60)) print("discrim_loss", results["discrim_loss"]) print("gen_loss_GAN", results["gen_loss_GAN"]) print("gen_loss_L1", results["gen_loss_L1"]) # save_freq为500,每500次保存一次模型 if should(save_freq): print("saving model") saver.save(sess, os.path.join(train_output_dir, "model"), global_step=step) # 测试 def test(): # 设置随机数种子的值 global seed if seed is None: seed = random.randint(0, 2 ** 31 - 1) tf.set_random_seed(seed) np.random.seed(seed) random.seed(seed) # 创建目录 if not os.path.exists(test_output_dir): os.makedirs(test_output_dir) if checkpoint is None: raise Exception("checkpoint required for test mode") # disable these features in test mode scale_size = CROP_SIZE flip = False # 加载数据集,得到输入数据和目标数据 examples = load_examples(test_input_dir) print("load successful ! examples count = %d" % examples.count) # 创建模型,inputs和targets是:[batch_size, height, width, channels] model = create_model(examples.inputs, examples.targets) print("create model successful!") # 图像处理[-1, 1] => [0, 1] inputs = deprocess(examples.inputs) targets = deprocess(examples.targets) outputs = deprocess(model.outputs) # 把[0,1]的像素点转为RGB值:[0,255] with tf.name_scope("convert_inputs"): converted_inputs = convert(inputs) with tf.name_scope("convert_targets"): converted_targets = convert(targets) with tf.name_scope("convert_outputs"): converted_outputs = convert(outputs) # 对图像进行编码以便于保存 with tf.name_scope("encode_images"): display_fetches = { "paths": examples.paths, # tf.map_fn接受一个函数对象和集合,用函数对集合中每个元素分别处理 "inputs": tf.map_fn(tf.image.encode_png, converted_inputs, dtype=tf.string, name="input_pngs"), "targets": tf.map_fn(tf.image.encode_png, converted_targets, dtype=tf.string, name="target_pngs"), "outputs": tf.map_fn(tf.image.encode_png, converted_outputs, dtype=tf.string, name="output_pngs"), } sess = tf.InteractiveSession() saver = tf.train.Saver(max_to_keep=1) ckpt = tf.train.get_checkpoint_state(checkpoint) saver.restore(sess,ckpt.model_checkpoint_path) start = time.time() coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) for step in range(examples.count): results = sess.run(display_fetches) filesets = save_images(test_output_dir, results) for i, f in enumerate(filesets): print("evaluated image", f["name"]) index_path = append_index(test_output_dir, filesets) print("wrote index at", index_path) print("rate", (time.time() - start) / max_steps) if __name__ == '__main__': train() #test()
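As a footnote to the two explanatory docstrings in the listing above (the tf.control_dependencies / tf.identity remark near the top, and the ExponentialMovingAverage remark inside create_model), here is a minimal, standalone sketch of both patterns, assuming TensorFlow 1.x graph mode; the names `raw`, `checked` and `loss` are made up for illustration and are not part of the article's code.

```python
# Minimal sketch (TensorFlow 1.x graph mode); `raw`, `checked` and `loss` are hypothetical names.
import tensorflow as tf

raw = tf.placeholder(tf.float32, shape=[None, None, 3], name="raw")
# tf.Assert is an op; on its own it would be a dangling node that nothing forces to run.
assertion = tf.Assert(tf.equal(tf.shape(raw)[2], 3), ["image does not have 3 channels"])
with tf.control_dependencies([assertion]):
    # tf.identity creates a new node inside the block, so the assertion is now
    # guaranteed to run before `checked` is evaluated.
    checked = tf.identity(raw)

# Exponential moving average: a shadow value updated as
#   shadow = decay * shadow + (1 - decay) * value
loss = tf.Variable(1.0, trainable=False, name="loss")
ema = tf.train.ExponentialMovingAverage(decay=0.99)
update_loss_ema = ema.apply([loss])   # op that updates the shadow variable
smoothed_loss = ema.average(loss)     # the shadow (smoothed) value
```

Note that ema.apply() returns an op that must actually be run for the shadow value to move; in the listing above it is bundled into model.train with tf.group, and ema.average() is what the returned Model object exposes as the smoothed losses.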
"no kernel image is available for execution on the device": is this a compute-capability mismatch?

I hit this error while running the gipuma source code downloaded from the web. Thinking it might be a CUDA version problem, I also tried it on a laptop. The core source file is https://github.com/kysucix/gipuma/blob/master/gipuma.cu

Laptop: Windows 10, VS2015, OpenCV 2.4.13, CUDA 9.0, GTX 950M (compute capability 5.0), driver version 388.73
Desktop: Windows 7 Home, VS2015, OpenCV 2.4.13, CUDA 8.0, Quadro K2000 (compute capability 3.0), driver version 417.35

Searching online suggested the code generation settings were wrong (-arch specifies the GPU architecture), so in CMakeLists I changed
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-O3 --use_fast_math --ptxas-options=-v -std=c++11 --compiler-options -Wall -gencode arch=compute_30,code=sm_30 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_61,code=sm_61)
to, on the laptop,
set(CUDA_NVCC_FLAGS_RELEASE ${CUDA_NVCC_FLAGS};-O3 --use_fast_math --ptxas-options=-v -std=c++11 --compiler-options -Wall -arch=sm_50 -gencode=arch=compute_50,code=sm_50)
and, on the desktop,
set(CUDA_NVCC_FLAGS_RELEASE ${CUDA_NVCC_FLAGS};-O3 --use_fast_math --ptxas-options=-v -std=c++11 --compiler-options -Wall -arch=sm_30 -gencode=arch=compute_30,code=sm_30)
It still fails, and so does every other plausible number I tried. I also modified the CUDA C/C++ settings in the project property pages generated by CMake (not sure whether that was the right place), with no luck. Please help me take a look! If any part of the description is unclear, I will do my best to explain. PS: my reputation points are used up; I will add a bounty once I earn some back.