A question about the GROUP_CONCAT function

```sql
SELECT btp.phy_obj_id AS phyobjid,
       GROUP_CONCAT(btf.file_url SEPARATOR ',') AS fileurl,
       btp.phy_obj_ip AS phyobjip
FROM b_task_phy btp
LEFT JOIN b_task_file btf ON btp.file_id = btf.id
WHERE btp.task_id = 9 AND btp.STATUS IN (0, 3)
GROUP BY btp.phy_obj_id;
```

(screenshots of the result set omitted)

The SQL above is a join query; running it returns a List<Map>.

(A quick side question: how do I get List<Map> to display here without a space between the angle bracket and Map? If you don't know what I mean, try typing it in the question editor.)

OK, back to the topic. For various reasons I can no longer use a join query and have to stick to single-table queries, so:
```sql
SELECT btp.phy_obj_id AS phyobjid, btp.phy_obj_ip AS phyobjip, btp.`file_id`
FROM b_task_phy btp
WHERE btp.task_id = 9 AND btp.STATUS IN (0, 3)
GROUP BY btp.phy_obj_id;
```

(screenshot of the result set omitted)

and:
```sql
SELECT GROUP_CONCAT(file_url SEPARATOR ',') AS file_url
FROM b_task_file
WHERE id IN ( file_id );
```

(screenshot of the result set omitted)

The values inside the IN clause are the set of file_id values returned by the previous query.
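Since that IN list only exists at runtime, the second statement has to be assembled dynamically. Below is a minimal JDBC-style sketch of one way to do that; the class/method names are purely illustrative, and it deliberately selects id alongside file_url (rather than only the GROUP_CONCAT string) so the URLs can later be matched back to each row's file_id.

```java
import java.sql.*;
import java.util.*;

public class FileUrlDao {

    // Loads (id, file_url) pairs for the given file_id values.
    // Placeholders are generated so the statement stays parameterised.
    public static List<Map<String, Object>> loadFileUrls(Connection conn, List<Long> fileIds) throws SQLException {
        if (fileIds.isEmpty()) {
            return Collections.emptyList();
        }
        String placeholders = String.join(",", Collections.nCopies(fileIds.size(), "?"));
        String sql = "SELECT id, file_url FROM b_task_file WHERE id IN (" + placeholders + ")";

        List<Map<String, Object>> rows = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < fileIds.size(); i++) {
                ps.setLong(i + 1, fileIds.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    Map<String, Object> row = new LinkedHashMap<>();
                    row.put("id", rs.getLong("id"));
                    row.put("file_url", rs.getString("file_url"));
                    rows.add(row);
                }
            }
        }
        return rows;
    }
}
```

If the query must keep its GROUP_CONCAT form instead, the id column can simply be added to the SELECT and GROUP BY so that each concatenated string can still be tied to its file_id.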

With the preconditions out of the way, my question is: how do I process the List<Map> results of these single-table queries on the back end (in Java) so that I end up with the same result as the join query? Or is there a better approach altogether?

qq_41556907
新船长 replying to tkzc_shark: I have no idea what's going on either, and I don't even dare ask. All I did was answer a few questions and those guys downvoted me like crazy; now I'm afraid to answer anything.
7 months ago · reply
tiankongzhichenglyf
tkzc_shark: Whoa, that's some reputation score.
7 months ago · reply

1 Answer

You want to stitch the data from your last two SQL queries together so that it matches what the first join query returns, right? Take the results of those two queries, match them up by file_id, and put the combined rows into a new List; that will give you the same result as the join query. Give it a try!
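A minimal Java sketch of the matching this answer describes, assuming the task rows come from the first single-table query and the file rows carry an id column as in the DAO sketch above (all names are illustrative):

```java
import java.util.*;
import java.util.stream.Collectors;

public class ResultMerger {

    // Rebuilds the shape of the join query (phyobjid, fileurl, phyobjip)
    // from the two single-table results by matching rows on file_id.
    public static List<Map<String, Object>> merge(List<Map<String, Object>> taskRows,
                                                  List<Map<String, Object>> fileRows) {
        // file_id -> comma-separated file_url list, i.e. what GROUP_CONCAT produced in the join
        Map<Object, String> urlsByFileId = fileRows.stream()
                .collect(Collectors.groupingBy(
                        row -> row.get("id"),
                        Collectors.mapping(row -> String.valueOf(row.get("file_url")),
                                           Collectors.joining(","))));

        List<Map<String, Object>> merged = new ArrayList<>();
        for (Map<String, Object> task : taskRows) {
            Map<String, Object> out = new LinkedHashMap<>();
            out.put("phyobjid", task.get("phyobjid"));
            out.put("fileurl", urlsByFileId.get(task.get("file_id")));
            out.put("phyobjip", task.get("phyobjip"));
            merged.add(out);
        }
        return merged;
    }
}
```

One caveat: the first single-table query groups by phy_obj_id, so if one phy_obj_id has several different file_id values, only one of them survives, whereas the join query would have concatenated the URLs of all of them. In that case the first query would need to return every file_id (for example via GROUP_CONCAT(file_id)) before a merge like this can reproduce the join exactly.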

theUncle
theUncle replying to 新船长: What I mean is, are the file_url values in b_task_file that come back for file_id = 2 all the same? If they are, it's easy to handle.
7 months ago · reply
theUncle
theUncle replying to 新船长: Are the file_url values in the b_task_file table all the same?
7 months ago · reply
qq_41556907
新船长: Thanks for the reply. You mean query file_id from both single tables and then pull out the rows whose file_id matches? I thought about that too, but the file_id values are all identical... in other words, every row returned has the same single file_id. I didn't make that clear in the question.
7 months ago · reply
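Given that clarification (every returned row carries the same single file_id), the merge above collapses to one lookup: run the GROUP_CONCAT query once for that id and copy the resulting string into every row. A small sketch under that assumption, with illustrative names:

```java
import java.util.*;

public class SingleFileIdMerge {

    // fileUrl is the one string produced by
    // SELECT GROUP_CONCAT(file_url SEPARATOR ',') FROM b_task_file WHERE id = <the shared file_id>
    public static List<Map<String, Object>> attach(List<Map<String, Object>> taskRows, String fileUrl) {
        for (Map<String, Object> row : taskRows) {
            row.put("fileurl", fileUrl);
        }
        return taskRows;
    }
}
```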
Other related questions

- How can MySQL's group_concat be implemented in Sybase?
- A problem caused by group_concat's default 1024-byte limit: setting group_concat_max_len globally needs the SUPER privilege, which isn't granted on Alibaba Cloud RDS
- Does Teradata SQL have a group-and-concatenate function like MySQL's group_concat or Hive SQL's collect_set?
- Oracle SQL: how to add a filter condition to WM_CONCAT
- "String buffer too small" error when using the aggregate concatenation function wm_concat
- Oracle LISTAGG reports "ORA-00904: invalid identifier" in a correlated subquery under GROUP BY; wm_concat works, but the separator should be a semicolon
- A MySQL user-defined function built on group_concat and FIND_IN_SET fails with "Data too long for column" (probably an endless loop)
- How to aggregate grouped column values into a string in JPQL, like group_concat or wm_concat
- How to understand GROUP BY under MySQL 8's only_full_group_by mode
- MySQL user-defined function error: "No RETURN found in FUNCTION"
- Querying a tree structure in MariaDB with GROUP_CONCAT and FIND_IN_SET is very slow
- MySQL CREATE FUNCTION fails and the author can't find the error
- An Oracle user-defined function returns different results from running the same query directly
- MySQL recursive tree query with the matching IDs stored in a temporary table
- How to use different input and output image resolutions in a TensorFlow pix2pix implementation
- A training script (Keras retina U-Net) stops halfway through without reporting an error
- How to use SQL COUNT results in an SSM project and display them in a JSP list