1. main.py

The main program of this project, `main.py`, works as follows:

- Import the required libraries and modules: TensorFlow, the custom `FaceAging` module, and the `os` and `argparse` libraries.
- Define the `str2bool` function: a helper that converts a command-line string into a boolean value.
- Create the command-line argument parser: `argparse.ArgumentParser` defines the options, such as whether to train, the number of epochs, the dataset name, and so on.
- The entry function `main(_)`: prints the parsed settings and configures the TensorFlow session (e.g. GPU memory growth).
- Inside `with tf.Session(config=config) as session`: a `FaceAging` model instance is created from the session, the training-mode flag, the save directory, and the dataset name.
- In training mode: depending on the flags, the model either continues from a previously trained checkpoint, or first runs a pre-training phase; once pre-training finishes, formal training starts by calling the model's `train` method with the number of epochs.
- Otherwise: the program enters testing mode and calls the model's `custom_test` method with the directory of test images.
- Under `if __name__ == '__main__'`: the command-line arguments are parsed and the main function is executed.

```python
import tensorflow as tf
from FaceAging import FaceAging  # import the custom FaceAging module
from os import environ
import argparse

# control the TensorFlow logging level via an environment variable
environ['TF_CPP_MIN_LOG_LEVEL'] = '3'


# convert a string to a boolean value
def str2bool(v):
    if v.lower() in ('yes', 'true', 't', 'y', '1'):
        return True
    elif v.lower() in ('no', 'false', 'f', 'n', '0'):
        return False
    else:
        raise argparse.ArgumentTypeError('Boolean value expected.')


# create the command-line argument parser
parser = argparse.ArgumentParser(description='CAAE')
parser.add_argument('--is_train', type=str2bool, default=True, help='whether to train')
parser.add_argument('--epoch', type=int, default=50, help='number of training epochs')
parser.add_argument('--dataset', type=str, default='UTKFace', help='name of the training dataset stored in ./data')
parser.add_argument('--savedir', type=str, default='save', help='directory for checkpoints, intermediate training results, and summaries')
parser.add_argument('--testdir', type=str, default='None', help='directory of the testing images')
parser.add_argument('--use_trained_model', type=str2bool, default=True, help='whether to continue training from an existing model')
parser.add_argument('--use_init_model', type=str2bool, default=True, help='if no existing model is found, whether to start from the init model')
FLAGS = parser.parse_args()


# main entry point
def main(_):
    # print the settings
    import pprint
    pprint.pprint(FLAGS)

    # configure the TensorFlow session
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True

    with tf.Session(config=config) as session:
        # create the FaceAging model instance
        model = FaceAging(
            session,                     # TensorFlow session
            is_training=FLAGS.is_train,  # flag for training or testing mode
            save_dir=FLAGS.savedir,      # path to save checkpoints, samples, and summaries
            dataset_name=FLAGS.dataset   # name of the dataset stored in ./data
        )
        if FLAGS.is_train:
            print('\n\tTraining Mode')
            if not FLAGS.use_trained_model:
                print('\n\tPre-train the network')
                model.train(
                    num_epochs=10,  # number of epochs
                    use_trained_model=FLAGS.use_trained_model,
                    use_init_model=FLAGS.use_init_model,
                    weights=(0, 0, 0)
                )
                print('\n\tPre-training is done! The training will start.')
            model.train(
                num_epochs=FLAGS.epoch,  # number of epochs
                use_trained_model=FLAGS.use_trained_model,
                use_init_model=FLAGS.use_init_model
            )
        else:
            print('\n\tTesting Mode')
            model.custom_test(testing_samples_dir=FLAGS.testdir + '/*jpg')


if __name__ == '__main__':
    # parse the command-line arguments and run the main function
    tf.app.run()
```

2. FaceAging.py

The main workflow:

- Import the required libraries and modules: NumPy, TensorFlow, etc., plus the custom operations in `ops.py`.
- The `FaceAging` class: the initializer sets the model parameters (input image size, network layer sizes, training parameters, and so on) and creates the input nodes of the TensorFlow graph. It then builds the graph structure (encoder, generator, discriminators), defines the loss functions (generator, discriminator, total variation (TV)), and collects the summaries used for TensorBoard visualization.
- `train` method: loads the list of training file names, defines the optimizers and losses, and trains the model. In each epoch it draws batches of training images, computes and updates the generator and discriminator parameters, and prints the training progress; it saves intermediate checkpoints and sample images for visualization, and saves the final model when training ends.
- `encoder` method: maps an input image to the corresponding latent code (noise/feature vector).
- `generator` method: concatenates the latent code with the age and gender labels and generates a face image of the corresponding age group.
- `discriminator_z` and `discriminator_img` methods: discriminate on latent codes and on images, respectively.
- `save_checkpoint` and `load_checkpoint` methods: save and restore model checkpoints during training.
- `sample` and `test` methods: generate sample images and save intermediate training results as pictures.
- `custom_test` method: loads the model and produces aging results for user-supplied face images.

```python
from __future__ import division
import os
import time
from glob import glob
import tensorflow as tf
import numpy as np
from scipy.io import savemat
from ops import *


class FaceAging(object):
    def __init__(self,
                 session,  # TensorFlow session
                 size_image=128,  # size of the input images
                 size_kernel=5,  # size of the kernels in convolution and deconvolution
                 size_batch=100,  # mini-batch size for training and testing, must be square of an integer
                 num_input_channels=3,  # number of channels of input images
                 num_encoder_channels=64,  # number of channels of the first conv layer of encoder
                 num_z_channels=50,  # number of channels of the layer z (noise or code)
                 num_categories=10,  # number of categories (age segments) in the training dataset
                 num_gen_channels=1024,  # number of channels of the first deconv layer of generator
                 enable_tile_label=True,  # enable to tile the label
                 tile_ratio=1.0,  # ratio of the length between tiled label and z
                 is_training=True,  # flag for training or testing mode
                 save_dir='./save',  # path to save checkpoints, samples, and summary
                 dataset_name='UTKFace'  # name of the dataset in the folder ./data
                 ):
        self.session = session
        self.image_value_range = (-1, 1)
        self.size_image = size_image
        self.size_kernel = size_kernel
        self.size_batch = size_batch
        self.num_input_channels = num_input_channels
        self.num_encoder_channels = num_encoder_channels
        self.num_z_channels = num_z_channels
        self.num_categories = num_categories
        self.num_gen_channels = num_gen_channels
        self.enable_tile_label = enable_tile_label
        self.tile_ratio = tile_ratio
        self.is_training = is_training
        self.save_dir = save_dir
        self.dataset_name = dataset_name

        # ************************************* input to graph ********************************************************
        self.input_image = tf.placeholder(
            tf.float32,
            [self.size_batch, self.size_image, self.size_image, self.num_input_channels],
            name='input_images'
        )
        self.age = tf.placeholder(
            tf.float32,
            [self.size_batch, self.num_categories],
            name='age_labels'
        )
        self.gender = tf.placeholder(
            tf.float32,
            [self.size_batch, 2],
            name='gender_labels'
        )
        self.z_prior = tf.placeholder(
            tf.float32,
            [self.size_batch, self.num_z_channels],
            name='z_prior'
        )

        # ************************************* build the graph *******************************************************
        print('\n\tBuilding graph ...')

        # encoder: input image --> z
        self.z = self.encoder(image=self.input_image)

        # generator: z + label --> generated image
        self.G = self.generator(
            z=self.z,
            y=self.age,
            gender=self.gender,
            enable_tile_label=self.enable_tile_label,
            tile_ratio=self.tile_ratio
        )

        # discriminator on z
        self.D_z, self.D_z_logits = self.discriminator_z(
            z=self.z,
            is_training=self.is_training
        )

        # discriminator on G
        self.D_G, self.D_G_logits = self.discriminator_img(
            image=self.G,
            y=self.age,
            gender=self.gender,
            is_training=self.is_training
        )

        # discriminator on z_prior
        self.D_z_prior, self.D_z_prior_logits = self.discriminator_z(
            z=self.z_prior,
            is_training=self.is_training,
            reuse_variables=True
        )

        # discriminator on input image
        self.D_input, self.D_input_logits = self.discriminator_img(
            image=self.input_image,
            y=self.age,
            gender=self.gender,
            is_training=self.is_training,
            reuse_variables=True
        )

        # ************************************* loss functions *******************************************************
        # loss function of encoder + generator
        # self.EG_loss = tf.nn.l2_loss(self.input_image - self.G) / self.size_batch  # L2 loss
        self.EG_loss = tf.reduce_mean(tf.abs(self.input_image - self.G))  # L1 loss

        # loss function of discriminator on z
        self.D_z_loss_prior = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(logits=self.D_z_prior_logits,
                                                    labels=tf.ones_like(self.D_z_prior_logits))
        )
        self.D_z_loss_z = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(logits=self.D_z_logits,
                                                    labels=tf.zeros_like(self.D_z_logits))
        )
        self.E_z_loss = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(logits=self.D_z_logits,
                                                    labels=tf.ones_like(self.D_z_logits))
        )

        # loss function of discriminator on image
        self.D_img_loss_input = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(logits=self.D_input_logits,
                                                    labels=tf.ones_like(self.D_input_logits))
        )
        self.D_img_loss_G = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(logits=self.D_G_logits,
                                                    labels=tf.zeros_like(self.D_G_logits))
        )
        self.G_img_loss = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(logits=self.D_G_logits,
                                                    labels=tf.ones_like(self.D_G_logits))
        )

        # total variation to smooth the generated image
        tv_y_size = self.size_image
        tv_x_size = self.size_image
        self.tv_loss = (
            (tf.nn.l2_loss(self.G[:, 1:, :, :] - self.G[:, :self.size_image - 1, :, :]) / tv_y_size) +
            (tf.nn.l2_loss(self.G[:, :, 1:, :] - self.G[:, :, :self.size_image - 1, :]) / tv_x_size)
        ) / self.size_batch

        # *********************************** trainable variables ****************************************************
        trainable_variables = tf.trainable_variables()
        # variables of encoder
        self.E_variables = [var for var in trainable_variables if 'E_' in var.name]
        # variables of generator
        self.G_variables = [var for var in trainable_variables if 'G_' in var.name]
        # variables of discriminator on z
        self.D_z_variables = [var for var in trainable_variables if 'D_z_' in var.name]
        # variables of discriminator on image
        self.D_img_variables = [var for var in trainable_variables if 'D_img_' in var.name]

        # ************************************* collect the summary ***************************************
        self.z_summary = tf.summary.histogram('z', self.z)
        self.z_prior_summary = tf.summary.histogram('z_prior', self.z_prior)
        self.EG_loss_summary = tf.summary.scalar('EG_loss', self.EG_loss)
        self.D_z_loss_z_summary = tf.summary.scalar('D_z_loss_z', self.D_z_loss_z)
        self.D_z_loss_prior_summary = tf.summary.scalar('D_z_loss_prior', self.D_z_loss_prior)
        self.E_z_loss_summary = tf.summary.scalar('E_z_loss', self.E_z_loss)
        self.D_z_logits_summary = tf.summary.histogram('D_z_logits', self.D_z_logits)
        self.D_z_prior_logits_summary = tf.summary.histogram('D_z_prior_logits', self.D_z_prior_logits)
        self.D_img_loss_input_summary = tf.summary.scalar('D_img_loss_input', self.D_img_loss_input)
        self.D_img_loss_G_summary = tf.summary.scalar('D_img_loss_G', self.D_img_loss_G)
        self.G_img_loss_summary = tf.summary.scalar('G_img_loss', self.G_img_loss)
        self.D_G_logits_summary = tf.summary.histogram('D_G_logits', self.D_G_logits)
        self.D_input_logits_summary = tf.summary.histogram('D_input_logits', self.D_input_logits)
        # for saving the graph and variables
        self.saver = tf.train.Saver(max_to_keep=2)

    def train(self,
              num_epochs=200,  # number of epochs
              learning_rate=0.0002,  # learning rate of optimizer
              beta1=0.5,  # parameter for Adam optimizer
              decay_rate=1.0,  # learning rate decay (0, 1], 1 means no decay
              enable_shuffle=True,  # enable shuffle of the dataset
              use_trained_model=True,  # use the saved checkpoint to initialize the network
              use_init_model=True,  # use the init model to initialize the network
              weights=(0.0001, 0, 0)  # the weights of adversarial loss and TV loss
              ):
        # *************************** load file names of images ******************************************************
        file_names = glob(os.path.join('./data', self.dataset_name, '*.jpg'))
        size_data = len(file_names)
        np.random.seed(seed=2017)
        if enable_shuffle:
            np.random.shuffle(file_names)

        # *********************************** optimizer **************************************************************
        # overall there are three loss functions; weights may differ from the paper because of different datasets
        self.loss_EG = self.EG_loss + weights[0] * self.G_img_loss + \
            weights[1] * self.E_z_loss + weights[2] * self.tv_loss  # slightly increase the params
        self.loss_Dz = self.D_z_loss_prior + self.D_z_loss_z
        self.loss_Di = self.D_img_loss_input + self.D_img_loss_G

        # set learning rate decay
        self.EG_global_step = tf.Variable(0, trainable=False, name='global_step')
        EG_learning_rate = tf.train.exponential_decay(
            learning_rate=learning_rate,
            global_step=self.EG_global_step,
            decay_steps=size_data / self.size_batch * 2,
            decay_rate=decay_rate,
            staircase=True
        )

        # optimizer for encoder + generator
        with tf.variable_scope('opt', reuse=tf.AUTO_REUSE):
            self.EG_optimizer = tf.train.AdamOptimizer(
                learning_rate=EG_learning_rate,
                beta1=beta1
            ).minimize(
                loss=self.loss_EG,
                global_step=self.EG_global_step,
                var_list=self.E_variables + self.G_variables
            )

            # optimizer for discriminator on z
            self.D_z_optimizer = tf.train.AdamOptimizer(
                learning_rate=EG_learning_rate,
                beta1=beta1
            ).minimize(
                loss=self.loss_Dz,
                var_list=self.D_z_variables
            )

            # optimizer for discriminator on image
            self.D_img_optimizer = tf.train.AdamOptimizer(
                learning_rate=EG_learning_rate,
                beta1=beta1
            ).minimize(
                loss=self.loss_Di,
                var_list=self.D_img_variables
            )

        # *********************************** tensorboard *************************************************************
        # for visualization (TensorBoard): $ tensorboard --logdir path/to/log-directory
        self.EG_learning_rate_summary = tf.summary.scalar('EG_learning_rate', EG_learning_rate)
        self.summary = tf.summary.merge([
            self.z_summary, self.z_prior_summary,
            self.D_z_loss_z_summary, self.D_z_loss_prior_summary,
            self.D_z_logits_summary, self.D_z_prior_logits_summary,
            self.EG_loss_summary, self.E_z_loss_summary,
            self.D_img_loss_input_summary, self.D_img_loss_G_summary,
            self.G_img_loss_summary, self.EG_learning_rate_summary,
            self.D_G_logits_summary, self.D_input_logits_summary
        ])
        self.writer = tf.summary.FileWriter(os.path.join(self.save_dir, 'summary'), self.session.graph)

        # ************* get some random samples as testing data to visualize the learning process *********************
        sample_files = file_names[0:self.size_batch]
        file_names[0:self.size_batch] = []
        sample = [load_image(
            image_path=sample_file,
            image_size=self.size_image,
            image_value_range=self.image_value_range,
            is_gray=(self.num_input_channels == 1),
        ) for sample_file in sample_files]
        if self.num_input_channels == 1:
            sample_images = np.array(sample).astype(np.float32)[:, :, :, None]
        else:
            sample_images = np.array(sample).astype(np.float32)
        sample_label_age = np.ones(
            shape=(len(sample_files), self.num_categories),
            dtype=np.float32
        ) * self.image_value_range[0]
        sample_label_gender = np.ones(
            shape=(len(sample_files), 2),
            dtype=np.float32
        ) * self.image_value_range[0]
        for i, label in enumerate(sample_files):
            label = int(str(sample_files[i]).split('/')[-1].split('_')[0])
            if 0 <= label <= 5:
                label = 0
            elif 6 <= label <= 10:
                label = 1
            elif 11 <= label <= 15:
                label = 2
            elif 16 <= label <= 20:
                label = 3
            elif 21 <= label <= 30:
                label = 4
            elif 31 <= label <= 40:
                label = 5
            elif 41 <= label <= 50:
                label = 6
            elif 51 <= label <= 60:
                label = 7
            elif 61 <= label <= 70:
                label = 8
            else:
                label = 9
            sample_label_age[i, label] = self.image_value_range[-1]
            gender = int(str(sample_files[i]).split('/')[-1].split('_')[1])
            sample_label_gender[i, gender] = self.image_value_range[-1]

        # ******************************************* training *******************************************************
        # initialize the graph
        tf.global_variables_initializer().run()

        # load check point
        if use_trained_model:
            if self.load_checkpoint():
                print('\tSUCCESS ^_^')
            else:
                print('\tFAILED >_<!')
                # load init model
                if use_init_model:
                    if not os.path.exists('init_model/model-init.data-00000-of-00001'):
                        from init_model.zip_opt import join
                        try:
                            join('init_model/model_parts', 'init_model/model-init.data-00000-of-00001')
                        except:
                            raise Exception('Error joining files')
                    self.load_checkpoint(model_path='init_model')

        # epoch iteration
        num_batches = len(file_names) // self.size_batch
        for epoch in range(num_epochs):
            if enable_shuffle:
                np.random.shuffle(file_names)
            for ind_batch in range(num_batches):
                start_time = time.time()
                # read batch images and labels
                batch_files = file_names[ind_batch * self.size_batch:(ind_batch + 1) * self.size_batch]
                batch = [load_image(
                    image_path=batch_file,
                    image_size=self.size_image,
                    image_value_range=self.image_value_range,
                    is_gray=(self.num_input_channels == 1),
                ) for batch_file in batch_files]
                if self.num_input_channels == 1:
                    batch_images = np.array(batch).astype(np.float32)[:, :, :, None]
                else:
                    batch_images = np.array(batch).astype(np.float32)
                batch_label_age = np.ones(
                    shape=(len(batch_files), self.num_categories),
                    dtype=np.float32
                ) * self.image_value_range[0]
                batch_label_gender = np.ones(
                    shape=(len(batch_files), 2),
                    dtype=np.float32
                ) * self.image_value_range[0]
                for i, label in enumerate(batch_files):
                    label = int(str(batch_files[i]).split('/')[-1].split('_')[0])
                    if 0 <= label <= 5:
                        label = 0
                    elif 6 <= label <= 10:
                        label = 1
                    elif 11 <= label <= 15:
                        label = 2
                    elif 16 <= label <= 20:
                        label = 3
                    elif 21 <= label <= 30:
                        label = 4
                    elif 31 <= label <= 40:
                        label = 5
                    elif 41 <= label <= 50:
                        label = 6
                    elif 51 <= label <= 60:
                        label = 7
                    elif 61 <= label <= 70:
                        label = 8
                    else:
                        label = 9
                    batch_label_age[i, label] = self.image_value_range[-1]
                    gender = int(str(batch_files[i]).split('/')[-1].split('_')[1])
                    batch_label_gender[i, gender] = self.image_value_range[-1]

                # prior distribution on the prior of z
                batch_z_prior = np.random.uniform(
                    self.image_value_range[0],
                    self.image_value_range[-1],
                    [self.size_batch, self.num_z_channels]
                ).astype(np.float32)

                # update
                _, _, _, EG_err, Ez_err, Dz_err, Dzp_err, Gi_err, DiG_err, Di_err, TV = self.session.run(
                    fetches=[
                        self.EG_optimizer,
                        self.D_z_optimizer,
                        self.D_img_optimizer,
                        self.EG_loss,
                        self.E_z_loss,
                        self.D_z_loss_z,
                        self.D_z_loss_prior,
                        self.G_img_loss,
                        self.D_img_loss_G,
                        self.D_img_loss_input,
                        self.tv_loss
                    ],
                    feed_dict={
                        self.input_image: batch_images,
                        self.age: batch_label_age,
                        self.gender: batch_label_gender,
                        self.z_prior: batch_z_prior
                    }
                )

                print('\nEpoch: [%3d/%3d] Batch: [%3d/%3d]\n\tEG_err=%.4f\tTV=%.4f' %
                      (epoch + 1, num_epochs, ind_batch + 1, num_batches, EG_err, TV))
                print('\tEz=%.4f\tDz=%.4f\tDzp=%.4f' % (Ez_err, Dz_err, Dzp_err))
                print('\tGi=%.4f\tDi=%.4f\tDiG=%.4f' % (Gi_err, Di_err, DiG_err))

                # estimate left run time
                elapse = time.time() - start_time
                time_left = ((num_epochs - epoch - 1) * num_batches + (num_batches - ind_batch - 1)) * elapse
                print('\tTime left: %02d:%02d:%02d' %
                      (int(time_left / 3600), int(time_left % 3600 / 60), time_left % 60))

                # add to summary
                summary = self.summary.eval(
                    feed_dict={
                        self.input_image: batch_images,
                        self.age: batch_label_age,
                        self.gender: batch_label_gender,
                        self.z_prior: batch_z_prior
                    }
                )
                self.writer.add_summary(summary, self.EG_global_step.eval())

            # save sample images for each epoch
            name = '{:02d}.png'.format(epoch + 1)
            self.sample(sample_images, sample_label_age, sample_label_gender, name)
            self.test(sample_images, sample_label_gender, name)

            # save checkpoint every 5 epochs
            if np.mod(epoch, 5) == 4:
                self.save_checkpoint()

        # save the trained model
        self.save_checkpoint()
        # close the summary writer
        self.writer.close()

    def encoder(self, image, reuse_variables=False):
        if reuse_variables:
            tf.get_variable_scope().reuse_variables()
        num_layers = int(np.log2(self.size_image)) - int(self.size_kernel / 2)
        current = image
        # conv layers with stride 2
        for i in range(num_layers):
            name = 'E_conv' + str(i)
            current = conv2d(
                input_map=current,
                num_output_channels=self.num_encoder_channels * (2 ** i),
                size_kernel=self.size_kernel,
                name=name
            )
            current = tf.nn.relu(current)
        # fully connected layer
        name = 'E_fc'
        current = fc(
            input_vector=tf.reshape(current, [self.size_batch, -1]),
            num_output_length=self.num_z_channels,
            name=name
        )
        # output
        return tf.nn.tanh(current)

    def generator(self, z, y, gender, reuse_variables=False, enable_tile_label=True, tile_ratio=1.0):
        if reuse_variables:
            tf.get_variable_scope().reuse_variables()
        num_layers = int(np.log2(self.size_image)) - int(self.size_kernel / 2)
        if enable_tile_label:
            duplicate = int(self.num_z_channels * tile_ratio / self.num_categories)
        else:
            duplicate = 1
        z = concat_label(z, y, duplicate=duplicate)
        if enable_tile_label:
            duplicate = int(self.num_z_channels * tile_ratio / 2)
        else:
            duplicate = 1
        z = concat_label(z, gender, duplicate=duplicate)
        size_mini_map = int(self.size_image / 2 ** num_layers)
        # fc layer
        name = 'G_fc'
        current = fc(
            input_vector=z,
            num_output_length=self.num_gen_channels * size_mini_map * size_mini_map,
            name=name
        )
        # reshape to cube for deconv
        current = tf.reshape(current, [-1, size_mini_map, size_mini_map, self.num_gen_channels])
        current = tf.nn.relu(current)
        # deconv layers with stride 2
        for i in range(num_layers):
            name = 'G_deconv' + str(i)
            current = deconv2d(
                input_map=current,
                output_shape=[self.size_batch,
                              size_mini_map * 2 ** (i + 1),
                              size_mini_map * 2 ** (i + 1),
                              int(self.num_gen_channels / 2 ** (i + 1))],
                size_kernel=self.size_kernel,
                name=name
            )
            current = tf.nn.relu(current)
        name = 'G_deconv' + str(i + 1)
        current = deconv2d(
            input_map=current,
            output_shape=[self.size_batch,
                          self.size_image,
                          self.size_image,
                          int(self.num_gen_channels / 2 ** (i + 2))],
            size_kernel=self.size_kernel,
            stride=1,
            name=name
        )
        current = tf.nn.relu(current)
        name = 'G_deconv' + str(i + 2)
        current = deconv2d(
            input_map=current,
            output_shape=[self.size_batch,
                          self.size_image,
                          self.size_image,
                          self.num_input_channels],
            size_kernel=self.size_kernel,
            stride=1,
            name=name
        )
        # output
        return tf.nn.tanh(current)

    def discriminator_z(self, z, is_training=True, reuse_variables=False,
                        num_hidden_layer_channels=(64, 32, 16), enable_bn=True):
        if reuse_variables:
            tf.get_variable_scope().reuse_variables()
        current = z
        # fully connected layers
        for i in range(len(num_hidden_layer_channels)):
            name = 'D_z_fc' + str(i)
            current = fc(
                input_vector=current,
                num_output_length=num_hidden_layer_channels[i],
                name=name
            )
            if enable_bn:
                name = 'D_z_bn' + str(i)
                current = tf.contrib.layers.batch_norm(
                    current,
                    scale=False,
                    is_training=is_training,
                    scope=name,
                    reuse=reuse_variables
                )
            current = tf.nn.relu(current)
        # output layer
        name = 'D_z_fc' + str(i + 1)
        current = fc(
            input_vector=current,
            num_output_length=1,
            name=name
        )
        return tf.nn.sigmoid(current), current

    def discriminator_img(self, image, y, gender, is_training=True, reuse_variables=False,
                          num_hidden_layer_channels=(16, 32, 64, 128), enable_bn=True):
        if reuse_variables:
            tf.get_variable_scope().reuse_variables()
        num_layers = len(num_hidden_layer_channels)
        current = image
        # conv layers with stride 2
        for i in range(num_layers):
            name = 'D_img_conv' + str(i)
            current = conv2d(
                input_map=current,
                num_output_channels=num_hidden_layer_channels[i],
                size_kernel=self.size_kernel,
                name=name
            )
            if enable_bn:
                name = 'D_img_bn' + str(i)
                current = tf.contrib.layers.batch_norm(
                    current,
                    scale=False,
                    is_training=is_training,
                    scope=name,
                    reuse=reuse_variables
                )
            current = tf.nn.relu(current)
            if i == 0:
                current = concat_label(current, y)
                current = concat_label(current, gender, int(self.num_categories / 2))
        # fully connected layers
        name = 'D_img_fc1'
        current = fc(
            input_vector=tf.reshape(current, [self.size_batch, -1]),
            num_output_length=1024,
            name=name
        )
        current = lrelu(current)
        name = 'D_img_fc2'
        current = fc(
            input_vector=current,
            num_output_length=1,
            name=name
        )
        # output
        return tf.nn.sigmoid(current), current

    def save_checkpoint(self):
        checkpoint_dir = os.path.join(self.save_dir, 'checkpoint')
        if not os.path.exists(checkpoint_dir):
            os.makedirs(checkpoint_dir)
        self.saver.save(
            sess=self.session,
            save_path=os.path.join(checkpoint_dir, 'model'),
            global_step=self.EG_global_step.eval()
        )

    def load_checkpoint(self, model_path=None):
        if model_path is None:
            print('\n\tLoading pre-trained model ...')
            checkpoint_dir = os.path.join(self.save_dir, 'checkpoint')
        else:
            print('\n\tLoading init model ...')
            checkpoint_dir = model_path
        checkpoints = tf.train.get_checkpoint_state(checkpoint_dir)
        if checkpoints and checkpoints.model_checkpoint_path:
            checkpoints_name = os.path.basename(checkpoints.model_checkpoint_path)
            try:
                self.saver.restore(self.session, os.path.join(checkpoint_dir, checkpoints_name))
                return True
            except:
                return False
        else:
            return False

    def sample(self, images, labels, gender, name):
        sample_dir = os.path.join(self.save_dir, 'samples')
        if not os.path.exists(sample_dir):
            os.makedirs(sample_dir)
        z, G = self.session.run(
            [self.z, self.G],
            feed_dict={
                self.input_image: images,
                self.age: labels,
                self.gender: gender
            }
        )
        size_frame = int(np.sqrt(self.size_batch))
        save_batch_images(
            batch_images=G,
            save_path=os.path.join(sample_dir, name),
            image_value_range=self.image_value_range,
            size_frame=[size_frame, size_frame]
        )

    def test(self, images, gender, name):
        test_dir = os.path.join(self.save_dir, 'test')
        if not os.path.exists(test_dir):
            os.makedirs(test_dir)
        images = images[:int(np.sqrt(self.size_batch)), :, :, :]
        gender = gender[:int(np.sqrt(self.size_batch)), :]
        size_sample = images.shape[0]
        labels = np.arange(size_sample)
        labels = np.repeat(labels, size_sample)
        query_labels = np.ones(
            shape=(size_sample ** 2, size_sample),
            dtype=np.float32
        ) * self.image_value_range[0]
        for i in range(query_labels.shape[0]):
            query_labels[i, labels[i]] = self.image_value_range[-1]
        query_images = np.tile(images, [self.num_categories, 1, 1, 1])
        query_gender = np.tile(gender, [self.num_categories, 1])
        z, G = self.session.run(
            [self.z, self.G],
            feed_dict={
                self.input_image: query_images,
                self.age: query_labels,
                self.gender: query_gender
            }
        )
        save_batch_images(
            batch_images=query_images,
            save_path=os.path.join(test_dir, 'input.png'),
            image_value_range=self.image_value_range,
            size_frame=[size_sample, size_sample]
        )
        save_batch_images(
            batch_images=G,
            save_path=os.path.join(test_dir, name),
            image_value_range=self.image_value_range,
            size_frame=[size_sample, size_sample]
        )

    def custom_test(self, testing_samples_dir):
        if not self.load_checkpoint():
            print('\tFAILED >_<!')
            exit(0)
        else:
            print('\tSUCCESS ^_^')
        num_samples = int(np.sqrt(self.size_batch))
        file_names = glob(testing_samples_dir)
        if len(file_names) < num_samples:
            print('The number of testing images must be no fewer than %d' % num_samples)
            exit(0)
        sample_files = file_names[0:num_samples]
        sample = [load_image(
            image_path=sample_file,
            image_size=self.size_image,
            image_value_range=self.image_value_range,
            is_gray=(self.num_input_channels == 1),
        ) for sample_file in sample_files]
        if self.num_input_channels == 1:
            images = np.array(sample).astype(np.float32)[:, :, :, None]
        else:
            images = np.array(sample).astype(np.float32)
        gender_male = np.ones(
            shape=(num_samples, 2),
            dtype=np.float32
        ) * self.image_value_range[0]
        gender_female = np.ones(
            shape=(num_samples, 2),
            dtype=np.float32
        ) * self.image_value_range[0]
        for i in range(gender_male.shape[0]):
            gender_male[i, 0] = self.image_value_range[-1]
            gender_female[i, 1] = self.image_value_range[-1]
        self.test(images, gender_male, 'test_as_male.png')
        self.test(images, gender_female, 'test_as_female.png')
        print('\n\tDone! Results are saved as %s\n' %
              os.path.join(self.save_dir, 'test', 'test_as_xxx.png'))
```

3. data: the dataset contains 23,708 photos in total.

4. If you are interested in the dataset, see: https://mbd.pub/o/bread/ZJ2UmJpp
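The age-binning and one-hot-style label encoding used in `FaceAging.train` can be exercised in isolation. The sketch below is a hypothetical standalone helper (`encode_labels` is not part of the project); it assumes UTKFace file names of the form `[age]_[gender]_...jpg` and uses the same ten age bins and the same `(-1, 1)` value range as the training code:

```python
import numpy as np

def encode_labels(file_name, num_categories=10, value_range=(-1, 1)):
    """Map a UTKFace file name like '25_0_0_20170116.jpg' to age/gender
    vectors filled with the low value, with the target entry set high."""
    base = file_name.split('/')[-1]
    age = int(base.split('_')[0])
    gender = int(base.split('_')[1])
    # same age bins as in FaceAging.train
    bins = [(0, 5), (6, 10), (11, 15), (16, 20), (21, 30),
            (31, 40), (41, 50), (51, 60), (61, 70), (71, 200)]
    category = next(i for i, (lo, hi) in enumerate(bins) if lo <= age <= hi)
    age_vec = np.full(num_categories, value_range[0], dtype=np.float32)
    age_vec[category] = value_range[-1]
    gender_vec = np.full(2, value_range[0], dtype=np.float32)
    gender_vec[gender] = value_range[-1]
    return age_vec, gender_vec
```

For example, an image named `25_0_0_20170116.jpg` falls into the fifth bin (ages 21-30), so index 4 of the age vector is set to 1 and every other entry stays at -1.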
http://www.hkea.cn/news/14426621/
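The `tv_loss` term in the graph penalizes differences between neighboring pixels so that generated faces come out smooth. A NumPy sketch of the same computation (a hypothetical helper, written only to illustrate the formula; `tf.nn.l2_loss(t)` is `0.5 * sum(t**2)`):

```python
import numpy as np

def tv_loss(batch, size_image):
    """NumPy equivalent of the graph's total-variation loss: half the sum
    of squared neighbor differences along y and x, each normalized by the
    image size, then averaged over the batch."""
    size_batch = batch.shape[0]
    dy = batch[:, 1:, :, :] - batch[:, :size_image - 1, :, :]
    dx = batch[:, :, 1:, :] - batch[:, :, :size_image - 1, :]
    l2 = lambda t: 0.5 * np.sum(t ** 2)  # matches tf.nn.l2_loss
    return (l2(dy) / size_image + l2(dx) / size_image) / size_batch
```

A perfectly uniform image has zero total variation, which is why minimizing this term pushes the generator away from high-frequency noise.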

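Why `main.py` needs `str2bool` instead of `type=bool`: `bool('False')` is `True` in Python, because any non-empty string is truthy, so boolean flags passed as strings would always parse as true. The parser behaviour can be checked in isolation (the flag name mirrors the one in `main.py`):

```python
import argparse

def str2bool(v):
    # accept common spellings of true/false on the command line
    if v.lower() in ('yes', 'true', 't', 'y', '1'):
        return True
    elif v.lower() in ('no', 'false', 'f', 'n', '0'):
        return False
    raise argparse.ArgumentTypeError('Boolean value expected.')

parser = argparse.ArgumentParser(description='CAAE')
parser.add_argument('--is_train', type=str2bool, default=True)
args = parser.parse_args(['--is_train', 'no'])  # args.is_train is False
```

With plain `type=bool`, the same invocation would yield `True`, silently putting the program into training mode.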