
Tutorial: Python layers.Input Method Code Examples

51自学网  2020-12-01 11:08:42
  Keras
This tutorial on Python layers.Input code examples is practical and hands-on; we hope it helps you.

This article collects typical usage examples of the keras.layers.Input method in Python. If you are wondering how exactly to use layers.Input, how to call it, or what real-world usage looks like, the hand-picked code samples below should help. You can also explore further usage examples from the module it belongs to, keras.layers.

A total of 26 code examples of the layers.Input method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code samples.
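Before diving into the examples, here is a minimal, self-contained sketch (not taken from any of the projects below; the layer sizes are arbitrary) showing how keras.layers.Input is used in the functional API to declare a symbolic input tensor and tie layers together into a Model:

# Minimal functional-API sketch; the layer sizes here are arbitrary.
from keras.layers import Input, Dense
from keras.models import Model

# Input declares a symbolic tensor; shape excludes the batch dimension,
# so this model expects batches of 32-dimensional vectors.
inputs = Input(shape=(32,))

# Layers are called on tensors and return new tensors.
x = Dense(64, activation='relu')(inputs)
outputs = Dense(10, activation='softmax')(x)

# Model ties the input tensor(s) to the output tensor(s).
model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()

Every example below follows this same pattern: create one or more Input tensors, chain layers on them, and hand the inputs and outputs to Model.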

Example 1: RNNModel

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def RNNModel(vocab_size, max_len, rnnConfig, model_type):
    embedding_size = rnnConfig['embedding_size']
    if model_type == 'inceptionv3':
        # InceptionV3 outputs a 2048 dimensional vector for each image, which we'll feed to the RNN model
        image_input = Input(shape=(2048,))
    elif model_type == 'vgg16':
        # VGG16 outputs a 4096 dimensional vector for each image, which we'll feed to the RNN model
        image_input = Input(shape=(4096,))
    image_model_1 = Dropout(rnnConfig['dropout'])(image_input)
    image_model = Dense(embedding_size, activation='relu')(image_model_1)

    caption_input = Input(shape=(max_len,))
    # mask_zero: We zero-pad inputs to the same length; the zero mask ignores those inputs, i.e. it is purely an efficiency measure.
    caption_model_1 = Embedding(vocab_size, embedding_size, mask_zero=True)(caption_input)
    caption_model_2 = Dropout(rnnConfig['dropout'])(caption_model_1)
    caption_model = LSTM(rnnConfig['LSTM_units'])(caption_model_2)

    # Merging the models and creating a softmax classifier
    final_model_1 = concatenate([image_model, caption_model])
    final_model_2 = Dense(rnnConfig['dense_units'], activation='relu')(final_model_1)
    final_model = Dense(vocab_size, activation='softmax')(final_model_2)

    model = Model(inputs=[image_input, caption_input], outputs=final_model)
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    return model

Developer: dabasajay, Project: Image-Caption-Generator, Lines: 27, Source: model.py


Example 2: create_model

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def create_model(self, input_dim):
    encoding_dim = 14
    input_layer = Input(shape=(input_dim,))

    encoder = Dense(encoding_dim, activation="tanh",
                    activity_regularizer=regularizers.l1(10e-5))(input_layer)
    encoder = Dense(encoding_dim // 2, activation="relu")(encoder)

    decoder = Dense(encoding_dim // 2, activation='tanh')(encoder)
    decoder = Dense(input_dim, activation='relu')(decoder)

    model = Model(inputs=input_layer, outputs=decoder)
    model.compile(optimizer='adam',
                  loss='mean_squared_error',
                  metrics=['accuracy'])
    return model

Developer: chen0040, Project: keras-anomaly-detection, Lines: 19, Source: feedforward.py


Example 3: weather_l2

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def weather_l2(hidden_nums=100, l2=0.01):
    input_img = Input(shape=(37,))
    hn = Dense(hidden_nums, activation='relu')(input_img)
    hn = Dense(hidden_nums, activation='relu',
               kernel_regularizer=regularizers.l2(l2))(hn)
    out_u = Dense(37, activation='sigmoid',
                  name='ae_part')(hn)
    out_sig = Dense(37, activation='linear',
                    name='pred_part')(hn)
    out_both = concatenate([out_u, out_sig], axis=1, name='concatenate')

    # weather_model = Model(input_img, outputs=[out_ae, out_pred])
    mve_model = Model(input_img, outputs=[out_both])
    mve_model.compile(optimizer='adam', loss=mve_loss, loss_weights=[1.])

    return mve_model

Developer: BruceBinBoxing, Project: Deep_Learning_Weather_Forecasting, Lines: 18, Source: weather_model.py


Example 4: CausalCNN

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def CausalCNN(n_filters, lr, decay, loss,
              seq_len, input_features,
              strides_len, kernel_size,
              dilation_rates):
    inputs = Input(shape=(seq_len, input_features), name='input_layer')
    x = inputs
    for dilation_rate in dilation_rates:
        x = Conv1D(filters=n_filters,
                   kernel_size=kernel_size,
                   padding='causal',
                   dilation_rate=dilation_rate,
                   activation='linear')(x)
        x = BatchNormalization()(x)
        x = Activation('relu')(x)

    # x = Dense(7, activation='relu', name='dense_layer')(x)
    outputs = Dense(3, activation='sigmoid', name='output_layer')(x)
    causalcnn = Model(inputs, outputs=[outputs])

    return causalcnn

Developer: BruceBinBoxing, Project: Deep_Learning_Weather_Forecasting, Lines: 23, Source: weather_model.py


Example 5: weather_ae

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def weather_ae(layers, lr, decay, loss,
               input_len, input_features):
    inputs = Input(shape=(input_len, input_features), name='input_layer')

    for i, hidden_nums in enumerate(layers):
        if i == 0:
            hn = Dense(hidden_nums, activation='relu')(inputs)
        else:
            hn = Dense(hidden_nums, activation='relu')(hn)

    outputs = Dense(3, activation='sigmoid', name='output_layer')(hn)
    weather_model = Model(inputs, outputs=[outputs])

    return weather_model

Developer: BruceBinBoxing, Project: Deep_Learning_Weather_Forecasting, Lines: 18, Source: weather_model.py


Example 6: __init__

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def __init__(self, model_path=None):
    if model_path is not None:
        self.model = self.load_model(model_path)
    else:
        # VGG16 last conv features
        inputs = Input(shape=(7, 7, 512))
        x = Convolution2D(128, 1, 1)(inputs)
        x = Flatten()(x)

        # Cls head
        h_cls = Dense(256, activation='relu', W_regularizer=l2(l=0.01))(x)
        h_cls = Dropout(p=0.5)(h_cls)
        cls_head = Dense(20, activation='softmax', name='cls')(h_cls)

        # Reg head
        h_reg = Dense(256, activation='relu', W_regularizer=l2(l=0.01))(x)
        h_reg = Dropout(p=0.5)(h_reg)
        reg_head = Dense(4, activation='linear', name='reg')(h_reg)

        # Joint model
        self.model = Model(input=inputs, output=[cls_head, reg_head])

Developer: wiseodd, Project: cnn-levelset, Lines: 23, Source: localizer.py
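Note that Example 6 uses the older Keras 1 argument names (Convolution2D, W_regularizer, Dropout(p=...), Model(input=..., output=...)). If you are on Keras 2, a rough sketch of the same two-head network with the renamed arguments might look like the following (illustrative only, not part of the original project):

# Keras 2-style sketch of the same two-head localizer (illustrative only).
from keras.layers import Input, Conv2D, Flatten, Dense, Dropout
from keras.models import Model
from keras.regularizers import l2

inputs = Input(shape=(7, 7, 512))   # VGG16 last conv features
x = Conv2D(128, (1, 1))(inputs)     # Convolution2D -> Conv2D
x = Flatten()(x)

# Classification head: W_regularizer -> kernel_regularizer, Dropout(p=...) -> Dropout(rate)
h_cls = Dense(256, activation='relu', kernel_regularizer=l2(0.01))(x)
h_cls = Dropout(0.5)(h_cls)
cls_head = Dense(20, activation='softmax', name='cls')(h_cls)

# Regression head
h_reg = Dense(256, activation='relu', kernel_regularizer=l2(0.01))(x)
h_reg = Dropout(0.5)(h_reg)
reg_head = Dense(4, activation='linear', name='reg')(h_reg)

# Model(input=..., output=...) -> Model(inputs=..., outputs=...)
model = Model(inputs=inputs, outputs=[cls_head, reg_head])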


Example 7: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_generator(self):
    model = Sequential()

    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(1, kernel_size=3, padding="same"))
    model.add(Activation("tanh"))

    model.summary()

    noise = Input(shape=(self.latent_dim,))
    img = model(noise)

    return Model(noise, img)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 26, Source: sgan.py


Example 8: build_discriminator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_discriminator(self):

    def d_layer(layer_input, filters, f_size=4, normalization=True):
        """Discriminator layer"""
        d = Conv2D(filters, kernel_size=f_size, strides=2, padding='same')(layer_input)
        d = LeakyReLU(alpha=0.2)(d)
        if normalization:
            d = InstanceNormalization()(d)
        return d

    img = Input(shape=self.img_shape)

    d1 = d_layer(img, self.df, normalization=False)
    d2 = d_layer(d1, self.df*2)
    d3 = d_layer(d2, self.df*4)
    d4 = d_layer(d3, self.df*8)

    validity = Conv2D(1, kernel_size=4, strides=1, padding='same')(d4)

    return Model(img, validity)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 22, Source: discogan.py


Example 9: build_discriminator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_discriminator(self):

    img = Input(shape=self.img_shape)

    model = Sequential()
    model.add(Conv2D(64, kernel_size=4, strides=2, padding='same', input_shape=self.img_shape))
    model.add(LeakyReLU(alpha=0.8))
    model.add(Conv2D(128, kernel_size=4, strides=2, padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    model.add(InstanceNormalization())
    model.add(Conv2D(256, kernel_size=4, strides=2, padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    model.add(InstanceNormalization())

    model.summary()

    img = Input(shape=self.img_shape)
    features = model(img)

    validity = Conv2D(1, kernel_size=4, strides=1, padding='same')(features)

    label = Flatten()(features)
    label = Dense(self.num_classes+1, activation="softmax")(label)

    return Model(img, [validity, label])

Developer: eriklindernoren, Project: Keras-GAN, Lines: 27, Source: ccgan.py


Example 10: build_encoder

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_encoder(self):
    model = Sequential()

    model.add(Flatten(input_shape=self.img_shape))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(self.latent_dim))

    model.summary()

    img = Input(shape=self.img_shape)
    z = model(img)

    return Model(img, z)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 20, Source: bigan.py


Example 11: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_generator(self):
    model = Sequential()

    model.add(Dense(512, input_dim=self.latent_dim))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(np.prod(self.img_shape), activation='tanh'))
    model.add(Reshape(self.img_shape))

    model.summary()

    z = Input(shape=(self.latent_dim,))
    gen_img = model(z)

    return Model(z, gen_img)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 20, Source: bigan.py


Example 12: build_discriminator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_discriminator(self):

    z = Input(shape=(self.latent_dim, ))
    img = Input(shape=self.img_shape)
    d_in = concatenate([z, Flatten()(img)])

    model = Dense(1024)(d_in)
    model = LeakyReLU(alpha=0.2)(model)
    model = Dropout(0.5)(model)
    model = Dense(1024)(model)
    model = LeakyReLU(alpha=0.2)(model)
    model = Dropout(0.5)(model)
    model = Dense(1024)(model)
    model = LeakyReLU(alpha=0.2)(model)
    model = Dropout(0.5)(model)
    validity = Dense(1, activation="sigmoid")(model)

    return Model([z, img], validity)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 20, Source: bigan.py


Example 13: build_vgg

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_vgg(self):
    """
    Builds a pre-trained VGG19 model that outputs image features extracted at the
    third block of the model
    """
    vgg = VGG19(weights="imagenet")
    # Set outputs to outputs of last conv. layer in block 3
    # See architecture at: https://github.com/keras-team/keras/blob/master/keras/applications/vgg19.py
    vgg.outputs = [vgg.layers[9].output]

    img = Input(shape=self.hr_shape)

    # Extract image features
    img_features = vgg(img)

    return Model(img, img_features)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 18, Source: srgan.py


Example 14: build_classifier

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_classifier(self):

    def clf_layer(layer_input, filters, f_size=4, normalization=True):
        """Classifier layer"""
        d = Conv2D(filters, kernel_size=f_size, strides=2, padding='same')(layer_input)
        d = LeakyReLU(alpha=0.2)(d)
        if normalization:
            d = InstanceNormalization()(d)
        return d

    img = Input(shape=self.img_shape)

    c1 = clf_layer(img, self.cf, normalization=False)
    c2 = clf_layer(c1, self.cf*2)
    c3 = clf_layer(c2, self.cf*4)
    c4 = clf_layer(c3, self.cf*8)
    c5 = clf_layer(c4, self.cf*8)

    class_pred = Dense(self.num_classes, activation='softmax')(Flatten()(c5))

    return Model(img, class_pred)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 23, Source: pixelda.py


Example 15: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_generator(self):
    model = Sequential()

    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(self.channels, kernel_size=3, padding='same'))
    model.add(Activation("tanh"))

    gen_input = Input(shape=(self.latent_dim,))
    img = model(gen_input)

    model.summary()

    return Model(gen_input, img)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 26, Source: infogan.py


Example 16: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_generator(self):
    model = Sequential()

    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=4, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=4, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(Conv2D(self.channels, kernel_size=4, padding="same"))
    model.add(Activation("tanh"))

    model.summary()

    noise = Input(shape=(self.latent_dim,))
    img = model(noise)

    return Model(noise, img)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 25, Source: wgan_gp.py


Example 17: build_discriminator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_discriminator(self):

    def d_layer(layer_input, filters, f_size=4, bn=True):
        """Discriminator layer"""
        d = Conv2D(filters, kernel_size=f_size, strides=2, padding='same')(layer_input)
        d = LeakyReLU(alpha=0.2)(d)
        if bn:
            d = BatchNormalization(momentum=0.8)(d)
        return d

    img_A = Input(shape=self.img_shape)
    img_B = Input(shape=self.img_shape)

    # Concatenate image and conditioning image by channels to produce input
    combined_imgs = Concatenate(axis=-1)([img_A, img_B])

    d1 = d_layer(combined_imgs, self.df, bn=False)
    d2 = d_layer(d1, self.df*2)
    d3 = d_layer(d2, self.df*4)
    d4 = d_layer(d3, self.df*8)

    validity = Conv2D(1, kernel_size=4, strides=1, padding='same')(d4)

    return Model([img_A, img_B], validity)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 26, Source: pix2pix.py


Example 18: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_generator(self):
    model = Sequential()

    model.add(Dense(256, input_dim=self.latent_dim))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(1024))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(np.prod(self.img_shape), activation='tanh'))
    model.add(Reshape(self.img_shape))

    model.summary()

    noise = Input(shape=(self.latent_dim,))
    img = model(noise)

    return Model(noise, img)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 24, Source: lsgan.py


Example 19: build_discriminator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_discriminator(self):
    model = Sequential()

    model.add(Flatten(input_shape=self.img_shape))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    # (!!!) No softmax
    model.add(Dense(1))
    model.summary()

    img = Input(shape=self.img_shape)
    validity = model(img)

    return Model(img, validity)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 19, Source: lsgan.py


Example 20: build_discriminators

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_discriminators(self):

    img1 = Input(shape=self.img_shape)
    img2 = Input(shape=self.img_shape)

    # Shared discriminator layers
    model = Sequential()
    model.add(Flatten(input_shape=self.img_shape))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))

    img1_embedding = model(img1)
    img2_embedding = model(img2)

    # Discriminator 1
    validity1 = Dense(1, activation='sigmoid')(img1_embedding)
    # Discriminator 2
    validity2 = Dense(1, activation='sigmoid')(img2_embedding)

    return Model(img1, validity1), Model(img2, validity2)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 24, Source: cogan.py


Example 21: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_generator(self):

    X = Input(shape=(self.img_dim,))

    model = Sequential()
    model.add(Dense(256, input_dim=self.img_dim))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dropout(0.4))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dropout(0.4))
    model.add(Dense(1024))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dropout(0.4))
    model.add(Dense(self.img_dim, activation='tanh'))

    X_translated = model(X)

    return Model(X, X_translated)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 24, Source: dualgan.py


Example 22: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_generator(self):
    model = Sequential()

    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(Conv2D(self.channels, kernel_size=3, padding="same"))
    model.add(Activation("tanh"))

    model.summary()

    noise = Input(shape=(self.latent_dim,))
    img = model(noise)

    return Model(noise, img)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 25, Source: dcgan.py


Example 23: build_discriminator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_discriminator(self):
    model = Sequential()

    model.add(Flatten(input_shape=self.img_shape))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(1, activation='sigmoid'))
    model.summary()

    img = Input(shape=self.img_shape)
    validity = model(img)

    return Model(img, validity)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 18, Source: gan.py


Example 24: build_encoder

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_encoder(self):
    # Encoder
    img = Input(shape=self.img_shape)

    h = Flatten()(img)
    h = Dense(512)(h)
    h = LeakyReLU(alpha=0.2)(h)
    h = Dense(512)(h)
    h = LeakyReLU(alpha=0.2)(h)
    mu = Dense(self.latent_dim)(h)
    log_var = Dense(self.latent_dim)(h)
    latent_repr = merge([mu, log_var],
            mode=lambda p: p[0] + K.random_normal(K.shape(p[0])) * K.exp(p[1] / 2),
            output_shape=lambda p: p[0])

    return Model(img, latent_repr)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 19, Source: aae.py
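Note that the merge(...) call in Example 24 is the legacy Keras 1 functional merge, which has been removed from recent Keras 2 releases. A sketch of the same reparameterization step using a Lambda layer is shown below; the image shape and latent dimension are hypothetical stand-ins for self.img_shape and self.latent_dim, chosen only to keep the snippet self-contained:

# Lambda-based reparameterization sketch (replaces the legacy merge call; illustrative only).
from keras.layers import Input, Dense, Flatten, Lambda
from keras.models import Model
from keras import backend as K

def sampling(args):
    """Sample z = mu + sigma * epsilon with epsilon ~ N(0, I)."""
    mu, log_var = args
    epsilon = K.random_normal(shape=K.shape(mu))
    return mu + K.exp(log_var / 2) * epsilon

# Hypothetical shapes standing in for self.img_shape and self.latent_dim.
img_shape, latent_dim = (28, 28, 1), 10

img = Input(shape=img_shape)
h = Dense(512, activation='relu')(Flatten()(img))
mu = Dense(latent_dim)(h)
log_var = Dense(latent_dim)(h)

# Lambda wraps the sampling function as a layer, replacing merge(..., mode=...).
latent_repr = Lambda(sampling)([mu, log_var])
encoder = Model(img, latent_repr)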


Example 25: build_decoder

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def build_decoder(self):
    model = Sequential()

    model.add(Dense(512, input_dim=self.latent_dim))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(np.prod(self.img_shape), activation='tanh'))
    model.add(Reshape(self.img_shape))

    model.summary()

    z = Input(shape=(self.latent_dim,))
    img = model(z)

    return Model(z, img)

Developer: eriklindernoren, Project: Keras-GAN, Lines: 19, Source: aae.py


Example 26: AlternativeRNNModel

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Input [as alias]
def AlternativeRNNModel(vocab_size, max_len, rnnConfig, model_type):
    embedding_size = rnnConfig['embedding_size']
    if model_type == 'inceptionv3':
        # InceptionV3 outputs a 2048 dimensional vector for each image, which we'll feed to the RNN model
        image_input = Input(shape=(2048,))
    elif model_type == 'vgg16':
        # VGG16 outputs a 4096 dimensional vector for each image, which we'll feed to the RNN model
        image_input = Input(shape=(4096,))
    image_model_1 = Dense(embedding_size, activation='relu')(image_input)
    image_model = RepeatVector(max_len)(image_model_1)

    caption_input = Input(shape=(max_len,))
    # mask_zero: We zero-pad inputs to the same length; the zero mask ignores those inputs, i.e. it is purely an efficiency measure.
    caption_model_1 = Embedding(vocab_size, embedding_size, mask_zero=True)(caption_input)
    # Since we are going to predict the next word using the previous words
    # (the number of previous words changes with every iteration over the caption), we have to set return_sequences=True.
    caption_model_2 = LSTM(rnnConfig['LSTM_units'], return_sequences=True)(caption_model_1)
    # caption_model = TimeDistributed(Dense(embedding_size, activation='relu'))(caption_model_2)
    caption_model = TimeDistributed(Dense(embedding_size))(caption_model_2)

    # Merging the models and creating a softmax classifier
    final_model_1 = concatenate([image_model, caption_model])
    # final_model_2 = LSTM(rnnConfig['LSTM_units'], return_sequences=False)(final_model_1)
    final_model_2 = Bidirectional(LSTM(rnnConfig['LSTM_units'], return_sequences=False))(final_model_1)
    # final_model_3 = Dense(rnnConfig['dense_units'], activation='relu')(final_model_2)
    # final_model = Dense(vocab_size, activation='softmax')(final_model_3)
    final_model = Dense(vocab_size, activation='softmax')(final_model_2)

    model = Model(inputs=[image_input, caption_input], outputs=final_model)
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    # model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
    return model

Developer: dabasajay, Project: Image-Caption-Generator, Lines: 34, Source: model.py

