
Tutorial: Python layers.Activation code examples

51自学网 2020-12-01 11:08:43
  Keras
This tutorial on Python layers.Activation code examples is quite practical; we hope it helps you.

This article collects typical usage examples of Python's keras.layers.Activation. If you have been wondering what exactly layers.Activation does and how to use it, the hand-picked code examples below may help. You can also explore further usage examples from its containing module, keras.layers.

Below are 30 code examples of layers.Activation, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps our system recommend better Python code examples.
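Before the examples, a note on what `Activation` actually does: it wraps an element-wise function and applies it to every value of its input tensor, nothing more. As a framework-free sketch (plain Python with `math`, not Keras itself; the helper names below are our own, chosen for illustration):

```python
import math

# Element-wise functions corresponding to Activation('relu') and
# Activation('sigmoid'); math.tanh covers Activation('tanh').
def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def apply_activation(fn, values):
    """Mimic what an Activation layer does: map fn over every element."""
    return [fn(v) for v in values]

print(apply_activation(relu, [-2.0, 0.0, 3.0]))  # [0.0, 0.0, 3.0]
print(apply_activation(sigmoid, [0.0]))          # [0.5]
print(apply_activation(math.tanh, [0.0]))        # [0.0]
```

In Keras the same idea appears either as a standalone layer, `Activation('relu')`, or as the `activation=` argument of layers such as `Dense` and `Conv2D`; the examples below use both forms.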

Example 1: _get_logits_name

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def _get_logits_name(self):
    """
    Looks for the name of the layer producing the logits.
    :return: name of layer producing the logits
    """
    softmax_name = self._get_softmax_name()
    softmax_layer = self.model.get_layer(softmax_name)

    if not isinstance(softmax_layer, Activation):
        # In this case, the activation is part of another layer
        return softmax_name

    if hasattr(softmax_layer, 'inbound_nodes'):
        warnings.warn(
            "Please update your version to keras >= 2.1.3; "
            "support for earlier keras versions will be dropped on "
            "2018-07-22")
        node = softmax_layer.inbound_nodes[0]
    else:
        node = softmax_layer._inbound_nodes[0]

    logits_name = node.inbound_layers[0].name

    return logits_name

Author: StephanZheng, Project: neural-fingerprinting, Lines: 26, Source file: utils_keras.py


Example 2: CausalCNN

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def CausalCNN(n_filters, lr, decay, loss,
              seq_len, input_features,
              strides_len, kernel_size,
              dilation_rates):

    inputs = Input(shape=(seq_len, input_features), name='input_layer')
    x = inputs
    for dilation_rate in dilation_rates:
        x = Conv1D(filters=n_filters,
                   kernel_size=kernel_size,
                   padding='causal',
                   dilation_rate=dilation_rate,
                   activation='linear')(x)
        x = BatchNormalization()(x)
        x = Activation('relu')(x)

    #x = Dense(7, activation='relu', name='dense_layer')(x)
    outputs = Dense(3, activation='sigmoid', name='output_layer')(x)

    causalcnn = Model(inputs, outputs=[outputs])

    return causalcnn

Author: BruceBinBoxing, Project: Deep_Learning_Weather_Forecasting, Lines: 23, Source file: weather_model.py


Example 3: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def build_generator(self):

    model = Sequential()

    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(1, kernel_size=3, padding="same"))
    model.add(Activation("tanh"))

    model.summary()

    noise = Input(shape=(self.latent_dim,))
    img = model(noise)

    return Model(noise, img)

Author: eriklindernoren, Project: Keras-GAN, Lines: 26, Source file: sgan.py


Example 4: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def build_generator(self):

    model = Sequential()

    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(self.channels, kernel_size=3, padding='same'))
    model.add(Activation("tanh"))

    gen_input = Input(shape=(self.latent_dim,))
    img = model(gen_input)

    model.summary()

    return Model(gen_input, img)

Author: eriklindernoren, Project: Keras-GAN, Lines: 26, Source file: infogan.py


Example 5: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def build_generator(self):

    model = Sequential()

    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=4, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=4, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(Conv2D(self.channels, kernel_size=4, padding="same"))
    model.add(Activation("tanh"))

    model.summary()

    noise = Input(shape=(self.latent_dim,))
    img = model(noise)

    return Model(noise, img)

Author: eriklindernoren, Project: Keras-GAN, Lines: 25, Source file: wgan_gp.py


Example 6: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def build_generator(self):

    model = Sequential()

    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(Conv2D(self.channels, kernel_size=3, padding="same"))
    model.add(Activation("tanh"))

    model.summary()

    noise = Input(shape=(self.latent_dim,))
    img = model(noise)

    return Model(noise, img)

Author: eriklindernoren, Project: Keras-GAN, Lines: 25, Source file: dcgan.py


Example 7: get_model_41

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def get_model_41(params):
    embedding_weights = pickle.load(open("../data/datasets/train_data/embedding_weights_w2v-google_MSD-AG.pk", "rb"))
    # main sequential model
    model = Sequential()
    model.add(Embedding(len(embedding_weights[0]), params['embedding_dim'], input_length=params['sequence_length'],
                        weights=embedding_weights))
    #model.add(Dropout(params['dropout_prob'][0], input_shape=(params['sequence_length'], params['embedding_dim'])))
    model.add(LSTM(2048))
    #model.add(Dropout(params['dropout_prob'][1]))
    model.add(Dense(output_dim=params["n_out"], init="uniform"))
    model.add(Activation(params['final_activation']))
    logging.debug("Output CNN: %s" % str(model.output_shape))

    if params['final_activation'] == 'linear':
        model.add(Lambda(lambda x: K.l2_normalize(x, axis=1)))

    return model

# CRNN Arch for audio

Author: sergiooramas, Project: tartarus, Lines: 22, Source file: models.py


Example 8: g_block

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def g_block(inp, fil, u = True):

    if u:
        out = UpSampling2D(interpolation = 'bilinear')(inp)
    else:
        out = Activation('linear')(inp)

    skip = Conv2D(fil, 1, padding = 'same', kernel_initializer = 'he_normal')(out)

    out = Conv2D(filters = fil, kernel_size = 3, padding = 'same', kernel_initializer = 'he_normal')(out)
    out = LeakyReLU(0.2)(out)
    out = Conv2D(filters = fil, kernel_size = 3, padding = 'same', kernel_initializer = 'he_normal')(out)
    out = LeakyReLU(0.2)(out)
    out = Conv2D(fil, 1, padding = 'same', kernel_initializer = 'he_normal')(out)

    out = add([out, skip])
    out = LeakyReLU(0.2)(out)

    return out

Author: manicman1999, Project: Keras-BiGAN, Lines: 23, Source file: bigan.py


Example 9: nonlinearity

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def nonlinearity(h_nonlin_name):

    def compile_fn(di, dh):

        def fn(di):
            nonlin_name = dh['nonlin_name']
            if nonlin_name == 'relu':
                Out = Activation('relu')(di['in'])
            elif nonlin_name == 'tanh':
                Out = Activation('tanh')(di['in'])
            elif nonlin_name == 'elu':
                Out = Activation('elu')(di['in'])
            else:
                raise ValueError
            return {"out": Out}

        return fn

    return hke.siso_keras_module('Nonlinearity', compile_fn,
                                 {'nonlin_name': h_nonlin_name})

Author: negrinho, Project: deep_architect, Lines: 22, Source file: main_keras.py


Example 10: evaluate

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def evaluate(self, inputs, outputs):
    keras.backend.clear_session()

    X = Input(self.X_train[0].shape)
    co.forward({inputs['in']: X})
    logits = outputs['out'].val
    probs = Activation('softmax')(logits)

    model = Model(inputs=[inputs['in'].val], outputs=[probs])
    model.compile(optimizer=Adam(lr=self.learning_rate),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()
    history = model.fit(self.X_train,
                        self.y_train,
                        batch_size=self.batch_size,
                        epochs=self.num_training_epochs,
                        validation_data=(self.X_val, self.y_val))
    results = {'validation_accuracy': history.history['val_accuracy'][-1]}
    return results

Author: negrinho, Project: deep_architect, Lines: 22, Source file: main_keras.py


Example 11: modelA

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def modelA():
    model = Sequential()
    model.add(Conv2D(64, (5, 5),
                     padding='valid'))
    model.add(Activation('relu'))

    model.add(Conv2D(64, (5, 5)))
    model.add(Activation('relu'))

    model.add(Dropout(0.25))

    model.add(Flatten())
    model.add(Dense(128))
    model.add(Activation('relu'))

    model.add(Dropout(0.5))
    model.add(Dense(FLAGS.NUM_CLASSES))
    return model

Author: sunblaze-ucb, Project: blackbox-attacks, Lines: 20, Source file: mnist.py


Example 12: modelB

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def modelB():
    model = Sequential()
    model.add(Dropout(0.2, input_shape=(FLAGS.IMAGE_ROWS,
                                        FLAGS.IMAGE_COLS,
                                        FLAGS.NUM_CHANNELS)))
    model.add(Convolution2D(64, 8, 8,
                            subsample=(2, 2),
                            border_mode='same'))
    model.add(Activation('relu'))

    model.add(Convolution2D(128, 6, 6,
                            subsample=(2, 2),
                            border_mode='valid'))
    model.add(Activation('relu'))

    model.add(Convolution2D(128, 5, 5,
                            subsample=(1, 1)))
    model.add(Activation('relu'))

    model.add(Dropout(0.5))

    model.add(Flatten())
    model.add(Dense(FLAGS.NUM_CLASSES))
    return model

Author: sunblaze-ucb, Project: blackbox-attacks, Lines: 26, Source file: mnist.py


Example 13: modelC

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def modelC():
    model = Sequential()
    model.add(Convolution2D(128, 3, 3,
                            border_mode='valid',
                            input_shape=(FLAGS.IMAGE_ROWS,
                                         FLAGS.IMAGE_COLS,
                                         FLAGS.NUM_CHANNELS)))
    model.add(Activation('relu'))

    model.add(Convolution2D(64, 3, 3))
    model.add(Activation('relu'))

    model.add(Dropout(0.25))

    model.add(Flatten())
    model.add(Dense(128))
    model.add(Activation('relu'))

    model.add(Dropout(0.5))
    model.add(Dense(FLAGS.NUM_CLASSES))
    return model

Author: sunblaze-ucb, Project: blackbox-attacks, Lines: 23, Source file: mnist.py


Example 14: modelF

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def modelF():
    model = Sequential()
    model.add(Convolution2D(32, 3, 3,
                            border_mode='valid',
                            input_shape=(FLAGS.IMAGE_ROWS,
                                         FLAGS.IMAGE_COLS,
                                         FLAGS.NUM_CHANNELS)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Convolution2D(64, 3, 3))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Flatten())
    model.add(Dense(1024))
    model.add(Activation('relu'))

    model.add(Dense(FLAGS.NUM_CLASSES))
    return model

Author: sunblaze-ucb, Project: blackbox-attacks, Lines: 26, Source file: mnist.py


Example 15: test_keras_transformer_single_dim

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def test_keras_transformer_single_dim(self):
    """
    Test that KerasTransformer correctly handles single-dimensional input data.
    """
    # Construct a model for simple binary classification (with a single hidden layer)
    model = Sequential()
    input_shape = [10]
    model.add(Dense(units=10, input_shape=input_shape,
                    bias_initializer=self._getKerasModelWeightInitializer(),
                    kernel_initializer=self._getKerasModelWeightInitializer()))
    model.add(Activation('relu'))
    model.add(Dense(units=1, bias_initializer=self._getKerasModelWeightInitializer(),
                    kernel_initializer=self._getKerasModelWeightInitializer()))
    model.add(Activation('sigmoid'))
    # Compare KerasTransformer output to raw Keras model output
    self._test_keras_transformer_helper(model, model_filename="keras_transformer_single_dim")

Author: databricks, Project: spark-deep-learning, Lines: 18, Source file: keras_transformer_test.py


Example 16: ann_model

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def ann_model(input_shape):
    inp = Input(shape=input_shape, name='mfcc_in')
    model = inp

    model = Conv1D(filters=12, kernel_size=(3), activation='relu')(model)
    model = Conv1D(filters=12, kernel_size=(3), activation='relu')(model)
    model = Flatten()(model)

    model = Dense(56)(model)
    model = Activation('relu')(model)
    model = BatchNormalization()(model)
    model = Dropout(0.2)(model)

    model = Dense(28)(model)
    model = Activation('relu')(model)
    model = BatchNormalization()(model)

    model = Dense(1)(model)
    model = Activation('sigmoid')(model)

    model = Model(inp, model)
    return model

Author: tympanix, Project: subsync, Lines: 24, Source file: train_ann.py


Example 17: _initial_conv_block_inception

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def _initial_conv_block_inception(input, initial_conv_filters, weight_decay=5e-4):
    ''' Adds an initial conv block, with batch norm and relu for the DPN
    Args:
        input: input tensor
        initial_conv_filters: number of filters for initial conv block
        weight_decay: weight decay factor
    Returns: a keras tensor
    '''
    channel_axis = 1 if K.image_data_format() == 'channels_first' else -1

    x = Conv2D(initial_conv_filters, (7, 7), padding='same', use_bias=False, kernel_initializer='he_normal',
               kernel_regularizer=l2(weight_decay), strides=(2, 2))(input)
    x = BatchNormalization(axis=channel_axis)(x)
    x = Activation('relu')(x)

    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

    return x

Author: titu1994, Project: Keras-DualPathNetworks, Lines: 20, Source file: dual_path_network.py


Example 18: _bn_relu_conv_block

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def _bn_relu_conv_block(input, filters, kernel=(3, 3), stride=(1, 1), weight_decay=5e-4):
    ''' Adds a Batchnorm-Relu-Conv block for DPN
    Args:
        input: input tensor
        filters: number of output filters
        kernel: convolution kernel size
        stride: stride of convolution
    Returns: a keras tensor
    '''
    channel_axis = 1 if K.image_data_format() == 'channels_first' else -1

    x = Conv2D(filters, kernel, padding='same', use_bias=False, kernel_initializer='he_normal',
               kernel_regularizer=l2(weight_decay), strides=stride)(input)
    x = BatchNormalization(axis=channel_axis)(x)
    x = Activation('relu')(x)

    return x

Author: titu1994, Project: Keras-DualPathNetworks, Lines: 18, Source file: dual_path_network.py


Example 19: weather_conv1D

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def weather_conv1D(layers, lr, decay, loss,
                   input_len, input_features,
                   strides_len, kernel_size):

    inputs = Input(shape=(input_len, input_features), name='input_layer')
    for i, hidden_nums in enumerate(layers):
        if i == 0:
            #inputs = BatchNormalization(name='BN_input')(inputs)
            hn = Conv1D(hidden_nums, kernel_size=kernel_size, strides=strides_len,
                        data_format='channels_last',
                        padding='same', activation='linear')(inputs)
            hn = BatchNormalization(name='BN_{}'.format(i))(hn)
            hn = Activation('relu')(hn)
        elif i < len(layers) - 1:
            hn = Conv1D(hidden_nums, kernel_size=kernel_size, strides=strides_len,
                        data_format='channels_last',
                        padding='same', activation='linear')(hn)
            hn = BatchNormalization(name='BN_{}'.format(i))(hn)
            hn = Activation('relu')(hn)
        else:
            hn = Conv1D(hidden_nums, kernel_size=kernel_size, strides=strides_len,
                        data_format='channels_last',
                        padding='same', activation='linear')(hn)
            hn = BatchNormalization(name='BN_{}'.format(i))(hn)

    outputs = Dense(80, activation='relu', name='dense_layer')(hn)
    outputs = Dense(3, activation='tanh', name='output_layer')(outputs)

    weather_model = Model(inputs, outputs=[outputs])

    return weather_model

Author: BruceBinBoxing, Project: Deep_Learning_Weather_Forecasting, Lines: 33, Source file: weather_model.py


Example 20: weather_fnn

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def weather_fnn(layers, lr,
                decay, loss, seq_len,
                input_features, output_features):

    ori_inputs = Input(shape=(seq_len, input_features), name='input_layer')
    #print(seq_len*input_features)
    conv_ = Conv1D(11, kernel_size=13, strides=1,
                   data_format='channels_last',
                   padding='valid', activation='linear')(ori_inputs)
    conv_ = BatchNormalization(name='BN_conv')(conv_)
    conv_ = Activation('relu')(conv_)
    conv_ = Conv1D(5, kernel_size=7, strides=1,
                   data_format='channels_last',
                   padding='valid', activation='linear')(conv_)
    conv_ = BatchNormalization(name='BN_conv2')(conv_)
    conv_ = Activation('relu')(conv_)

    inputs = Reshape((-1,))(conv_)

    for i, hidden_nums in enumerate(layers):
        if i == 0:
            hn = Dense(hidden_nums, activation='linear')(inputs)
            hn = BatchNormalization(name='BN_{}'.format(i))(hn)
            hn = Activation('relu')(hn)
        else:
            hn = Dense(hidden_nums, activation='linear')(hn)
            hn = BatchNormalization(name='BN_{}'.format(i))(hn)
            hn = Activation('relu')(hn)
            #hn = Dropout(0.1)(hn)
    #print(seq_len, output_features)
    #print(hn)
    outputs = Dense(seq_len*output_features, activation='sigmoid', name='output_layer')(hn) # 37*3
    outputs = Reshape((seq_len, output_features))(outputs)

    weather_fnn = Model(ori_inputs, outputs=[outputs])

    return weather_fnn

Author: BruceBinBoxing, Project: Deep_Learning_Weather_Forecasting, Lines: 39, Source file: weather_model.py


Example 21: ss_bt

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def ss_bt(self, x, dilation, strides=(1, 1), padding='same'):
    x1, x2 = self.channel_split(x)
    filters = (int(x.shape[-1]) // self.groups)
    x1 = layers.Conv2D(filters, kernel_size=(3, 1), strides=strides, padding=padding)(x1)
    x1 = layers.Activation('relu')(x1)
    x1 = layers.Conv2D(filters, kernel_size=(1, 3), strides=strides, padding=padding)(x1)
    x1 = layers.BatchNormalization()(x1)
    x1 = layers.Activation('relu')(x1)
    x1 = layers.Conv2D(filters, kernel_size=(3, 1), strides=strides, padding=padding, dilation_rate=(dilation, 1))(x1)
    x1 = layers.Activation('relu')(x1)
    x1 = layers.Conv2D(filters, kernel_size=(1, 3), strides=strides, padding=padding, dilation_rate=(1, dilation))(x1)
    x1 = layers.BatchNormalization()(x1)
    x1 = layers.Activation('relu')(x1)

    x2 = layers.Conv2D(filters, kernel_size=(1, 3), strides=strides, padding=padding)(x2)
    x2 = layers.Activation('relu')(x2)
    x2 = layers.Conv2D(filters, kernel_size=(3, 1), strides=strides, padding=padding)(x2)
    x2 = layers.BatchNormalization()(x2)
    x2 = layers.Activation('relu')(x2)
    x2 = layers.Conv2D(filters, kernel_size=(1, 3), strides=strides, padding=padding, dilation_rate=(1, dilation))(x2)
    x2 = layers.Activation('relu')(x2)
    x2 = layers.Conv2D(filters, kernel_size=(3, 1), strides=strides, padding=padding, dilation_rate=(dilation, 1))(x2)
    x2 = layers.BatchNormalization()(x2)
    x2 = layers.Activation('relu')(x2)

    x_concat = layers.concatenate([x1, x2], axis=-1)
    x_add = layers.add([x, x_concat])
    output = self.channel_shuffle(x_add)
    return output

Author: JACKYLUO1991, Project: Face-skin-hair-segmentaiton-and-skin-color-evaluation, Lines: 34, Source file: lednet.py


Example 22: down_sample

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def down_sample(self, x, filters):
    x_filters = int(x.shape[-1])
    x_conv = layers.Conv2D(filters - x_filters, kernel_size=3, strides=(2, 2), padding='same')(x)
    x_pool = layers.MaxPool2D()(x)
    x = layers.concatenate([x_conv, x_pool], axis=-1)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    return x

Author: JACKYLUO1991, Project: Face-skin-hair-segmentaiton-and-skin-color-evaluation, Lines: 10, Source file: lednet.py


Example 23: apn_module

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def apn_module(self, x):

    def right(x):
        x = layers.AveragePooling2D()(x)
        x = layers.Conv2D(self.classes, kernel_size=1, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.UpSampling2D(interpolation='bilinear')(x)
        return x

    def conv(x, filters, kernel_size, stride):
        x = layers.Conv2D(filters, kernel_size=kernel_size, strides=(stride, stride), padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        return x

    x_7 = conv(x, int(x.shape[-1]), 7, stride=2)
    x_5 = conv(x_7, int(x.shape[-1]), 5, stride=2)
    x_3 = conv(x_5, int(x.shape[-1]), 3, stride=2)

    x_3_1 = conv(x_3, self.classes, 3, stride=1)
    x_3_1_up = layers.UpSampling2D(interpolation='bilinear')(x_3_1)
    x_5_1 = conv(x_5, self.classes, 5, stride=1)
    x_3_5 = layers.add([x_5_1, x_3_1_up])
    x_3_5_up = layers.UpSampling2D(interpolation='bilinear')(x_3_5)
    x_7_1 = conv(x_7, self.classes, 3, stride=1)
    x_3_5_7 = layers.add([x_7_1, x_3_5_up])
    x_3_5_7_up = layers.UpSampling2D(interpolation='bilinear')(x_3_5_7)

    x_middle = conv(x, self.classes, 1, stride=1)
    x_middle = layers.multiply([x_3_5_7_up, x_middle])

    x_right = right(x)
    x_middle = layers.add([x_middle, x_right])

    return x_middle

Author: JACKYLUO1991, Project: Face-skin-hair-segmentaiton-and-skin-color-evaluation, Lines: 37, Source file: lednet.py


Example 24: decoder

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def decoder(self, x):
    x = self.apn_module(x)
    x = layers.UpSampling2D(size=8, interpolation='bilinear')(x)
    x = layers.Conv2D(self.classes, kernel_size=3, padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('softmax')(x)
    return x

Author: JACKYLUO1991, Project: Face-skin-hair-segmentaiton-and-skin-color-evaluation, Lines: 9, Source file: lednet.py


Example 25: conv2d_bn

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def conv2d_bn(x,
              filters,
              kernel_size,
              strides=1,
              padding='same',
              activation='relu',
              use_bias=False,
              name=None):
    """Utility function to apply conv + BN.

    # Arguments
        x: input tensor.
        filters: filters in `Conv2D`.
        kernel_size: kernel size as in `Conv2D`.
        padding: padding mode in `Conv2D`.
        activation: activation in `Conv2D`.
        strides: strides in `Conv2D`.
        name: name of the ops; will become `name + '_ac'` for the activation
            and `name + '_bn'` for the batch norm layer.

    # Returns
        Output tensor after applying `Conv2D` and `BatchNormalization`.
    """
    x = Conv2D(filters,
               kernel_size,
               strides=strides,
               padding=padding,
               use_bias=use_bias,
               name=name)(x)
    if not use_bias:
        bn_axis = 1 if K.image_data_format() == 'channels_first' else 3
        bn_name = None if name is None else name + '_bn'
        x = BatchNormalization(axis=bn_axis, scale=False, name=bn_name)(x)
    if activation is not None:
        ac_name = None if name is None else name + '_ac'
        x = Activation(activation, name=ac_name)(x)
    return x

Author: killthekitten, Project: kaggle-carvana-2017, Lines: 39, Source file: inception_resnet_v2.py


Example 26: identity_block

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def identity_block(input_tensor, kernel_size, filters, stage, block):
    """The identity block is the block that has no conv layer at shortcut.

    # Arguments
        input_tensor: input tensor
        kernel_size: default 3, the kernel size of middle conv layer at main path
        filters: list of integers, the filters of 3 conv layer at main path
        stage: integer, current stage label, used for generating layer names
        block: 'a','b'..., current block label, used for generating layer names

    # Returns
        Output tensor for the block.
    """
    filters1, filters2, filters3 = filters
    if K.image_data_format() == 'channels_last':
        bn_axis = 3
    else:
        bn_axis = 1
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = Conv2D(filters1, (1, 1), name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters2, kernel_size,
               padding='same', name=conv_name_base + '2b')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters3, (1, 1), name=conv_name_base + '2c')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)

    x = layers.add([x, input_tensor])
    x = Activation('relu')(x)
    return x

Author: killthekitten, Project: kaggle-carvana-2017, Lines: 38, Source file: resnet50_fixed.py


Example 27: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def build_generator(self):

    model = Sequential()

    # Encoder
    model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=self.img_shape, padding="same"))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(512, kernel_size=1, strides=2, padding="same"))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.5))

    # Decoder
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(self.channels, kernel_size=3, padding="same"))
    model.add(Activation('tanh'))

    model.summary()

    masked_img = Input(shape=self.img_shape)
    gen_missing = model(masked_img)

    return Model(masked_img, gen_missing)

Author: eriklindernoren, Project: Keras-GAN, Lines: 40, Source file: context_encoder.py


Example 28: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def build_generator(self):
    """Resnet Generator"""

    def residual_block(layer_input):
        """Residual block described in paper"""
        d = Conv2D(64, kernel_size=3, strides=1, padding='same')(layer_input)
        d = BatchNormalization(momentum=0.8)(d)
        d = Activation('relu')(d)
        d = Conv2D(64, kernel_size=3, strides=1, padding='same')(d)
        d = BatchNormalization(momentum=0.8)(d)
        d = Add()([d, layer_input])
        return d

    # Image input
    img = Input(shape=self.img_shape)

    l1 = Conv2D(64, kernel_size=3, padding='same', activation='relu')(img)

    # Propagate signal through residual blocks
    r = residual_block(l1)
    for _ in range(self.residual_blocks - 1):
        r = residual_block(r)

    output_img = Conv2D(self.channels, kernel_size=3, padding='same', activation='tanh')(r)

    return Model(img, output_img)

Author: eriklindernoren, Project: Keras-GAN, Lines: 28, Source file: pixelda.py


Example 29: build_generator

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def build_generator(self):

    model = Sequential()

    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(self.channels, kernel_size=3, padding='same'))
    model.add(Activation("tanh"))

    model.summary()

    noise = Input(shape=(self.latent_dim,))
    label = Input(shape=(1,), dtype='int32')
    label_embedding = Flatten()(Embedding(self.num_classes, self.latent_dim)(label))

    model_input = multiply([noise, label_embedding])
    img = model(model_input)

    return Model([noise, label], img)

Author: eriklindernoren, Project: Keras-GAN, Lines: 30, Source file: acgan.py


Example 30: creat_model

# Required import: from keras import layers [as alias]
# Or: from keras.layers import Activation [as alias]
def creat_model(input_shape, num_class):

    init = initializers.Orthogonal(gain=args.norm)
    sequence_input = Input(shape=input_shape)
    mask = Masking(mask_value=0.)(sequence_input)
    if args.aug:
        mask = augmentaion()(mask)
    X = Noise(0.075)(mask)
    if args.model[0:2] == 'VA':
        # VA
        trans = LSTM(args.nhid, recurrent_activation='sigmoid', return_sequences=True, implementation=2, recurrent_initializer=init)(X)
        trans = Dropout(0.5)(trans)
        trans = TimeDistributed(Dense(3, kernel_initializer='zeros'))(trans)

        rot = LSTM(args.nhid, recurrent_activation='sigmoid', return_sequences=True, implementation=2, recurrent_initializer=init)(X)
        rot = Dropout(0.5)(rot)
        rot = TimeDistributed(Dense(3, kernel_initializer='zeros'))(rot)
        transform = Concatenate()([rot, trans])
        X = VA()([mask, transform])

    X = LSTM(args.nhid, recurrent_activation='sigmoid', return_sequences=True, implementation=2, recurrent_initializer=init)(X)
    X = Dropout(0.5)(X)
    X = LSTM(args.nhid, recurrent_activation='sigmoid', return_sequences=True, implementation=2, recurrent_initializer=init)(X)
    X = Dropout(0.5)(X)
    X = LSTM(args.nhid, recurrent_activation='sigmoid', return_sequences=True, implementation=2, recurrent_initializer=init)(X)
    X = Dropout(0.5)(X)
    X = TimeDistributed(Dense(num_class))(X)
    X = MeanOverTime()(X)
    X = Activation('softmax')(X)

    model = Model(sequence_input, X)
    return model

Author: microsoft, Project: View-Adaptive-Neural-Networks-for-Skeleton-based-Human-Action-Recognition, Lines: 33, Source file: va-rnn.py

