
Tutorial: Python layers.Lambda method code examples

51自学网 2020-12-01 11:08:46
  Keras
This tutorial on Python layers.Lambda code examples is meant to be practical; we hope it helps you.

This article collects typical usage examples of the keras.layers.Lambda method in Python. If you are wondering how to use layers.Lambda in practice, or are looking for working examples of it, the curated code samples below should help. You can also explore further usage examples from its containing module, keras.layers.

A total of 26 code examples of the layers.Lambda method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code samples.
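Before the collected examples, here is a minimal sketch of what keras.layers.Lambda does: it wraps an arbitrary tensor function as a layer with no trainable weights. The input size and layer name below are arbitrary illustration values, not taken from any of the projects cited later.

# Minimal sketch (assumed setup): wrap a backend expression as a Keras layer.
from keras import backend as K
from keras.layers import Input, Lambda
from keras.models import Model

inp = Input(shape=(10,))                                  # illustrative input size
squared = Lambda(lambda t: K.square(t), name='square')(inp)  # element-wise square as a layer
model = Model(inputs=inp, outputs=squared)
model.summary()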

Example 1: crop

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def crop(dimension, start, end):
    # Crops (or slices) a Tensor on a given dimension from start to end
    # Example: to crop tensor x[:, :, 5:10],
    # call crop(2, 5, 10), i.e. crop on dimension 2 (0-indexed)
    def func(x):
        if dimension == 0:
            return x[start: end]
        if dimension == 1:
            return x[:, start: end]
        if dimension == 2:
            return x[:, :, start: end]
        if dimension == 3:
            return x[:, :, :, start: end]
        if dimension == 4:
            return x[:, :, :, :, start: end]
    return Lambda(func)
Developer: BruceBinBoxing, Project: Deep_Learning_Weather_Forecasting, Lines: 18, Source: weather_model.py
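For reference, a hedged usage sketch of the crop helper above; the input shape is made up for illustration and is not from the cited project.

# Hypothetical usage of crop(); shapes are illustrative only.
from keras.layers import Input

x = Input(shape=(32, 64, 3))   # Keras adds the batch axis, so x has shape (None, 32, 64, 3)
y = crop(2, 5, 10)(x)          # equivalent to x[:, :, 5:10]; y has shape (None, 32, 5, 3)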


Example 2: get_model_41

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def get_model_41(params):
    embedding_weights = pickle.load(open("../data/datasets/train_data/embedding_weights_w2v-google_MSD-AG.pk", "rb"))
    # main sequential model
    model = Sequential()
    model.add(Embedding(len(embedding_weights[0]), params['embedding_dim'], input_length=params['sequence_length'],
                        weights=embedding_weights))
    #model.add(Dropout(params['dropout_prob'][0], input_shape=(params['sequence_length'], params['embedding_dim'])))
    model.add(LSTM(2048))
    #model.add(Dropout(params['dropout_prob'][1]))
    model.add(Dense(output_dim=params["n_out"], init="uniform"))
    model.add(Activation(params['final_activation']))
    logging.debug("Output CNN: %s" % str(model.output_shape))
    if params['final_activation'] == 'linear':
        model.add(Lambda(lambda x: K.l2_normalize(x, axis=1)))
    return model

# CRNN Arch for audio
Developer: sergiooramas, Project: tartarus, Lines: 22, Source: models.py


Example 3: yolo_body

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def yolo_body(inputs, num_anchors, num_classes):
    """Create YOLO_V2 model CNN body in Keras."""
    darknet = Model(inputs, darknet_body()(inputs))
    conv13 = darknet.get_layer('batchnormalization_13').output
    conv20 = compose(
        DarknetConv2D_BN_Leaky(1024, 3, 3),
        DarknetConv2D_BN_Leaky(1024, 3, 3))(darknet.output)

    # TODO: Allow Keras Lambda to use func arguments for output_shape?
    conv13_reshaped = Lambda(
        space_to_depth_x2,
        output_shape=space_to_depth_x2_output_shape,
        name='space_to_depth')(conv13)

    # Concat conv13 with conv20.
    x = merge([conv13_reshaped, conv20], mode='concat')
    x = DarknetConv2D_BN_Leaky(1024, 3, 3)(x)
    x = DarknetConv2D(num_anchors * (num_classes + 5), 1, 1)(x)
    return Model(inputs, x)
Developer: PiSimo, Project: PiCamNN, Lines: 21, Source: keras_yolo.py


Example 4: yolo_body

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def yolo_body(inputs, num_anchors, num_classes):
    """Create YOLO_V2 model CNN body in Keras."""
    darknet = Model(inputs, darknet_body()(inputs))
    conv20 = compose(
        DarknetConv2D_BN_Leaky(1024, (3, 3)),
        DarknetConv2D_BN_Leaky(1024, (3, 3)))(darknet.output)

    conv13 = darknet.layers[43].output
    conv21 = DarknetConv2D_BN_Leaky(64, (1, 1))(conv13)
    # TODO: Allow Keras Lambda to use func arguments for output_shape?
    conv21_reshaped = Lambda(
        space_to_depth_x2,
        output_shape=space_to_depth_x2_output_shape,
        name='space_to_depth')(conv21)

    x = concatenate([conv21_reshaped, conv20])
    x = DarknetConv2D_BN_Leaky(1024, (3, 3))(x)
    x = DarknetConv2D(num_anchors * (num_classes + 5), (1, 1))(x)
    return Model(inputs, x)
Developer: kaka-lin, Project: object-detection, Lines: 21, Source: keras_yolo.py


Example 5: GenerateMCSamples

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def GenerateMCSamples(inp, layers, K_mc=20):
    if K_mc == 1:
        return apply_layers(inp, layers)
    output_list = []
    for _ in xrange(K_mc):
        output_list += [apply_layers(inp, layers)]  # THIS IS BAD!!! we create new dense layers at every call!!!!

    def pack_out(output_list):
        #output = K.pack(output_list) # K_mc x nb_batch x nb_classes
        output = K.stack(output_list)  # K_mc x nb_batch x nb_classes
        return K.permute_dimensions(output, (1, 0, 2))  # nb_batch x K_mc x nb_classes

    def pack_shape(s):
        s = s[0]
        assert len(s) == 2
        return (s[0], K_mc, s[1])

    out = Lambda(pack_out, output_shape=pack_shape)(output_list)
    return out

# evaluation for classification tasks
Developer: YingzhenLi, Project: Dropout_BBalpha, Lines: 20, Source: BBalpha_dropout.py


Example 6: SiameseNetwork

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def SiameseNetwork(input_shape=(5880,)):
    base_network = create_base_network(input_shape)

    input_a = Input(shape=input_shape)
    input_b = Input(shape=input_shape)

    processed_a = base_network(input_a)
    processed_b = base_network(input_b)

    distance = Lambda(euclidean_distance,
                      output_shape=eucl_dist_output_shape)([processed_a, processed_b])

    model = Model([input_a, input_b], distance)

    rms = RMSprop()
    model.compile(loss=contrastive_loss, optimizer=rms, metrics=[accuracy])

    return model, base_network
Developer: ericzhao28, Project: DogEmbeddings, Lines: 20, Source: siamese.py


Example 7: Highway

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def Highway(x, num_layers=1, activation='relu', name_prefix=''):
    '''
    Layer wrapper function for Highway network
    # Arguments:
        x: tensor, shape = (B, input_size)
    # Optional Arguments:
        num_layers: int, default is 1, the number of Highway network layers
        activation: keras activation, default is 'relu'
        name_prefix: str, default is '', layer name prefix
    # Returns:
        out: tensor, shape = (B, input_size)
    '''
    input_size = K.int_shape(x)[1]
    for i in range(num_layers):
        gate_ratio_name = '{}Highway/Gate_ratio_{}'.format(name_prefix, i)
        fc_name = '{}Highway/FC_{}'.format(name_prefix, i)
        gate_name = '{}Highway/Gate_{}'.format(name_prefix, i)

        gate_ratio = Dense(input_size, activation='sigmoid', name=gate_ratio_name)(x)
        fc = Dense(input_size, activation=activation, name=fc_name)(x)
        x = Lambda(lambda args: args[0] * args[2] + args[1] * (1 - args[2]), name=gate_name)([fc, x, gate_ratio])
    return x
Developer: tyo-yo, Project: SeqGAN, Lines: 24, Source: models.py


Example 8: model_masking

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def model_masking(discrete_time, init_alpha, max_beta):
    model = Sequential()
    model.add(Masking(mask_value=mask_value,
                      input_shape=(n_timesteps, n_features)))
    model.add(TimeDistributed(Dense(2)))
    model.add(Lambda(wtte.output_lambda, arguments={"init_alpha": init_alpha,
                                                    "max_beta_value": max_beta}))

    if discrete_time:
        loss = wtte.loss(kind='discrete', reduce_loss=False).loss_function
    else:
        loss = wtte.loss(kind='continuous', reduce_loss=False).loss_function

    model.compile(loss=loss, optimizer=RMSprop(lr=lr), sample_weight_mode='temporal')
    return model
Developer: ragulpr, Project: wtte-rnn, Lines: 19, Source: test_keras.py


Example 9: train

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def train(model, image_data, y_true, log_dir='logs/'):
    '''retrain/fine-tune the model'''
    model.compile(optimizer='adam', loss={
        # use custom yolo_loss Lambda layer.
        'yolo_loss': lambda y_true, y_pred: y_pred})

    logging = TensorBoard(log_dir=log_dir)
    checkpoint = ModelCheckpoint(log_dir + "ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5",
        monitor='val_loss', save_weights_only=True, save_best_only=True)
    early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1, mode='auto')

    model.fit([image_data, *y_true],
              np.zeros(len(image_data)),
              validation_split=.1,
              batch_size=32,
              epochs=30,
              callbacks=[logging, checkpoint, early_stopping])
    model.save_weights(log_dir + 'trained_weights.h5')
    # Further training.
Developer: scutan90, Project: YOLO-3D-Box, Lines: 21, Source: train.py


Example 10: channel_shuffle_lambda

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def channel_shuffle_lambda(channels,
                           groups,
                           **kwargs):
    """
    Channel shuffle layer. This is a wrapper over the same operation. It is designed to save the number of groups.

    Parameters:
    ----------
    channels : int
        Number of channels.
    groups : int
        Number of groups.

    Returns
    -------
    Layer
        Channel shuffle layer.
    """
    assert (channels % groups == 0)

    return nn.Lambda(channel_shuffle, arguments={"groups": groups}, **kwargs)
Developer: osmr, Project: imgclsmob, Lines: 23, Source: common.py


Example 11: get_variational_encoder

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def get_variational_encoder(node_num, d,
                            n_units, nu1, nu2,
                            activation_fn):
    K = len(n_units) + 1
    # Input
    x = Input(shape=(node_num,))
    # Encoder layers
    y = [None] * (K + 3)
    y[0] = x
    for i in range(K - 1):
        y[i + 1] = Dense(n_units[i], activation=activation_fn,
                         W_regularizer=Reg.l1_l2(l1=nu1, l2=nu2))(y[i])
    y[K] = Dense(d, activation=activation_fn,
                 W_regularizer=Reg.l1_l2(l1=nu1, l2=nu2))(y[K - 1])
    y[K + 1] = Dense(d)(y[K - 1])
    # y[K + 1] = Dense(d, W_regularizer=Reg.l1_l2(l1=nu1, l2=nu2))(y[K - 1])
    y[K + 2] = Lambda(sampling, output_shape=(d,))([y[K], y[K + 1]])
    # Encoder model
    encoder = Model(input=x, outputs=[y[K], y[K + 1], y[K + 2]])
    return encoder
Developer: palash1992, Project: GEM-Benchmark, Lines: 22, Source: sdne_utils.py


Example 12: profile_contrib

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def profile_contrib(p):
    return kl.Lambda(lambda p:
                     K.mean(K.sum(K.stop_gradient(tf.nn.softmax(p, dim=-2)) * p, axis=-2), axis=-1)
                     )(p)
Developer: kipoi, Project: models, Lines: 6, Source: model.py


Example 13: channel_split

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def channel_split(self, x):
    def splitter(y):
        # keras Lambda saving bug!!!
        # x_left = layers.Lambda(lambda y: y[:, :, :, :int(int(y.shape[-1]) // self.groups)])(x)
        # x_right = layers.Lambda(lambda y: y[:, :, :, int(int(y.shape[-1]) // self.groups):])(x)
        # return x_left, x_right
        return tf.split(y, num_or_size_splits=self.groups, axis=-1)

    return layers.Lambda(lambda y: splitter(y))(x)
Developer: JACKYLUO1991, Project: Face-skin-hair-segmentaiton-and-skin-color-evaluation, Lines: 11, Source: lednet.py


Example 14: get_model_3

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def get_model_3(params):
    # metadata
    inputs2 = Input(shape=(params["n_metafeatures"],))
    x2 = Dropout(params["dropout_factor"])(inputs2)

    if params["n_dense"] > 0:
        dense2 = Dense(output_dim=params["n_dense"], init="uniform", activation='relu')
        x2 = dense2(x2)
        logging.debug("Output CNN: %s" % str(dense2.output_shape))
        x2 = Dropout(params["dropout_factor"])(x2)

    if params["n_dense_2"] > 0:
        dense3 = Dense(output_dim=params["n_dense_2"], init="uniform", activation='relu')
        x2 = dense3(x2)
        logging.debug("Output CNN: %s" % str(dense3.output_shape))
        x2 = Dropout(params["dropout_factor"])(x2)

    dense4 = Dense(output_dim=params["n_out"], init="uniform", activation=params['final_activation'])
    xout = dense4(x2)
    logging.debug("Output CNN: %s" % str(dense4.output_shape))

    if params['final_activation'] == 'linear':
        reg = Lambda(lambda x: K.l2_normalize(x, axis=1))
        xout = reg(xout)

    model = Model(input=inputs2, output=xout)
    return model

# Metadata 2 inputs, post-merge with dense layers
Developer: sergiooramas, Project: tartarus, Lines: 36, Source: models.py


Example 15: get_model_32

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def get_model_32(params):
    # metadata
    inputs = Input(shape=(params["n_metafeatures"],))
    reg = Lambda(lambda x: K.l2_normalize(x, axis=1))
    x1 = reg(inputs)

    inputs2 = Input(shape=(params["n_metafeatures2"],))
    reg2 = Lambda(lambda x: K.l2_normalize(x, axis=1))
    x2 = reg2(inputs2)

    # merge
    x = merge([x1, x2], mode='concat', concat_axis=1)
    x = Dropout(params["dropout_factor"])(x)

    if params['n_dense'] > 0:
        dense2 = Dense(output_dim=params["n_dense"], init="uniform", activation='relu')
        x = dense2(x)
        logging.debug("Output CNN: %s" % str(dense2.output_shape))

    dense4 = Dense(output_dim=params["n_out"], init="uniform", activation=params['final_activation'])
    xout = dense4(x)
    logging.debug("Output CNN: %s" % str(dense4.output_shape))

    if params['final_activation'] == 'linear':
        reg = Lambda(lambda x: K.l2_normalize(x, axis=1))
        xout = reg(xout)

    model = Model(input=[inputs, inputs2], output=xout)
    return model

# Metadata 3 inputs, pre-merge and l2
Developer: sergiooramas, Project: tartarus, Lines: 36, Source: models.py


Example 16: get_model_33

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def get_model_33(params):
    # metadata
    inputs = Input(shape=(params["n_metafeatures"],))
    reg = Lambda(lambda x: K.l2_normalize(x, axis=1))
    x1 = reg(inputs)

    inputs2 = Input(shape=(params["n_metafeatures2"],))
    reg2 = Lambda(lambda x: K.l2_normalize(x, axis=1))
    x2 = reg2(inputs2)

    inputs3 = Input(shape=(params["n_metafeatures3"],))
    reg3 = Lambda(lambda x: K.l2_normalize(x, axis=1))
    x3 = reg3(inputs3)

    # merge
    x = merge([x1, x2, x3], mode='concat', concat_axis=1)
    x = Dropout(params["dropout_factor"])(x)

    if params['n_dense'] > 0:
        dense2 = Dense(output_dim=params["n_dense"], init="uniform", activation='relu')
        x = dense2(x)
        logging.debug("Output CNN: %s" % str(dense2.output_shape))

    dense4 = Dense(output_dim=params["n_out"], init="uniform", activation=params['final_activation'])
    xout = dense4(x)
    logging.debug("Output CNN: %s" % str(dense4.output_shape))

    if params['final_activation'] == 'linear':
        reg = Lambda(lambda x: K.l2_normalize(x, axis=1))
        xout = reg(xout)

    model = Model(input=[inputs, inputs2, inputs3], output=xout)
    return model

# Metadata 4 inputs, pre-merge and l2
Developer: sergiooramas, Project: tartarus, Lines: 41, Source: models.py


Example 17: get_model_34

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def get_model_34(params):
    # metadata
    inputs = Input(shape=(params["n_metafeatures"],))
    reg = Lambda(lambda x: K.l2_normalize(x, axis=1))
    x1 = reg(inputs)

    inputs2 = Input(shape=(params["n_metafeatures2"],))
    reg2 = Lambda(lambda x: K.l2_normalize(x, axis=1))
    x2 = reg2(inputs2)

    inputs3 = Input(shape=(params["n_metafeatures3"],))
    reg3 = Lambda(lambda x: K.l2_normalize(x, axis=1))
    x3 = reg3(inputs3)

    inputs4 = Input(shape=(params["n_metafeatures4"],))
    reg4 = Lambda(lambda x: K.l2_normalize(x, axis=1))
    x4 = reg4(inputs4)

    # merge
    x = merge([x1, x2, x3, x4], mode='concat', concat_axis=1)
    x = Dropout(params["dropout_factor"])(x)

    if params['n_dense'] > 0:
        dense2 = Dense(output_dim=params["n_dense"], init="uniform", activation='relu')
        x = dense2(x)
        logging.debug("Output CNN: %s" % str(dense2.output_shape))

    dense4 = Dense(output_dim=params["n_out"], init="uniform", activation=params['final_activation'])
    xout = dense4(x)
    logging.debug("Output CNN: %s" % str(dense4.output_shape))

    if params['final_activation'] == 'linear':
        reg = Lambda(lambda x: K.l2_normalize(x, axis=1))
        xout = reg(xout)

    model = Model(input=[inputs, inputs2, inputs3, inputs4], output=xout)
    return model
Developer: sergiooramas, Project: tartarus, Lines: 42, Source: models.py


Example 18: get_model_6

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def get_model_6(params):
    # metadata
    inputs2 = Input(shape=(params["n_metafeatures"],))
    #x2 = Dropout(params["dropout_factor"])(inputs2)

    if params["n_dense"] > 0:
        dense21 = Dense(output_dim=params["n_dense"], init="uniform", activation='relu')
        x21 = dense21(inputs2)
        logging.debug("Output CNN: %s" % str(dense21.output_shape))

        dense22 = Dense(output_dim=params["n_dense"], init="uniform", activation='tanh')
        x22 = dense22(inputs2)
        logging.debug("Output CNN: %s" % str(dense22.output_shape))

        dense23 = Dense(output_dim=params["n_dense"], init="uniform", activation='sigmoid')
        x23 = dense23(inputs2)
        logging.debug("Output CNN: %s" % str(dense23.output_shape))

        # merge
        x = merge([x21, x22, x23], mode='concat', concat_axis=1)
        x2 = Dropout(params["dropout_factor"])(x)

    if params["n_dense_2"] > 0:
        dense3 = Dense(output_dim=params["n_dense_2"], init="uniform", activation='relu')
        x2 = dense3(x2)
        logging.debug("Output CNN: %s" % str(dense3.output_shape))
        x2 = Dropout(params["dropout_factor"])(x2)

    dense4 = Dense(output_dim=params["n_out"], init="uniform", activation=params['final_activation'])
    xout = dense4(x2)
    logging.debug("Output CNN: %s" % str(dense4.output_shape))

    if params['final_activation'] == 'linear':
        reg = Lambda(lambda x: K.l2_normalize(x, axis=1))
        xout = reg(xout)

    model = Model(input=inputs2, output=xout)
    return model
Developer: sergiooramas, Project: tartarus, Lines: 43, Source: models.py


Example 19: rpn_graph

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def rpn_graph(feature_map, anchors_per_location, anchor_stride):
    """Builds the computation graph of Region Proposal Network.

    feature_map: backbone features [batch, height, width, depth]
    anchors_per_location: number of anchors per pixel in the feature map
    anchor_stride: Controls the density of anchors. Typically 1 (anchors for
                   every pixel in the feature map), or 2 (every other pixel).

    Returns:
        rpn_class_logits: [batch, H * W * anchors_per_location, 2] Anchor classifier logits (before softmax)
        rpn_probs: [batch, H * W * anchors_per_location, 2] Anchor classifier probabilities.
        rpn_bbox: [batch, H * W * anchors_per_location, (dy, dx, log(dh), log(dw))] Deltas to be
                  applied to anchors.
    """
    # TODO: check if stride of 2 causes alignment issues if the feature map
    # is not even.
    # Shared convolutional base of the RPN
    shared = KL.Conv2D(512, (3, 3), padding='same', activation='relu',
                       strides=anchor_stride,
                       name='rpn_conv_shared')(feature_map)

    # Anchor Score. [batch, height, width, anchors per location * 2].
    x = KL.Conv2D(2 * anchors_per_location, (1, 1), padding='valid',
                  activation='linear', name='rpn_class_raw')(shared)

    # Reshape to [batch, anchors, 2]
    rpn_class_logits = KL.Lambda(
        lambda t: tf.reshape(t, [tf.shape(t)[0], -1, 2]))(x)

    # Softmax on last dimension of BG/FG.
    rpn_probs = KL.Activation(
        "softmax", name="rpn_class_xxx")(rpn_class_logits)

    # Bounding box refinement. [batch, H, W, anchors per location * depth]
    # where depth is [x, y, log(w), log(h)]
    x = KL.Conv2D(anchors_per_location * 4, (1, 1), padding="valid",
                  activation='linear', name='rpn_bbox_pred')(shared)

    # Reshape to [batch, anchors, 4]
    rpn_bbox = KL.Lambda(lambda t: tf.reshape(t, [tf.shape(t)[0], -1, 4]))(x)

    return [rpn_class_logits, rpn_probs, rpn_bbox]
Developer: dataiku, Project: dataiku-contrib, Lines: 42, Source: model.py


Example 20: create_model

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def create_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2,
            weights_path='model_data/yolo_weights.h5'):
    '''create the training model'''
    K.clear_session()  # get a new session
    image_input = Input(shape=(None, None, 3))
    h, w = input_shape
    num_anchors = len(anchors)

    # y_true = [Input(shape=(416//{0:32, 1:16, 2:8}[l], 416//{0:32, 1:16, 2:8}[l], 9//3, 80+5)) for l in range(3)]
    y_true = [Input(shape=(h//{0:32, 1:16, 2:8}[l], w//{0:32, 1:16, 2:8}[l], num_anchors//3, num_classes+5)) for l in range(3)]

    model_body = yolo_body(image_input, num_anchors//3, num_classes)
    print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))

    if load_pretrained:
        model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
        print('Load weights {}.'.format(weights_path))
        if freeze_body in [1, 2]:
            # Freeze darknet53 body or freeze all but 3 output layers.
            num = (185, len(model_body.layers)-3)[freeze_body-1]
            for i in range(num): model_body.layers[i].trainable = False
            print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))

    model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
        arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
        [*model_body.output, *y_true])
    model = Model([model_body.input, *y_true], model_loss)
    print('model_body.input: ', model_body.input)
    print('model.input: ', model.input)

    return model
Developer: bing0037, Project: keras-yolo3, Lines: 33, Source: train.py


Example 21: create_tiny_model

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def create_tiny_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2,
            weights_path='model_data/tiny_yolo_weights.h5'):
    '''create the training model, for Tiny YOLOv3'''
    K.clear_session()  # get a new session
    image_input = Input(shape=(None, None, 3))
    h, w = input_shape
    num_anchors = len(anchors)

    y_true = [Input(shape=(h//{0:32, 1:16}[l], w//{0:32, 1:16}[l],
        num_anchors//2, num_classes+5)) for l in range(2)]

    model_body = tiny_yolo_body(image_input, num_anchors//2, num_classes)
    print('Create Tiny YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))

    if load_pretrained:
        model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
        print('Load weights {}.'.format(weights_path))
        if freeze_body in [1, 2]:
            # Freeze the darknet body or freeze all but 2 output layers.
            num = (20, len(model_body.layers)-2)[freeze_body-1]
            for i in range(num): model_body.layers[i].trainable = False
            print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))

    model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
        arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.7})(
        [*model_body.output, *y_true])
    model = Model([model_body.input, *y_true], model_loss)

    return model
Developer: bing0037, Project: keras-yolo3, Lines: 31, Source: train.py


Example 22: space_to_depth_x2

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def space_to_depth_x2(x):
    """Thin wrapper for Tensorflow space_to_depth with block_size=2."""
    # Import currently required to make Lambda work.
    # See: https://github.com/fchollet/keras/issues/5088#issuecomment-273851273
    import tensorflow as tf
    return tf.space_to_depth(x, block_size=2)
Developer: PiSimo, Project: PiCamNN, Lines: 8, Source: keras_yolo.py


Example 23: space_to_depth_x2_output_shape

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def space_to_depth_x2_output_shape(input_shape):
    """Determine space_to_depth output shape for block_size=2.

    Note: For Lambda with TensorFlow backend, output shape may not be needed.
    """
    return (input_shape[0], input_shape[1] // 2, input_shape[2] // 2, 4 *
            input_shape[3]) if input_shape[1] else (input_shape[0], None, None,
                                                    4 * input_shape[3])
Developer: PiSimo, Project: PiCamNN, Lines: 10, Source: keras_yolo.py


Example 24: _buildEncoder

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def _buildEncoder(self, x, latent_rep_size, max_length, epsilon_std=0.01):
    h = Convolution1D(9, 9, activation='relu', name='conv_1')(x)
    h = Convolution1D(9, 9, activation='relu', name='conv_2')(h)
    h = Convolution1D(10, 11, activation='relu', name='conv_3')(h)
    h = Flatten(name='flatten_1')(h)
    h = Dense(435, activation='relu', name='dense_1')(h)

    def sampling(args):
        z_mean_, z_log_var_ = args
        batch_size = K.shape(z_mean_)[0]
        epsilon = K.random_normal(
            shape=(batch_size, latent_rep_size), mean=0., std=epsilon_std)
        return z_mean_ + K.exp(z_log_var_ / 2) * epsilon

    z_mean = Dense(latent_rep_size, name='z_mean', activation='linear')(h)
    z_log_var = Dense(latent_rep_size, name='z_log_var', activation='linear')(h)

    def vae_loss(x, x_decoded_mean):
        x = K.flatten(x)
        x_decoded_mean = K.flatten(x_decoded_mean)
        xent_loss = max_length * objectives.binary_crossentropy(x, x_decoded_mean)
        kl_loss = -0.5 * K.mean(
            1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
        return xent_loss + kl_loss

    return (vae_loss, Lambda(
        sampling, output_shape=(latent_rep_size,),
        name='lambda')([z_mean, z_log_var]))
Developer: deepchem, Project: deepchem, Lines: 30, Source: model.py


Example 25: build_model

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def build_model(self):
    input = Input(shape=self.state_size)
    shared = Conv2D(32, (8, 8), strides=(4, 4), activation='relu')(input)
    shared = Conv2D(64, (4, 4), strides=(2, 2), activation='relu')(shared)
    shared = Conv2D(64, (3, 3), strides=(1, 1), activation='relu')(shared)
    flatten = Flatten()(shared)

    # network separate state value and advantages
    advantage_fc = Dense(512, activation='relu')(flatten)
    advantage = Dense(self.action_size)(advantage_fc)
    advantage = Lambda(lambda a: a[:, :] - K.mean(a[:, :], keepdims=True),
                       output_shape=(self.action_size,))(advantage)

    value_fc = Dense(512, activation='relu')(flatten)
    value = Dense(1)(value_fc)
    value = Lambda(lambda s: K.expand_dims(s[:, 0], -1),
                   output_shape=(self.action_size,))(value)

    # network merged and make Q Value
    q_value = merge([value, advantage], mode='sum')
    model = Model(inputs=input, outputs=q_value)
    model.summary()

    return model

# after some time interval update the target model to be same with model
Developer: rlcode, Project: reinforcement-learning, Lines: 28, Source: breakout_dueling_ddqn.py


Example 26: _grouped_convolution_block

# Required module: from keras import layers [as alias]
# Or: from keras.layers import Lambda [as alias]
def _grouped_convolution_block(input, grouped_channels, cardinality, strides, weight_decay=5e-4):
    ''' Adds a grouped convolution block. It is an equivalent block from the paper

    Args:
        input: input tensor
        grouped_channels: grouped number of filters
        cardinality: cardinality factor describing the number of groups
        strides: performs strided convolution for downscaling if > 1
        weight_decay: weight decay term

    Returns: a keras tensor
    '''
    init = input
    channel_axis = 1 if K.image_data_format() == 'channels_first' else -1

    group_list = []

    if cardinality == 1:
        # with cardinality 1, it is a standard convolution
        x = Conv2D(grouped_channels, (3, 3), padding='same', use_bias=False, strides=strides,
                   kernel_initializer='he_normal', kernel_regularizer=l2(weight_decay))(init)
        x = BatchNormalization(axis=channel_axis)(x)
        x = Activation('relu')(x)
        return x

    for c in range(cardinality):
        # slice out this group's channels for either data format
        x = Lambda(lambda z: z[:, :, :, c * grouped_channels:(c + 1) * grouped_channels]
                   if K.image_data_format() == 'channels_last' else
                   z[:, c * grouped_channels:(c + 1) * grouped_channels, :, :])(input)

        x = Conv2D(grouped_channels, (3, 3), padding='same', use_bias=False, strides=strides,
                   kernel_initializer='he_normal', kernel_regularizer=l2(weight_decay))(x)

        group_list.append(x)

    group_merge = concatenate(group_list, axis=channel_axis)
    group_merge = BatchNormalization(axis=channel_axis)(group_merge)
    group_merge = Activation('relu')(group_merge)

    return group_merge
Developer: titu1994, Project: Keras-DualPathNetworks, Lines: 39, Source: dual_path_network.py

