
GraphCNN

GraphCNN(output_dim, num_filters, graph_conv_filters, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)

The GraphCNN layer assumes a fixed input graph structure, which is passed as a layer argument. As a result, the input order of graph nodes is fixed for the model and must match the node order in the inputs. The graph structure also cannot be changed once the model is compiled. This choice lets us use the Keras Sequential API, but it comes with some constraints (for instance, shuffling is no longer possible in or after each epoch). See the remarks further below about this specific choice.

Arguments

  • output_dim: Positive integer, dimensionality of the output feature space for each graph node (also referred to as the dimension of the graph node embedding).
  • num_filters: Positive integer, number of graph filters used for constructing the graph_conv_filters input.
  • graph_conv_filters: input as a 2D tensor with shape: (num_filters*num_graph_nodes, num_graph_nodes).
    num_filters is the number of distinct graph convolution filters applied to the graph; for instance, the filters could be successive powers of the graph Laplacian. The list of graph convolution matrices is stacked along the second-to-last axis (see the construction sketch below this list).
  • activation: Activation function to use (see activations). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizers).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizers).
  • activity_regularizer: Regularizer function applied to the output of the layer (its "activation") (see regularizers).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
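
For concreteness, here is a minimal sketch of how a stacked graph_conv_filters tensor can be built from powers of the symmetrically normalized graph Laplacian. This only illustrates the expected shape, assuming a dense NumPy adjacency matrix A; it is not necessarily how the library's own example preprocessing constructs its filters.

import numpy as np

# Build a (num_filters*num_graph_nodes, num_graph_nodes) filter matrix by
# stacking powers of the normalized Laplacian (power 0 is the identity).
def build_graph_conv_filters(A, num_filters=2):
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(A.sum(axis=1), 1e-12))
    laplacian = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    powers = [np.linalg.matrix_power(laplacian, k) for k in range(num_filters)]
    return np.concatenate(powers, axis=0)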

Input shapes

  • 2D tensor with shape: (num_graph_nodes, input_dim) representing graph node input feature matrix.

Output shape

  • 2D tensor with shape: (num_graph_nodes, output_dim) representing the convolved output graph node embedding (or signal) matrix.


Example 1: Graph Semi-Supervised Learning (or Node Classification)

# Sample code for applying the GraphCNN layer to perform node classification.
# See examples/gcnn_node_classification_example.py for the complete code.

from keras.models import Sequential
from keras.layers import Dropout, Activation
from keras.optimizers import Adam
from keras.regularizers import l2

from keras_dgl.layers import GraphCNN

model = Sequential()
model.add(GraphCNN(16, 2, graph_conv_filters, input_shape=(X.shape[1],), activation='elu', kernel_regularizer=l2(5e-4)))
model.add(Dropout(0.2))
model.add(GraphCNN(Y.shape[1], 2, graph_conv_filters, kernel_regularizer=l2(5e-4)))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.01), metrics=['acc'])
# The whole graph is a single batch; train_mask zeroes out the loss on unlabeled nodes.
model.fit(X, Y_train, sample_weight=train_mask, batch_size=A.shape[0], epochs=500, shuffle=False, verbose=0)


Example 2: Graph Edge Convolution for Node Classification

# Sample code for applying the GraphCNN layer to perform node classification while taking edge features into account.
# For edge convolution, all we need is a graph_conv_filters input that stacks the adjacency matrices corresponding to each edge feature. See the note on Example 2 below.
# See graphcnn_example2.py for the complete code.

from keras.models import Sequential
from keras.layers import Dropout, Activation

from keras_dgl.layers import GraphCNN

model = Sequential()
model.add(GraphCNN(16, 2, graph_conv_filters, activation='elu'))
model.add(Dropout(0.2))
model.add(GraphCNN(Y.shape[1], 2, graph_conv_filters))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
model.fit(X, Y_train, sample_weight=train_mask, batch_size=A.shape[0], epochs=500, shuffle=False)

Note on Example 2: the equation in the paper (see reference [3]) can be written as $Y = \sum_{s} A^{(s)} X W^{(s)}$, where $A^{(s)}$ is the adjacency matrix containing only the edges that carry edge feature $s$, and $W^{(s)}$ is the corresponding weight matrix. This is defined as graph edge convolution. All we have to do is stack the matrices $A^{(s)}$ and feed them to the GraphCNN layer as graph_conv_filters to perform graph edge convolution.
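
As a concrete illustration (A_feat0 and A_feat1 are placeholder matrices, not names from the example code), stacking one adjacency matrix per edge feature might look like:

import numpy as np

# One adjacency matrix per discrete edge feature:
# A_feat_s[i, j] is nonzero iff edge (i, j) carries feature s.
A_feat0 = np.array([[0, 1], [1, 0]], dtype=np.float32)  # edges with feature 0
A_feat1 = np.array([[0, 0], [1, 0]], dtype=np.float32)  # edges with feature 1

# Stack row-wise: shape (num_filters*num_graph_nodes, num_graph_nodes).
graph_conv_filters = np.concatenate([A_feat0, A_feat1], axis=0)
num_filters = 2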



MultiGraphCNN

MultiGraphCNN(output_dim, num_filters, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)

MultiGraphCNN assumes that the number of nodes is the same for every graph in the dataset. For graphs of arbitrary size, one can simply append the appropriate number of zero rows and columns to the adjacency matrix (and zero rows to the node feature matrix), based on the maximum graph size in the dataset, to achieve this uniformity (see the padding sketch below).
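
A minimal padding sketch (the helper name pad_graph is illustrative, not part of the library), assuming a dense adjacency matrix A and node feature matrix X:

import numpy as np

def pad_graph(A, X, max_nodes):
    # Zero-pad an n-node graph up to max_nodes nodes.
    n = A.shape[0]
    A_padded = np.zeros((max_nodes, max_nodes), dtype=A.dtype)
    A_padded[:n, :n] = A
    X_padded = np.zeros((max_nodes, X.shape[1]), dtype=X.dtype)
    X_padded[:n, :] = X
    return A_padded, X_padded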

Arguments

  • output_dim: Positive integer, dimensionality of the output feature space for each graph node (also referred to as the dimension of the graph node embedding).
  • num_filters: Positive integer, number of graph filters used for constructing the graph_conv_filters input.
  • activation: Activation function to use (see activations). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizers).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizers).
  • activity_regularizer: Regularizer function applied to the output of the layer (its "activation") (see regularizers).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shapes

  • graph node feature matrix input as a 3D tensor with shape: (batch_size, num_graph_nodes, input_dim), corresponding to the node input feature matrix of each graph.
  • graph_conv_filters input as a 3D tensor with shape: (batch_size, num_filters*num_graph_nodes, num_graph_nodes).
    num_filters is the number of distinct graph convolution filters applied to each graph; for instance, the filters could be successive powers of the graph Laplacian (a batched construction sketch follows this list).
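
For instance, reusing the build_graph_conv_filters sketch from above (an illustrative helper, not a library function), a batched filter tensor could be assembled as:

import numpy as np

# A_batch: per-graph adjacency matrices, all padded to the same node count.
graph_conv_filters = np.stack(
    [build_graph_conv_filters(A, num_filters=2) for A in A_batch], axis=0)
# shape: (batch_size, num_filters*num_graph_nodes, num_graph_nodes)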

Output shape

  • 3D tensor with shape: (batch_size, num_graph_nodes, output_dim) representing the convolved output graph node embedding matrix for each graph in the batch.


Example 3: Graph Classification

# See multi_gcnn_graph_classification_example.py for the complete code.

from keras.models import Model
from keras.layers import Input, Dense, Dropout, Lambda, Activation
from keras import backend as K

from keras_dgl.layers import MultiGraphCNN

X_shape = Input(shape=(X.shape[1], X.shape[2]))
graph_conv_filters_shape = Input(shape=(graph_conv_filters.shape[1], graph_conv_filters.shape[2]))

output = MultiGraphCNN(100, num_filters, activation='elu')([X_shape, graph_conv_filters_shape])
output = Dropout(0.2)(output)
output = MultiGraphCNN(100, num_filters, activation='elu')([output, graph_conv_filters_shape])
output = Dropout(0.2)(output)
output = Lambda(lambda x: K.mean(x, axis=1))(output)  # node-invariant mean pooling over nodes, so the output does not depend on the node order within a graph.
output = Dense(Y.shape[1])(output)
output = Activation('softmax')(output)

nb_epochs = 200
batch_size = 169

model = Model(inputs=[X_shape, graph_conv_filters_shape], outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
model.fit([X, graph_conv_filters], Y, batch_size=batch_size, validation_split=0.1, epochs=nb_epochs, shuffle=True, verbose=1)

Remarks

Why pass graph_conv_filters as a layer argument and not as an input in GraphCNN?
The problem lies with the Keras multi-input functional API: it requires all input arrays (x) to have the same number of samples, i.e., the same size along the first axis. In special cases the first dimensions of the inputs do coincide; for example, check out the Keras implementation by Kipf et al. [1]. But in cases such as graph recurrent neural networks this does not hold.

Why pass graph_conv_filters as a 2D tensor of this specific format?
Passing the graph_conv_filters input as a 2D tensor with shape: (num_filters*num_graph_nodes, num_graph_nodes) cuts down the number of tensor computation operations: all filters can be applied with a single matrix multiplication instead of one multiplication per filter (see the sketch below).
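
A small illustration of the saving, using NumPy arrays for clarity (the layer itself operates on Keras tensors; shapes here are arbitrary):

import numpy as np

N, input_dim, num_filters = 4, 3, 2
X = np.random.rand(N, input_dim)
graph_conv_filters = np.random.rand(num_filters * N, N)

# One matmul with the stacked (num_filters*N, N) filters replaces
# num_filters separate matmuls with (N, N) filters.
out_stacked = graph_conv_filters.dot(X)  # shape: (num_filters*N, input_dim)
out_per_filter = [f.dot(X) for f in np.split(graph_conv_filters, num_filters, axis=0)]
assert np.allclose(out_stacked, np.concatenate(out_per_filter, axis=0))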

References:
[1] Kipf, Thomas N., and Max Welling. "Semi-supervised classification with graph convolutional networks." arXiv preprint arXiv:1609.02907 (2016).
[2] Defferrard, Michaël, Xavier Bresson, and Pierre Vandergheynst. "Convolutional neural networks on graphs with fast localized spectral filtering." In Advances in Neural Information Processing Systems, pp. 3844-3852. 2016.
[3] Simonovsky, Martin, and Nikos Komodakis. "Dynamic edge-conditioned filters in convolutional neural networks on graphs." In Proc. CVPR. 2017.