Server Class

class fedgraph.server_class.Server(feature_dim: int, args_hidden: int, class_num: int, device: device, trainers: list, args: Any)[source]

This is a server class for federated learning, responsible for aggregating model parameters from the different trainers, updating the central model, and then broadcasting the updated model parameters back to the trainers.

Parameters:
  • feature_dim (int) – The dimensionality of the feature vectors in the dataset.

  • args_hidden (int) – The number of hidden units.

  • class_num (int) – The number of classes for classification in the dataset.

  • device (torch.device) – The device on which the server model is initialized.

  • trainers (list[Trainer_General]) – A list of Trainer_General instances representing the trainers.

  • args (Any) – Additional arguments required for initializing the server model and other configurations.

model

The central GCN model that is trained in a federated manner.

Type:

[AggreGCN, GCN_arxiv, SAGE_products, GCN]

trainers

The list of trainer instances.

Type:

list[Trainer_General]

num_of_trainers

The number of trainers.

Type:

int
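
As a usage sketch (not part of the class docstring), the server can be constructed from an existing list of trainers roughly as follows; feature_dim, class_num, trainers, and args are placeholders that would normally come from the data loader and the run configuration:

    # Minimal construction sketch; the lowercase placeholder names below are
    # assumptions supplied by the caller, not values defined by the library.
    import torch
    from fedgraph.server_class import Server

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    server = Server(
        feature_dim=feature_dim,   # node feature dimensionality of the dataset
        args_hidden=64,            # number of hidden units
        class_num=class_num,       # number of node classes
        device=device,
        trainers=trainers,         # list of Trainer_General instances
        args=args,                 # remaining run configuration
    )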

aggregate_encrypted_feature_sums(encrypted_sums)[source]
aggregate_encrypted_params(encrypted_params_list)[source]
broadcast_params(current_global_epoch: int) → None[source]

Broadcasts the current parameters of the central model to all trainers.

Parameters:

current_global_epoch (int) – The current global epoch number during the federated learning process.

get_encrypted_params()[source]
prepare_params_for_encryption(params)[source]
train(current_global_epoch: int, sampling_type: str = 'random', sample_ratio: float = 1) → None[source]

Performs one global training round: aggregates parameters from the sampled trainers (selected by index), updates the central model, and then broadcasts the updated parameters back to all trainers.

Parameters:
  • current_global_epoch (int) – The current global epoch number during the federated learning process.

  • sampling_type (str) – The sampling strategy used to select trainers for the round (default 'random').

  • sample_ratio (float) – The fraction of trainers sampled in the round (default 1, i.e., all trainers).
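
For example, a caller-side sketch (assuming an already constructed server) that drives ten global rounds, each randomly sampling half of the trainers:

    # Run ten global rounds, randomly sampling half of the trainers per round.
    for epoch in range(10):
        server.train(current_global_epoch=epoch,
                     sampling_type="random",
                     sample_ratio=0.5)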

zero_params() → None[source]

Zeros out the parameters of the central model.

class fedgraph.server_class.Server_GC(model: Module, device: device, use_cluster: bool)[source]

This is a server class for federated graph classification, responsible for aggregating model parameters from the different trainers, updating the central model, and then broadcasting the updated model parameters back to the trainers.

Parameters:
  • model (torch.nn.Module) – The base model that the federated learning is performed on.

  • device (torch.device) – The device to run the model on.

  • use_cluster (bool) – Whether to use clustered aggregation of the trainers during training.

model

The base model for the server.

Type:

torch.nn.Module

W

Dictionary containing the model parameters.

Type:

dict

model_cache

List of tuples, where each tuple contains the model parameters and the accuracies of the trainers.

Type:

list

aggregate_clusterwise(trainer_clusters: list) → None[source]

Perform weighted aggregation among the trainers in each cluster. The weights are the number of training samples.

Parameters:

trainer_clusters (list) – list of cluster-specified trainer groups, where each group contains the trainer objects in a cluster

aggregate_weights(selected_trainers: list) → None[source]

Perform weighted aggregation among selected trainers. The weights are the number of training samples.

Parameters:

selected_trainers (list) – list of trainer objects
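
The weighting rule described above amounts to averaging each parameter tensor with weights proportional to the trainers' training-set sizes; a standalone illustration with toy tensors (not the library's internal code):

    import torch

    def weighted_average(param_dicts, train_sizes):
        # Weight each trainer's parameters by its share of the training samples.
        total = float(sum(train_sizes))
        return {
            k: sum((n / total) * d[k].float() for d, n in zip(param_dicts, train_sizes))
            for k in param_dicts[0]
        }

    d1 = {"w": torch.tensor([1.0, 1.0])}   # trainer with 10 training samples
    d2 = {"w": torch.tensor([3.0, 3.0])}   # trainer with 30 training samples
    print(weighted_average([d1, d2], train_sizes=[10, 30]))  # {'w': tensor([2.5, 2.5])}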

cache_model(idcs: list, params: dict, accuracies: list) → None[source]

Cache the model parameters and accuracies of the trainers.

Parameters:
  • idcs (list) – list of trainer indices

  • params (dict) – dictionary containing the model parameters of the trainers

  • accuracies (list) – list of accuracies of the trainers

compute_max_update_norm(cluster: list) → float[source]

Compute the maximum update norm (i.e., dW) among the trainers in the cluster. This function is used to determine whether the cluster is ready to be split.

Parameters:

cluster (list) – list of trainer objects

compute_mean_update_norm(cluster: list) → float[source]

Compute the mean update norm (i.e., dW) among the trainers in the cluster. This function is used to determine whether the cluster is ready to be split.

Parameters:

cluster (list) – list of trainer objects
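
Both norms feed a GCFL-style split test: a cluster is typically considered ready to split when the mean update norm is small (the cluster has converged on average) while the maximum update norm is still large (individual trainers disagree). A standalone sketch with toy updates, placeholder thresholds, and the mean norm taken as the norm of the averaged update:

    import torch

    def flat(dW):
        # Flatten a trainer's update dict into a single vector.
        return torch.cat([v.flatten() for v in dW.values()])

    cluster_updates = [
        {"w": torch.tensor([0.5, -0.5])},   # toy dW of trainer 0
        {"w": torch.tensor([-0.5, 0.5])},   # toy dW of trainer 1
    ]
    max_norm = max(torch.norm(flat(dW)) for dW in cluster_updates)
    mean_norm = torch.norm(torch.stack([flat(dW) for dW in cluster_updates]).mean(dim=0))

    eps_1, eps_2 = 0.4, 0.6                 # placeholder thresholds
    ready_to_split = mean_norm.item() < eps_1 and max_norm.item() > eps_2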

compute_pairwise_distances(seqs: list, standardize: bool = False) → ndarray[source]

This function computes the pairwise distances between the gradient norm sequences of the trainers.

Parameters:
  • seqs (list) – list of 1D np.ndarray, where each 1D np.ndarray contains the gradient norm sequence of a trainer

  • standardize (bool) – whether to standardize the distance matrix

Returns:

distances – 2D np.ndarray of shape len(seqs) * len(seqs), which contains the pairwise distances

Return type:

np.ndarray
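
A standalone sketch of such a distance matrix, using Euclidean distance between the sequences purely for illustration (the library's choice of sequence distance is not asserted here), with optional standardization:

    import numpy as np

    def pairwise_sequence_distances(seqs, standardize=False):
        X = np.stack(seqs)                        # (n_trainers, seq_len)
        distances = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        if standardize:
            distances = (distances - distances.mean()) / (distances.std() + 1e-12)
        return distances

    seqs = [np.array([1.0, 0.8, 0.5]),            # per-epoch gradient norms, trainer 0
            np.array([1.1, 0.9, 0.6]),            # trainer 1
            np.array([2.0, 1.9, 1.7])]            # trainer 2
    print(pairwise_sequence_distances(seqs).round(3))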

compute_pairwise_similarities(trainers: list) → ndarray[source]

This function computes the pairwise cosine similarities between the gradients of the trainers.

Parameters:

trainers (list) – list of trainer objects

Returns:

2D np.ndarray of shape len(trainers) * len(trainers), which contains the pairwise cosine similarities

Return type:

np.ndarray
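
Concretely, this is the cosine-similarity matrix of the trainers' flattened updates; a toy standalone version (the real method reads the updates from the trainer objects rather than taking them as arguments):

    import numpy as np

    def pairwise_cosine(updates):
        X = np.stack([u / (np.linalg.norm(u) + 1e-12) for u in updates])
        return X @ X.T                 # (n_trainers, n_trainers), entries in [-1, 1]

    updates = [np.array([1.0, 0.0]),   # trainer 0
               np.array([0.9, 0.1]),   # trainer 1, similar direction
               np.array([-1.0, 0.0])]  # trainer 2, opposite direction
    print(pairwise_cosine(updates).round(3))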

min_cut(similarity: ndarray, idc: list) → tuple[source]

This function computes the minimum cut of the graph defined by the pairwise cosine similarities.

Parameters:
  • similarity (np.ndarray) – 2D np.ndarray of shape len(trainers) * len(trainers), which contains the pairwise cosine similarities

  • idc (list) – list of trainer indices

Returns:

(c1, c2) – tuple of two lists, where each list contains the indices of the trainers in a cluster

Return type:

tuple
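
One standard way to realize this split is a Stoer–Wagner minimum cut over a graph whose edge weights are the (shifted, non-negative) similarities; the sketch below uses networkx for illustration and is not asserted to be the library's exact implementation:

    import numpy as np
    import networkx as nx

    def split_by_min_cut(similarity, idc):
        # Shift similarities so that all edge weights are non-negative.
        offset = max(0.0, -float(similarity.min())) + 1e-6
        g = nx.Graph()
        n = len(idc)
        for i in range(n):
            for j in range(i + 1, n):
                g.add_edge(i, j, weight=float(similarity[i, j]) + offset)
        _, (part1, part2) = nx.stoer_wagner(g)
        return [idc[i] for i in part1], [idc[i] for i in part2]

    sim = np.array([[ 1.0,  0.9, -0.8],
                    [ 0.9,  1.0, -0.7],
                    [-0.8, -0.7,  1.0]])
    print(split_by_min_cut(sim, idc=[0, 1, 2]))   # splits trainer 2 from {0, 1}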

random_sample_trainers(all_trainers: list, frac: float) → list[source]

Randomly sample a fraction of trainers.

Parameters:
  • all_trainers (list) – list of trainer objects

  • frac (float) – fraction of trainers to be sampled

Returns:

(sampled_trainers) – list of trainer objects

Return type:

list

class fedgraph.server_class.Server_LP(number_of_users: int, number_of_items: int, meta_data: tuple, trainers: list, args_cuda: bool = False)[source]

This is a server class for federated graph link prediction, responsible for aggregating model parameters from the different trainers, updating the central model, and then broadcasting the updated model parameters back to the trainers.

Parameters:
  • number_of_users (int) – The number of users in the dataset.

  • number_of_items (int) – The number of items in the dataset.

  • meta_data (tuple) – Tuple containing the meta data of the dataset.

  • trainers (list) – A list of trainer instances participating in federated link prediction.

  • args_cuda (bool) – Whether to run the model on GPU.

fedavg(gnn_only: bool = False) → dict[source]

This function performs federated averaging on the model parameters of the trainers.

Parameters:

gnn_only (bool, optional) – Whether to average only the GNN parameters

Returns:

model_avg_parameter – The averaged model parameters

Return type:

dict
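
The averaging itself is federated averaging over the trainers' state dicts; a toy standalone version with uniform weights (whether the library weights trainers by sample count is not asserted here):

    import torch

    def fedavg_state_dicts(state_dicts):
        # Uniform average of each parameter tensor across trainers.
        return {
            k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
            for k in state_dicts[0]
        }

    sd_a = {"embed.weight": torch.ones(2, 3)}
    sd_b = {"embed.weight": torch.zeros(2, 3)}
    print(fedavg_state_dicts([sd_a, sd_b])["embed.weight"])   # all entries 0.5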

get_model_parameter(gnn_only: bool = False) → dict[source]

Get the model parameters.

Parameters:

gnn_only (bool) – Whether to get only the GNN parameters

Returns:

The model parameters

Return type:

dict

set_model_parameter(model_state_dict: dict, gnn_only: bool = False) → None[source]

Set the model parameters.

Parameters:
  • model_state_dict (dict) – The model parameters

  • gnn_only (bool, optional) – Whether to set only the GNN parameters
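
A minimal aggregation round for the link-prediction server, as a sketch (the dataset-dependent arguments and trainer construction are placeholders supplied by the caller):

    from fedgraph.server_class import Server_LP

    # number_of_users, number_of_items, meta_data, and trainers are placeholders
    # provided by the data loader / launcher; they are not defined here.
    server = Server_LP(
        number_of_users=number_of_users,
        number_of_items=number_of_items,
        meta_data=meta_data,
        trainers=trainers,
        args_cuda=False,
    )
    avg_params = server.fedavg(gnn_only=False)   # average the trainers' models
    server.set_model_parameter(avg_params)       # load the average into the central model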