Server Class
- class fedgraph.server_class.Server(feature_dim: int, args_hidden: int, class_num: int, device: device, trainers: list, args: Any)[source]
This is a server class for federated learning, responsible for aggregating model parameters from different trainers, updating the central model, and broadcasting the updated model parameters back to the trainers.
- Parameters:
feature_dim (int) – The dimensionality of the feature vectors in the dataset.
args_hidden (int) – The number of hidden units.
class_num (int) – The number of classes for classification in the dataset.
device (torch.device) – The device initialized for the server model.
trainers (list[Trainer_General]) – A list of Trainer_General instances representing the trainers.
args (Any) – Additional arguments required for initializing the server model and other configurations.
- model
The central GCN model that is trained in a federated manner.
- Type:
torch.nn.Module
- trainers
The list of trainer instances participating in training.
- Type:
list[Trainer_General]
- broadcast_params(current_global_epoch: int) → None [source]
Broadcasts the current parameters of the central model to all trainers.
- Parameters:
current_global_epoch (int) – The current global epoch number during the federated learning process.
- train(current_global_epoch: int, sampling_type: str = 'random', sample_ratio: float = 1) → None [source]
Performs one training round: aggregates parameters from the sampled trainers (by index), updates the central model, and broadcasts the updated parameters back to all trainers.
- Parameters:
current_global_epoch (int) – The current global epoch number during the federated learning process.
sampling_type (str) – The strategy used to sample trainers (default 'random').
sample_ratio (float) – The fraction of trainers sampled in each round (default 1, i.e., all trainers).
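The round described above — sample, aggregate, broadcast — can be sketched with plain Python dicts standing in for model tensors. The `get_params`/`set_params` trainer methods are hypothetical names for illustration, not the fedgraph API:

```python
import random

def train_round(server_params, trainers, sample_ratio=1.0):
    """One federated round: sample trainers, average their parameters
    (uniform average here), and broadcast the result back to everyone."""
    k = max(1, int(len(trainers) * sample_ratio))
    sampled = random.sample(trainers, k)  # sampling_type='random'
    # Aggregate: average each named parameter across the sampled trainers.
    agg = {name: sum(t.get_params()[name] for t in sampled) / k
           for name in server_params}
    # Broadcast: every trainer receives the updated central parameters.
    for t in trainers:
        t.set_params(agg)
    return agg
```

With `sample_ratio < 1` only a subset of trainers contributes to the average, but all trainers still receive the broadcast.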
- class fedgraph.server_class.Server_GC(model: Module, device: device, use_cluster: bool)[source]
This is a server class for federated graph classification, responsible for aggregating model parameters from different trainers, updating the central model, and broadcasting the updated model parameters back to the trainers.
- Parameters:
model (torch.nn.Module) – The base model that the federated learning is performed on.
device (torch.device) – The device to run the model on.
use_cluster (bool) – Whether to use clustered federated learning, in which trainers are grouped into clusters for aggregation.
- model
The base model for the server.
- Type:
torch.nn.Module
- model_cache
List of tuples, where each tuple contains the model parameters and the accuracies of the trainers.
- Type:
list
- aggregate_clusterwise(trainer_clusters: list) → None [source]
Perform weighted aggregation among the trainers in each cluster. The weights are the number of training samples.
- Parameters:
trainer_clusters (list) – list of cluster-specified trainer groups, where each group contains the trainer objects in a cluster
- aggregate_weights(selected_trainers: list) → None [source]
Perform weighted aggregation among selected trainers. The weights are the number of training samples.
- Parameters:
selected_trainers (list) – list of trainer objects
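A minimal sketch of the sample-weighted aggregation, assuming each trainer contributes a parameter dict and its training-sample count (plain floats stand in for model tensors; the names are illustrative, not the fedgraph API):

```python
def aggregate_weights(param_dicts, sample_counts):
    """Weighted average of per-trainer parameter dicts, where each
    trainer's weight is its number of training samples."""
    total = sum(sample_counts)
    return {name: sum(p[name] * n for p, n in zip(param_dicts, sample_counts)) / total
            for name in param_dicts[0]}
```

A trainer with twice the training data thus pulls the aggregate twice as hard toward its local parameters.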
- cache_model(idcs: list, params: dict, accuracies: list) → None [source]
Cache the model parameters and accuracies of the trainers.
- Parameters:
idcs (list) – list of trainer indices
params (dict) – dictionary of model parameters
accuracies (list) – list of trainer accuracies
- compute_max_update_norm(cluster: list) → float [source]
Compute the maximum update norm (i.e., dW) among the trainers in the cluster. This function is used to determine whether the cluster is ready to be split.
- Parameters:
cluster (list) – list of trainer objects
- compute_mean_update_norm(cluster: list) → float [source]
Compute the mean update norm (i.e., dW) among the trainers in the cluster. This function is used to determine whether the cluster is ready to be split.
- Parameters:
cluster (list) – list of trainer objects
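Both norms can be sketched over flattened update dicts. In clustered federated learning the split criterion typically compares the maximum per-trainer update norm against the norm of the *mean* update (not the mean of the norms), which is the interpretation assumed here:

```python
import math

def max_update_norm(cluster_dWs):
    # Largest L2 norm of any single trainer's flattened update dW.
    return max(math.sqrt(sum(v * v for v in dW.values())) for dW in cluster_dWs)

def mean_update_norm(cluster_dWs):
    # L2 norm of the averaged update: norm of the mean, not mean of norms.
    names = cluster_dWs[0].keys()
    mean = {n: sum(dW[n] for dW in cluster_dWs) / len(cluster_dWs) for n in names}
    return math.sqrt(sum(v * v for v in mean.values()))
```

When trainers pull in opposite directions, the mean update norm collapses toward zero while the max stays large — the signature of a cluster that should be split.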
- compute_pairwise_distances(seqs: list, standardize: bool = False) → ndarray [source]
Compute the pairwise distances between the gradient norm sequences of the trainers.
- Parameters:
seqs (list) – list of gradient norm sequences, one per trainer
standardize (bool) – whether to standardize the sequences before computing distances (default False)
- Returns:
distances – 2D np.ndarray of shape len(seqs) * len(seqs), which contains the pairwise distances
- Return type:
np.ndarray
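A sketch of the distance computation, assuming equal-length sequences and Euclidean distance (the actual metric used by fedgraph may differ):

```python
import numpy as np

def pairwise_distances(seqs, standardize=False):
    """Pairwise Euclidean distances between gradient-norm sequences.
    Returns a symmetric len(seqs) x len(seqs) matrix."""
    X = np.asarray(seqs, dtype=float)
    if standardize:
        # z-score each sequence so only its shape, not its scale, matters
        X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-12)
    diff = X[:, None, :] - X[None, :, :]   # (n, n, seq_len) pairwise differences
    return np.linalg.norm(diff, axis=-1)   # (n, n) distance matrix
```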
- compute_pairwise_similarities(trainers: list) → ndarray [source]
Compute the pairwise cosine similarities between the gradients of the trainers.
- Parameters:
trainers (list) – list of trainer objects
- Returns:
2D np.ndarray of shape len(trainers) * len(trainers), which contains the pairwise cosine similarities
- Return type:
np.ndarray
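The similarity matrix can be sketched over flattened per-trainer gradient vectors (the flattening of real model gradients into 1-D arrays is assumed here):

```python
import numpy as np

def pairwise_cosine_similarities(grads):
    """Cosine similarity between flattened trainer gradients.
    grads: list of 1-D arrays, one per trainer."""
    G = np.stack([np.asarray(g, dtype=float) for g in grads])  # (n, d)
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    Gn = G / np.maximum(norms, 1e-12)  # guard against all-zero gradients
    return Gn @ Gn.T                   # (n, n) cosine similarity matrix
```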
- min_cut(similarity: ndarray, idc: list) → tuple [source]
Compute the minimum cut of the graph defined by the pairwise cosine similarities.
- Parameters:
similarity (np.ndarray) – 2D np.ndarray of shape len(trainers) * len(trainers), which contains the pairwise cosine similarities
idc (list) – list of trainer indices
- Returns:
(c1, c2) – tuple of two lists, where each list contains the indices of the trainers in a cluster
- Return type:
tuple
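A brute-force sketch of the objective: enumerate bipartitions of the indices and pick the one minimizing the total similarity crossing the cut. This is exponential in cluster size and meant only to illustrate what the cut computes; a graph-library min-cut would be used in practice:

```python
def min_cut(similarity, idc):
    """Return (c1, c2), the bipartition of idc whose crossing
    similarity is minimal. similarity is indexable as similarity[i][j]."""
    n = len(idc)
    best, best_cut = None, float("inf")
    # Enumerate bipartitions; index 0 is fixed in c1 to skip mirror splits.
    for mask in range(1, 2 ** (n - 1)):
        c2 = [i + 1 for i in range(n - 1) if (mask >> i) & 1]
        in_c2 = set(c2)
        c1 = [i for i in range(n) if i not in in_c2]
        # Cut weight: total similarity crossing the partition.
        cut = sum(similarity[i][j] for i in c1 for j in c2)
        if cut < best_cut:
            best_cut = cut
            best = ([idc[i] for i in c1], [idc[i] for i in c2])
    return best
```

Minimizing crossing similarity keeps mutually similar trainers (aligned gradients) on the same side of the split.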
- class fedgraph.server_class.Server_LP(number_of_users: int, number_of_items: int, meta_data: tuple, trainers: list, args_cuda: bool = False)[source]
This is a server class for federated graph link prediction, responsible for aggregating model parameters from different trainers, updating the central model, and broadcasting the updated model parameters back to the trainers.
- Parameters:
number_of_users (int) – The number of users in the dataset.
number_of_items (int) – The number of items in the dataset.
meta_data (tuple) – The metadata of the dataset.
trainers (list) – A list of trainer instances.
args_cuda (bool) – Whether to run the model on CUDA (default False).
- fedavg(gnn_only: bool = False) → dict [source]
Perform federated averaging on the model parameters of the clients.
- Parameters:
gnn_only (bool) – whether to average only the GNN model parameters (default False)
- Return type:
dict
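A sketch of the averaging, assuming each client exposes a parameter dict and a sample count. The `gnn.` key prefix used to implement `gnn_only` is purely illustrative: in federated link prediction, user/item embeddings are often kept local while only the shared GNN weights are averaged, which is the behavior sketched here:

```python
def fedavg(client_states, sample_counts, gnn_only=False):
    """Sample-weighted federated averaging of client parameter dicts.
    With gnn_only=True, only keys prefixed 'gnn.' are averaged
    (the prefix convention is an assumption for illustration)."""
    total = sum(sample_counts)
    keys = list(client_states[0].keys())
    if gnn_only:
        keys = [k for k in keys if k.startswith("gnn.")]
    return {k: sum(s[k] * n for s, n in zip(client_states, sample_counts)) / total
            for k in keys}
```

Parameters omitted from the returned dict (here, the embeddings) simply stay at each client's local values.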