p2pfl.learning.aggregators package
Aggregation algorithms for P2PFL.
- class p2pfl.learning.aggregators.FedAdagrad(eta=0.1, beta_1=0.9, tau=1e-09, disable_partial_aggregation=False)[source]
Bases: FedOptBase
FedAdagrad - Adaptive Federated Optimization using Adagrad [Reddi et al., 2020].
FedAdagrad adapts the Adagrad optimizer to the federated setting, maintaining adaptive learning rates on the server side based on accumulated squared gradients. A minimal sketch of the server-side update rule follows the parameter list.
Paper: https://arxiv.org/abs/2003.00295
- Parameters:
  - eta (float)
  - beta_1 (float)
  - tau (float)
  - disable_partial_aggregation (bool)
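The following is a minimal NumPy sketch of the FedAdagrad server-side update rule from the paper, not p2pfl's implementation; the function name and state variables (x, delta, m, v) are invented for illustration, and delta is assumed to be the averaged client update (pseudo-gradient).

```python
import numpy as np

def fedadagrad_server_step(x, delta, m, v, eta=0.1, beta_1=0.9, tau=1e-9):
    """One FedAdagrad server step (Reddi et al., 2020) for a single parameter tensor."""
    m = beta_1 * m + (1.0 - beta_1) * delta   # server momentum on the pseudo-gradient
    v = v + delta ** 2                        # Adagrad: accumulate squared updates
    x = x + eta * m / (np.sqrt(v) + tau)      # per-coordinate adaptive step
    return x, m, v
```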
- class p2pfl.learning.aggregators.FedAdam(eta=0.1, beta_1=0.9, beta_2=0.99, tau=1e-09, disable_partial_aggregation=False)[source]
Bases: FedOptBase
FedAdam - Adaptive Federated Optimization using Adam [Reddi et al., 2020].
FedAdam adapts the Adam optimizer to the federated setting, maintaining both momentum and adaptive learning rates on the server side. A minimal sketch of the server-side update rule follows the parameter list.
Paper: https://arxiv.org/abs/2003.00295
- Parameters:
  - eta (float)
  - beta_1 (float)
  - beta_2 (float)
  - tau (float)
  - disable_partial_aggregation (bool)
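As above, a minimal NumPy sketch of the FedAdam server-side update rule from the paper (an illustration, not p2pfl's implementation); names are invented for the example and delta is assumed to be the averaged client update.

```python
import numpy as np

def fedadam_server_step(x, delta, m, v, eta=0.1, beta_1=0.9, beta_2=0.99, tau=1e-9):
    """One FedAdam server step (Reddi et al., 2020) for a single parameter tensor."""
    m = beta_1 * m + (1.0 - beta_1) * delta          # first moment (momentum)
    v = beta_2 * v + (1.0 - beta_2) * delta ** 2     # second moment (Adam-style EMA)
    x = x + eta * m / (np.sqrt(v) + tau)             # adaptive server-side step
    return x, m, v
```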
- class p2pfl.learning.aggregators.FedOptBase(eta=0.1, beta_1=0.9, tau=1e-09, disable_partial_aggregation=False)[source]
Bases: FedAvg
Base class for the Federated Optimization (FedOpt) family [Reddi et al., 2020].
This class extends FedAvg to provide common functionality for adaptive federated optimization algorithms such as FedAdagrad, FedAdam, and FedYogi. A hedged usage sketch follows the aggregate() entry below.
Paper: https://arxiv.org/abs/2003.00295
- Parameters:
  - eta (float)
  - beta_1 (float)
  - tau (float)
  - disable_partial_aggregation (bool)
- SUPPORTS_PARTIAL_AGGREGATION: bool = False
- aggregate(models)[source]
Aggregate models using the FedOpt algorithm.
- Parameters:
  - models (list[P2PFLModel]) – List of P2PFLModel objects to aggregate.
- Return type:
  P2PFLModel
- Returns:
  A P2PFLModel with the optimized parameters.
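A hedged usage sketch for the FedOpt aggregators: the constructor defaults and aggregate() come from this page, while client_models is an assumption standing in for the list[P2PFLModel] collected from peers during a round.

```python
from p2pfl.learning.aggregators import FedAdam

# Server-side aggregator with the defaults documented above.
aggregator = FedAdam(eta=0.1, beta_1=0.9, beta_2=0.99, tau=1e-09)

# `client_models` is assumed: a list[P2PFLModel] gathered from peers this round.
global_model = aggregator.aggregate(client_models)  # P2PFLModel with the optimized parameters
```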
- class p2pfl.learning.aggregators.FedYogi(eta=0.01, beta_1=0.9, beta_2=0.99, tau=0.001, disable_partial_aggregation=False)[source]
Bases: FedOptBase
FedYogi - Adaptive Federated Optimization using Yogi [Reddi et al., 2020].
FedYogi adapts the Yogi optimizer to the federated setting, maintaining adaptive learning rates on the server side to handle heterogeneous data distributions. A minimal sketch of the server-side update rule follows the parameter list.
Paper: https://arxiv.org/abs/2003.00295
- Parameters:
  - eta (float)
  - beta_1 (float)
  - beta_2 (float)
  - tau (float)
  - disable_partial_aggregation (bool)
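A minimal NumPy sketch of the FedYogi server-side update rule from the paper (illustration only, not p2pfl's implementation); the sign-controlled second-moment update is what distinguishes Yogi from Adam, and delta is assumed to be the averaged client update.

```python
import numpy as np

def fedyogi_server_step(x, delta, m, v, eta=0.01, beta_1=0.9, beta_2=0.99, tau=1e-3):
    """One FedYogi server step (Reddi et al., 2020) for a single parameter tensor."""
    m = beta_1 * m + (1.0 - beta_1) * delta
    # Yogi: additive, sign-controlled second-moment update instead of Adam's EMA
    v = v - (1.0 - beta_2) * (delta ** 2) * np.sign(v - delta ** 2)
    x = x + eta * m / (np.sqrt(v) + tau)
    return x, m, v
```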
Subpackages
Submodules
- p2pfl.learning.aggregators.aggregator module (a hedged sketch of a custom Aggregator subclass follows this module list)
  - Aggregator
    - Aggregator.SUPPORTS_PARTIAL_AGGREGATION
    - Aggregator.add_model()
    - Aggregator.aggregate()
    - Aggregator.clear()
    - Aggregator.get_aggregated_models()
    - Aggregator.get_missing_models()
    - Aggregator.get_model()
    - Aggregator.get_required_callbacks()
    - Aggregator.set_nodes_to_aggregate()
    - Aggregator.wait_and_get_aggregation()
  - NoModelsToAggregateError
- p2pfl.learning.aggregators.fedavg module
- p2pfl.learning.aggregators.fedmedian module
- p2pfl.learning.aggregators.fedprox module
- p2pfl.learning.aggregators.krum module
- p2pfl.learning.aggregators.scaffold module
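For reference, a hedged sketch of what a custom aggregator built on the Aggregator base class listed above might look like. Only the Aggregator and NoModelsToAggregateError names and the aggregate() method come from this page; the P2PFLModel accessors (get_parameters, build_copy) are assumptions, not documented here.

```python
# Hypothetical custom aggregator; the P2PFLModel accessors used are assumptions.
import numpy as np

from p2pfl.learning.aggregators.aggregator import Aggregator, NoModelsToAggregateError


class CoordinateWiseMean(Aggregator):
    """Toy aggregator: unweighted coordinate-wise mean of the received models."""

    def aggregate(self, models):
        if not models:
            raise NoModelsToAggregateError("No models were received for aggregation.")
        # Assumed accessor: each P2PFLModel exposes its tensors as a list of arrays.
        layer_sets = [m.get_parameters() for m in models]
        averaged = [np.mean(np.stack(layers), axis=0) for layers in zip(*layer_sets)]
        # Assumed accessor: build a new P2PFLModel carrying the averaged weights.
        return models[0].build_copy(params=averaged)
```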