p2pfl.learning.aggregators.fedopt package
FedOpt family of aggregators for P2PFL.
This package provides implementations of the FedOpt family of federated optimization algorithms. See individual modules for detailed documentation.
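For orientation, the following construction-only sketch instantiates each aggregator with the default hyperparameters documented in the class entries below; attaching an aggregator to a p2pfl node is outside the scope of this page and is therefore not shown.

    # Construction-only sketch; signatures are taken from the class entries below.
    from p2pfl.learning.aggregators.fedopt import FedAdagrad, FedAdam, FedYogi

    adagrad = FedAdagrad(eta=0.1, beta_1=0.9, tau=1e-09)
    adam = FedAdam(eta=0.1, beta_1=0.9, beta_2=0.99, tau=1e-09)
    yogi = FedYogi(eta=0.01, beta_1=0.9, beta_2=0.99, tau=0.001)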
- class p2pfl.learning.aggregators.fedopt.FedAdagrad(eta=0.1, beta_1=0.9, tau=1e-09, disable_partial_aggregation=False)[source]
Bases: FedOptBase
FedAdagrad - Adaptive Federated Optimization using Adagrad [Reddi et al., 2020].
FedAdagrad adapts the Adagrad optimizer to federated settings, maintaining adaptive learning rates on the server side based on accumulated squared gradients. A short sketch of this server-side update follows the parameter list below.
Paper: https://arxiv.org/abs/2003.00295
- Parameters:
  - eta (float)
  - beta_1 (float)
  - tau (float)
  - disable_partial_aggregation (bool)
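Below is a minimal NumPy sketch of the per-layer FedAdagrad server update from the paper, where delta denotes the FedAvg-style average of the client updates; it illustrates the rule this class implements and is not the library's exact code.

    import numpy as np

    def fedadagrad_step(x, delta, m, v, eta=0.1, beta_1=0.9, tau=1e-09):
        """One server round: x = weights, delta = averaged client update, m/v = server state."""
        m = beta_1 * m + (1.0 - beta_1) * delta   # server-side momentum
        v = v + delta ** 2                        # Adagrad: accumulate squared updates
        return x + eta * m / (np.sqrt(v) + tau), m, v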
- class p2pfl.learning.aggregators.fedopt.FedAdam(eta=0.1, beta_1=0.9, beta_2=0.99, tau=1e-09, disable_partial_aggregation=False)[source]
Bases: FedOptBase
FedAdam - Adaptive Federated Optimization using Adam [Reddi et al., 2020].
FedAdam adapts the Adam optimizer to federated settings, maintaining both momentum and adaptive learning rates on the server side. A short sketch of this server-side update follows the parameter list below.
Paper: https://arxiv.org/abs/2003.00295
- Parameters:
  - eta (float)
  - beta_1 (float)
  - beta_2 (float)
  - tau (float)
  - disable_partial_aggregation (bool)
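As above, a NumPy sketch of the FedAdam server update from the paper, illustrative rather than the library's exact code; the only change from FedAdagrad is the exponentially decayed second moment controlled by beta_2.

    import numpy as np

    def fedadam_step(x, delta, m, v, eta=0.1, beta_1=0.9, beta_2=0.99, tau=1e-09):
        """One server round: x = weights, delta = averaged client update, m/v = server state."""
        m = beta_1 * m + (1.0 - beta_1) * delta        # server-side momentum
        v = beta_2 * v + (1.0 - beta_2) * delta ** 2   # Adam-style second moment
        return x + eta * m / (np.sqrt(v) + tau), m, v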
- class p2pfl.learning.aggregators.fedopt.FedOptBase(eta=0.1, beta_1=0.9, tau=1e-09, disable_partial_aggregation=False)[source]
Bases: FedAvg
Base class for Federated Optimization (FedOpt) family [Reddi et al., 2020].
This class extends FedAvg to provide the functionality shared by adaptive federated optimization algorithms such as FedAdagrad, FedAdam, and FedYogi. A short usage sketch of the aggregation call follows the attribute list below.
Paper: https://arxiv.org/abs/2003.00295
- Parameters:
  - eta (float)
  - beta_1 (float)
  - tau (float)
  - disable_partial_aggregation (bool)
- SUPPORTS_PARTIAL_AGGREGATION: bool = False
- addr: str
- aggregate(models)[source]
Aggregate models using the FedOpt algorithm.
- Parameters:
  models (list[P2PFLModel]) – List of P2PFLModel objects to aggregate.
- Returns:
  A P2PFLModel with the optimized parameters.
- Return type:
  P2PFLModel
- current_weights: list[ndarray]
- m_t: list[ndarray]
- partial_aggregation: bool
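A brief usage sketch of the documented aggregate() signature is given below; local_models is a placeholder for a list of P2PFLModel objects produced elsewhere (e.g. received from peers during a training round), which this page does not cover.

    # FedOptBase subclasses are used like any other aggregator; since
    # SUPPORTS_PARTIAL_AGGREGATION is False, a full list of models is expected.
    aggregator = FedAdam(eta=0.1, beta_1=0.9, beta_2=0.99, tau=1e-09)
    new_model = aggregator.aggregate(local_models)  # returns a P2PFLModel with the optimized parameters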
- class p2pfl.learning.aggregators.fedopt.FedYogi(eta=0.01, beta_1=0.9, beta_2=0.99, tau=0.001, disable_partial_aggregation=False)[source]
Bases: FedOptBase
FedYogi - Adaptive Federated Optimization using Yogi [Reddi et al., 2020].
FedYogi adapts the Yogi optimizer to federated settings, maintaining adaptive learning rates on the server side to handle heterogeneous data distributions. A short sketch of this server-side update follows the parameter list below.
Paper: https://arxiv.org/abs/2003.00295
- Parameters:
  - eta (float)
  - beta_1 (float)
  - beta_2 (float)
  - tau (float)
  - disable_partial_aggregation (bool)
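A NumPy sketch of the FedYogi server update from the paper, again illustrative rather than the library's exact code; Yogi replaces Adam's exponential moving average of the squared update with an additive, sign-controlled correction, so the second moment changes more gradually.

    import numpy as np

    def fedyogi_step(x, delta, m, v, eta=0.01, beta_1=0.9, beta_2=0.99, tau=0.001):
        """One server round: x = weights, delta = averaged client update, m/v = server state."""
        m = beta_1 * m + (1.0 - beta_1) * delta                         # server-side momentum
        v = v - (1.0 - beta_2) * delta ** 2 * np.sign(v - delta ** 2)   # Yogi second moment
        return x + eta * m / (np.sqrt(v) + tau), m, v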