Hi, thanks for providing this wonderful repository! I'm wondering whether there will be support for parallelizing client training in each round, specifically, running the local updates in federated_main.py in parallel processes:
```python
for idx in idxs_users:
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=logger)
    w, loss = local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)
    local_weights.append(copy.deepcopy(w))
    local_losses.append(copy.deepcopy(loss))
```
Alternatively, do you have suggestions for how to start working on this?
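Since each client's update in the loop above is independent of the others, one way to start is to fan the iterations out over worker processes and collect the `(w, loss)` pairs afterwards. Below is a minimal, self-contained sketch of that pattern using `concurrent.futures.ProcessPoolExecutor`. It does not use the repo's actual `LocalUpdate` class; the `local_update`, `average_weights`, and `run_round` functions and the toy one-parameter model are hypothetical stand-ins for illustration.

```python
import copy
from concurrent.futures import ProcessPoolExecutor


def local_update(job):
    """Hypothetical stand-in for LocalUpdate.update_weights:
    nudges the global weights toward this client's data mean."""
    global_weights, client_data = job
    w = copy.deepcopy(global_weights)          # mirrors copy.deepcopy(global_model)
    mean = sum(client_data) / len(client_data)
    w["w0"] += 0.1 * (mean - w["w0"])          # one toy local training step
    loss = abs(mean - w["w0"])
    return w, loss


def average_weights(weight_list):
    """FedAvg-style aggregation: element-wise mean of client weights."""
    avg = copy.deepcopy(weight_list[0])
    for key in avg:
        avg[key] = sum(w[key] for w in weight_list) / len(weight_list)
    return avg


def run_round(global_weights, clients, max_workers=4):
    """Run every selected client's local update in its own process,
    then aggregate, replacing the sequential `for idx in idxs_users` loop."""
    jobs = [(global_weights, data) for data in clients]
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(local_update, jobs))
    local_weights = [w for w, _ in results]
    local_losses = [loss for _, loss in results]
    return average_weights(local_weights), local_losses


if __name__ == "__main__":
    # Three hypothetical clients, each with its own local data shard.
    clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
    new_global, losses = run_round({"w0": 0.0}, clients)
    print(new_global, losses)
```

A caveat if you adapt this to the real training loop: CUDA models do not survive `fork`-based process pools, so for GPU training you would need `torch.multiprocessing` with the `spawn` start method (and picklable arguments), and each worker pays the cost of serializing the deep-copied global model. Whether parallel processes actually help therefore depends on whether clients run on CPU or share a single GPU.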