Add a new experimental restart policy for large scale model training #922

Merged: 1 commit, Jun 18, 2024
8 changes: 7 additions & 1 deletion torchx/specs/api.py
@@ -237,11 +237,15 @@ class RetryPolicy(str, Enum):
application to deal with failed replica departures and
replacement replica admittance.
2. APPLICATION: Restarts the entire application.

3. HOT_SPARE: Restarts the replicas for a role as long as the quorum
(min_replicas) is not violated, using the extra hosts as hot spares.
It does not truly support elasticity; it simply treats the delta
between num_replicas and min_replicas as spare capacity (EXPERIMENTAL).
"""

REPLICA = "REPLICA"
APPLICATION = "APPLICATION"
HOT_SPARE = "HOT_SPARE"


class MountType(str, Enum):
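The HOT_SPARE arithmetic described in the docstring above can be sketched in plain Python. This is a hypothetical helper, not part of the torchx API: min_replicas is the quorum the job needs to keep running, and the delta between num_replicas and min_replicas is the spare pool.

```python
def hot_spare_decision(num_replicas: int, min_replicas: int, healthy: int) -> str:
    """Hypothetical sketch of the HOT_SPARE policy's decision logic.

    Not part of torchx; illustrates the semantics described above.
    """
    spares = num_replicas - min_replicas  # extra hosts kept as hot spares
    failed = num_replicas - healthy
    if failed <= 0:
        return "running"            # full capacity, nothing to do
    if failed <= spares:
        return "restart_replicas"   # quorum (min_replicas) still intact
    return "fail_application"       # quorum violated: the job cannot continue
```

With 10 replicas and a quorum of 8 there are 2 spares: losing up to 2 replicas triggers replica restarts, while losing a third fails the job.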
@@ -340,6 +344,8 @@ class Role:
and num_replicas depending on the cluster resources and
policies. If the scheduler doesn't support auto scaling this
field is ignored and the job size will be num_replicas.
EXPERIMENTAL: for the HOT_SPARE restart policy this field
indicates the quorum required for the job to keep running.
max_retries: max number of retries before giving up
retry_policy: retry behavior upon replica failures
resource: Resource requirement for the role. The role should be scheduled
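How the new fields fit together can be illustrated with a simplified stand-in for the Role spec. This is a sketch only: `TrainerRole` is a hypothetical dataclass mirroring just the fields touched by this PR, not the real torchx.specs.Role, which has many more fields.

```python
from dataclasses import dataclass
from enum import Enum


class RetryPolicy(str, Enum):
    REPLICA = "REPLICA"
    APPLICATION = "APPLICATION"
    HOT_SPARE = "HOT_SPARE"


@dataclass
class TrainerRole:
    """Simplified, hypothetical stand-in for torchx.specs.Role."""
    name: str
    num_replicas: int
    min_replicas: int          # with HOT_SPARE: quorum needed to keep running
    retry_policy: RetryPolicy
    max_retries: int = 0


# Ask for 16 hosts but tolerate running on 14: the 2 extras act as hot spares.
role = TrainerRole(
    name="trainer",
    num_replicas=16,
    min_replicas=14,
    retry_policy=RetryPolicy.HOT_SPARE,
    max_retries=3,
)
```

Because the spare count is derived rather than configured, choosing min_replicas is the whole tuning knob: a larger gap to num_replicas buys more failure tolerance at the cost of idle capacity.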