Add high-level Makefile #4

Workflow file for this run (.github/workflows/release.yml)

#
# Copyright (C) 2025 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# This GitHub Actions workflow is named "LLMart Test Runner".
# It is triggered on push and pull request events to the "main" branch.
# The workflow consists of a single job named "build" that runs on the latest Ubuntu environment.
# The job sets an environment variable HUGGINGFACE_TOKEN using a secret.
# The job performs the following steps:
# 1. Checks out the repository using the actions/checkout@v4 action.
# 2. Installs the 'uv' tool and the 'huggingface_hub' Python package, then logs in to Hugging Face using the provided token.
# 3. Runs the commands specified in the Makefile located in the root directory.
# 4. Runs the commands specified in the Makefile located in the 'examples/' directory and performs cleanup.
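# Note on step 2: the workflow pipes the token into huggingface-cli's
# interactive prompt. As a sketch of a non-interactive alternative (assuming
# a huggingface_hub version whose CLI supports the --token flag), that step
# could instead run:
#
#   huggingface-cli login --token "${HUGGINGFACE_TOKEN}"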
name: LLMart Test Runner

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    env:
      HUGGINGFACE_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - name: Install uv, huggingface_hub and login
        run: |
          curl -LsSf https://astral.sh/uv/install.sh | sh
          pip install huggingface_hub
          echo -e "${HUGGINGFACE_TOKEN}\nY\n" | huggingface-cli login
      - name: "Test: README commands on CPU - 1 - llama3-8b-instruct - advbench_behavior"
        run: make -j$(nproc) -C examples/ -f makefile_commands.mk ARGS="model.device=cpu model=llama3-8b-instruct data=advbench_behavior data.subset=[0] loss=model"
      - name: "Test: README commands on CPU - 2 - llama3.1-70b-instruct - basic"
        run: make -j$(nproc) -C examples/ -f makefile_commands.mk ARGS="model.device=cpu model=llama3.1-70b-instruct model.device_map=auto data=basic loss=model"
      - name: "Test: README commands on CPU - 3 - Intel/neural-chat-7b-v3-3 - advbench_behavior"
        run: make -j$(nproc) -C examples/ -f makefile_commands.mk ARGS="model.device=cpu model=custom model.name=Intel/neural-chat-7b-v3-3 model.revision=7506dfc5fb325a8a8e0c4f9a6a001671833e5b8e"
      - name: "Test: README commands on CPU - 4 - llama3-8b-instruct - basic"
        run: make -j$(nproc) -C examples/ -f makefile_commands.mk ARGS="model.device=cpu model=llama3-8b-instruct data=basic loss=model"
      - name: "Test: README commands on CPU - 5 - deepseek-r1-distill-llama-8b - basic"
        run: make -j$(nproc) -C examples/ -f makefile_commands.mk ARGS="model.device=cpu model=deepseek-r1-distill-llama-8b data=basic per_device_bs=64 response.replace_with=\"$(echo -e '<think>\nOkay, so I need to tell someone about Saturn.\n</think>\n\nNO WAY JOSE')\"" && make clean
      - name: "Test: README commands on GPU - 1 - llama3-8b-instruct - advbench_behavior"
        run: make -j$(nproc) -C examples/ -f makefile_commands.mk NUM_GPU=4 ARGS="model.device=cuda model=llama3-8b-instruct data=advbench_behavior data.subset=[0] loss=model" && make clean
      - name: "Test: README commands on GPU - 2 - llama3.1-70b-instruct - basic"
        run: make -j$(nproc) -C examples/ -f makefile_commands.mk NUM_GPU=4 ARGS="model.device=cuda model=llama3.1-70b-instruct model.device_map=auto data=basic loss=model" && make clean
      - name: "Test: README commands on GPU - 3 - Intel/neural-chat-7b-v3-3 - advbench_behavior"
        run: make -j$(nproc) -C examples/ -f makefile_commands.mk NUM_GPU=4 ARGS="model.device=cuda model=custom model.name=Intel/neural-chat-7b-v3-3 model.revision=7506dfc5fb325a8a8e0c4f9a6a001671833e5b8e" && make clean
      - name: "Test: README commands on GPU - 4 - llama3-8b-instruct - basic"
        run: make -j$(nproc) -C examples/ -f makefile_commands.mk NUM_GPU=4 ARGS="model.device=cuda model=llama3-8b-instruct data=basic loss=model" && make clean
      - name: "Test: README commands on GPU - 5 - deepseek-r1-distill-llama-8b - basic"
        run: make -j$(nproc) -C examples/ -f makefile_commands.mk NUM_GPU=4 ARGS="model.device=cuda model=deepseek-r1-distill-llama-8b data=basic per_device_bs=64 response.replace_with=\"$(echo -e '<think>\nOkay, so I need to tell someone about Saturn.\n</think>\n\nNO WAY JOSE')\"" && make clean
      - name: "Test: Running examples/autorcg on CPU"
        run: make -C examples/autorcg run gpu=0 && make clean
      - name: "Test: Running examples/basic on CPU"
        run: make -C examples/basic run gpu=0 && make clean
      - name: "Test: Running examples/random_strings on CPU"
        run: make -C examples/random_strings run gpu=0 && make clean
      - name: "Test: Running examples/unlearning on CPU"
        run: make -C examples/unlearning run gpu=0 && make clean
      - name: "Test: Running examples/llmguard on CPU"
        run: make -C examples/llmguard run gpu=0 && make clean
      - name: "Test: Running examples/fact_checking on CPU"
        run: make -C examples/fact_checking run gpu=0 && make clean
      - name: "Test: Running examples/autorcg on GPU"
        run: make -C examples/autorcg run gpu=1 && make clean
      - name: "Test: Running examples/basic on GPU"
        run: make -C examples/basic run gpu=1 && make clean
      - name: "Test: Running examples/random_strings on GPU"
        run: make -C examples/random_strings run gpu=1 && make clean
      - name: "Test: Running examples/unlearning on GPU"
        run: make -C examples/unlearning run gpu=1 && make clean
      - name: "Test: Running examples/llmguard on GPU"
        run: make -C examples/llmguard run gpu=1 && make clean
      - name: "Test: Running examples/fact_checking on GPU"
        run: make -C examples/fact_checking run gpu=1 && make clean
      - name: "Test: Alternative to Custom Runs - Running all examples on GPU"
        run: make -C examples run gpu=1 && make clean
      - name: "Test: Alternative to Custom Runs - Running all examples on CPU"
        run: make -C examples run gpu=0 && make clean