
Commit 48bd72c

BioGeek and rasbt authored

fix typos, add codespell pre-commit hook (rasbt#264)

* fix typos, add codespell pre-commit hook
* Update .pre-commit-config.yaml

Co-authored-by: Sebastian Raschka <[email protected]>

1 parent 6ffd628 · commit 48bd72c

File tree

.pre-commit-config.yaml
ch03/02_bonus_efficient-multihead-attention/mha-implementations.ipynb
ch04/01_main-chapter-code/ch04.ipynb
ch06/02_bonus_additional-experiments/additional-experiments.py
ch07/02_dataset-utilities/create-passive-voice-entries.ipynb

5 files changed: +21 -4 lines changed

.pre-commit-config.yaml

Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
+# A tool used by developers to identify spelling errors in text.
+# Readers may ignore this file.
+
+default_stages: [commit]
+
+repos:
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.3.0
+    hooks:
+      - id: codespell
+        name: codespell
+        description: Check for spelling errors in text.
+        entry: codespell
+        language: python
+        args:
+          - "-L ocassion,occassion,ot,te,tje"
+        files: \.txt$|\.md$|\.py|\.ipynb$
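This new configuration registers codespell as a pre-commit hook; the `-L` option lists words the checker should ignore, and `files` restricts the check to text, Markdown, Python, and notebook files. For readers who want to run the same check by hand, below is a minimal sketch, assuming codespell is installed (`pip install codespell`) and using one of the repository directories from this commit purely as an example path:

# Minimal sketch of running the same check the hook performs.
# Assumes the codespell package is installed; the path below is only an example.
import subprocess

result = subprocess.run(
    ["codespell", "-L", "ocassion,occassion,ot,te,tje",  # same ignore list as the hook
     "ch04/01_main-chapter-code"],
    capture_output=True,
    text=True,
)
print(result.stdout)                     # reported misspellings, one per line
print("exit code:", result.returncode)   # non-zero when misspellings are found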

ch03/02_bonus_efficient-multihead-attention/mha-implementations.ipynb

Lines changed: 1 addition & 1 deletion
@@ -317,7 +317,7 @@
     "id": "f78e346f-3b85-44e6-9feb-f01131381148"
    },
    "source": [
-    "- The implementation below uses PyTorch's [`scaled_dot_product_attention`](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) function, which implements a memory-optimized version of self-attention calld [flash attention](https://arxiv.org/abs/2205.14135)"
+    "- The implementation below uses PyTorch's [`scaled_dot_product_attention`](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) function, which implements a memory-optimized version of self-attention called [flash attention](https://arxiv.org/abs/2205.14135)"
    ]
   },
   {
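The corrected sentence refers to PyTorch's fused attention function. As a quick, self-contained illustration of the API that sentence links to, here is a minimal sketch; the tensor shapes are arbitrary example values, not taken from the notebook:

import torch
import torch.nn.functional as F

# Example shapes: (batch, num_heads, seq_len, head_dim); the sizes are arbitrary.
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

# PyTorch dispatches to a memory-efficient / flash-attention kernel when one is available.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 128, 64])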

ch04/01_main-chapter-code/ch04.ipynb

Lines changed: 1 addition & 1 deletion
@@ -1043,7 +1043,7 @@
     "id": "dec7d03d-9ff3-4ca3-ad67-01b67c2f5457",
     "metadata": {},
     "source": [
-    "- We are almost there: now let's plug in the transformer block into the architecture we coded at the very beginning of this chapter so that we obtain a useable GPT architecture\n",
+    "- We are almost there: now let's plug in the transformer block into the architecture we coded at the very beginning of this chapter so that we obtain a usable GPT architecture\n",
     "- Note that the transformer block is repeated multiple times; in the case of the smallest 124M GPT-2 model, we repeat it 12 times:"
    ]
   },
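The second bullet in this cell notes that the transformer block is repeated 12 times in the smallest (124M) GPT-2 model. Below is a toy sketch of that stacking; DummyTransformerBlock is a hypothetical stand-in for the chapter's actual block and only illustrates the repetition:

import torch
import torch.nn as nn

class DummyTransformerBlock(nn.Module):
    # Hypothetical stand-in for the chapter's TransformerBlock.
    def __init__(self, emb_dim):
        super().__init__()
        self.norm = nn.LayerNorm(emb_dim)
        self.ff = nn.Linear(emb_dim, emb_dim)

    def forward(self, x):
        return x + self.ff(self.norm(x))  # residual connection around a toy sub-layer

emb_dim, n_layers = 768, 12  # 124M GPT-2: 768-dim embeddings, 12 repeated blocks
blocks = nn.Sequential(*[DummyTransformerBlock(emb_dim) for _ in range(n_layers)])

x = torch.randn(2, 10, emb_dim)  # (batch, tokens, embedding dim)
print(blocks(x).shape)           # torch.Size([2, 10, 768])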

ch06/02_bonus_additional-experiments/additional-experiments.py

Lines changed: 1 addition & 1 deletion
@@ -370,7 +370,7 @@ def replace_linear_with_lora(model, rank, alpha):
     action='store_true',
     default=False,
     help=(
-        "Disable padding, which means each example may have a different lenght."
+        "Disable padding, which means each example may have a different length."
         " This requires setting `--batch_size 1`."
     )
 )
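The corrected help text describes a store_true flag that disables padding and requires `--batch_size 1`. A small, hypothetical argparse sketch of such a flag and its constraint is shown below; the flag name `--no_padding` is an assumption, and the actual script may wire this up differently:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=8)
parser.add_argument(
    "--no_padding",          # assumed flag name, for illustration only
    action="store_true",
    default=False,
    help=(
        "Disable padding, which means each example may have a different length."
        " This requires setting `--batch_size 1`."
    ),
)
args = parser.parse_args()

# Enforce the constraint stated in the help text.
if args.no_padding and args.batch_size != 1:
    parser.error("--no_padding requires --batch_size 1")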

ch07/02_dataset-utilities/create-passive-voice-entries.ipynb

Lines changed: 1 addition & 1 deletion
@@ -166,7 +166,7 @@
     " return response.choices[0].message.content\n",
     "\n",
     "\n",
-    "# Prepare intput\n",
+    "# Prepare input\n",
     "sentence = \"I ate breakfast\"\n",
     "prompt = f\"Convert the following sentence to passive voice: '{sentence}'\"\n",
     "run_chatgpt(prompt, client)"
