
Commit 43a606a

[DOC] Fix typos (#1290)
Found via `codespell -L nam,ans,bage,te,mapp,zar,caf,fro,som,tha,tje,yot,bu,fo,ressources,onl,regon,licens,variabl`
1 parent 6bb6a2a

14 files changed (+23, -23 lines)

examples/notebooks/Pivoting Data from Wide to Long.ipynb

Lines changed: 1 addition & 1 deletion
@@ -3564,7 +3564,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The data above requires extracting `a`, `ab` and `ac` from `1` and `2`. This is another example of a paired column. We could solve this using [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html); infact there is a very good solution from [Stack Overflow](https://stackoverflow.com/a/45124775/7175713)"
+"The data above requires extracting `a`, `ab` and `ac` from `1` and `2`. This is another example of a paired column. We could solve this using [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html); in fact there is a very good solution from [Stack Overflow](https://stackoverflow.com/a/45124775/7175713)"
 ]
 },
 {

examples/notebooks/anime.ipynb

Lines changed: 2 additions & 2 deletions
@@ -55,7 +55,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Supress user warnings when we try overwriting our custom pandas flavor functions\n",
+"# Suppress user warnings when we try overwriting our custom pandas flavor functions\n",
 "import warnings\n",
 "warnings.filterwarnings('ignore')"
 ]
@@ -1316,7 +1316,7 @@
 " :param df: A pandas DataFrame.\n",
 " :param column_name: A `str` indicating which column the split action is to be made.\n",
 " :param start: optional An `int` for the start index of the slice\n",
-" :param stop: optinal An `int` for the end index of the slice\n",
+" :param stop: optional An `int` for the end index of the slice\n",
 " :param pat: String or regular expression to split on. If not specified, split on whitespace.\n",
 "\n",
 " \"\"\"\n",

examples/notebooks/board_games.ipynb

Lines changed: 1 addition & 1 deletion
@@ -938,7 +938,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### What is the relationship between games' player numbers, reccomended minimum age, and the game's estimated length?"
+"### What is the relationship between games' player numbers, recommended minimum age, and the game's estimated length?"
 ]
 },
 {

examples/notebooks/dirty_data.ipynb

Lines changed: 1 addition & 1 deletion
@@ -708,7 +708,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"Note how now we have really nice column names! You might be wondering why I'm not modifying the two certifiation columns -- that is the next thing we'll tackle."
+"Note how now we have really nice column names! You might be wondering why I'm not modifying the two certification columns -- that is the next thing we'll tackle."
 ],
 "metadata": {}
 },

examples/notebooks/medium_franchise.ipynb

Lines changed: 4 additions & 4 deletions
@@ -15,7 +15,7 @@
 "\n",
 "* String operations with regular expressions (with `pandas-favor`)\n",
 "* Data type changes (with `pyjanitor`)\n",
-"* Split strings in cells into seperate rows (with `pandas-flavor`)\n",
+"* Split strings in cells into separate rows (with `pandas-flavor`)\n",
 "* Split strings in cells into separate columns (with `pyjanitor` + `pandas-flavor`)\n",
 "* Filter dataframe values based on substring pattern (with `pyjanitor`)\n",
 "* Column value remapping with fuzzy substring matching (with `pyjanitor` + `pandas-flavor`)\n",
@@ -66,7 +66,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Supress user warnings when we try overwriting our custom pandas flavor functions\n",
+"# Suppress user warnings when we try overwriting our custom pandas flavor functions\n",
 "import warnings\n",
 "warnings.filterwarnings('ignore')"
 ]
@@ -220,7 +220,7 @@
 "# [pandas-flavor]\n",
 "@pf.register_dataframe_method\n",
 "def str_remove(df, column_name: str, pattern: str = ''):\n",
-" \"\"\"Remove string patten from a column\n",
+" \"\"\"Remove string pattern from a column\n",
 "\n",
 " Wrapper around df.str.replace()\n",
 "\n",
@@ -595,7 +595,7 @@
 " column_name: str\n",
 " Name of the column to be operated on\n",
 " into: List[str], default to None\n",
-" New column names for the splitted columns\n",
+" New column names for the split columns\n",
 " sep: str, default to ''\n",
 " Separator at which to split the column\n",
 "\n",

janitor/functions/complete.py

Lines changed: 1 addition & 1 deletion
@@ -326,7 +326,7 @@ def _computations_complete(
 # instead of assign (which is also a for loop),
 # to cater for scenarios where the column_name is not a string
 # assign only works with keys that are strings
-# Also, the output wil be floats (for numeric types),
+# Also, the output will be floats (for numeric types),
 # even if all the columns could be integers
 # user can always convert to int if required
 for column_name, value in fill_value.items():

janitor/functions/conditional_join.py

Lines changed: 1 addition & 1 deletion
@@ -798,7 +798,7 @@ def _multiple_conditional_join_le_lt(
 # and then build the remaining indices,
 # using _generate_indices function
 # the aim of this for loop is to see if there is
-# the possiblity of a range join, and if there is,
+# the possibility of a range join, and if there is,
 # then use the optimised path
 le_lt = None
 ge_gt = None

janitor/functions/factorize_columns.py

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ def factorize_columns(
 
 This method will create a new column with the string `_enc` appended
 after the original column's name.
-This can be overriden with the suffix parameter.
+This can be overridden with the suffix parameter.
 
 Internally, this method uses pandas `factorize` method.
 It takes in an optional suffix and keyword arguments also.

mkdocs/devguide.md

Lines changed: 4 additions & 4 deletions
@@ -35,7 +35,7 @@ and mount the repository directory inside your Docker container.
 Follow best practices to submit a pull request by making a feature branch.
 Now, hack away, and submit in your pull request!
 
-You shouln't be able to access the cloned repo
+You shouldn't be able to access the cloned repo
 on your local hard drive.
 If you do want local access, then clone the repo locally first
 before selecting "Remote Containers: Open Folder In Container".
@@ -153,7 +153,7 @@ Now you can make your changes locally.
 
 ### Check your environment
 
-To ensure that your environemnt is properly set up, run the following command:
+To ensure that your environment is properly set up, run the following command:
 
 ```bash
 python -m pytest -m "not turtle"
@@ -165,7 +165,7 @@ development and you are ready to contribute 🥳.
 ### Check your code
 
 When you're done making changes, commit your staged files with a meaningful message.
-While we have automated checks that run before code is commited via pre-commit and GitHub Actions
+While we have automated checks that run before code is committed via pre-commit and GitHub Actions
 to run tests before code can be merged,
 you can still manually run the following commands to check that your changes are properly
 formatted and that all tests still pass.
@@ -188,7 +188,7 @@ To do so:
 the optional dependencies (e.g. `rdkit`) installed.
 
 !!! info
-* pre-commit **does not run** your tests locally rather all tests are run in continous integration (CI).
+* pre-commit **does not run** your tests locally rather all tests are run in continuous integration (CI).
 * All tests must pass in CI before the pull request is accepted,
 and the continuous integration system up on GitHub Actions
 will help run all of the tests before they are committed to the repository.

nbconvert_config.py

Lines changed: 2 additions & 2 deletions
@@ -269,7 +269,7 @@
 #
 # you can overwrite :meth:`preprocess_cell` to apply a transformation
 # independently on each cell or :meth:`preprocess` if you prefer your own logic.
-# See corresponding docstring for informations.
+# See corresponding docstring for information.
 #
 # Disabled by default and can be enabled via the config by
 # 'c.YourPreprocessorName.enabled = True'
@@ -430,7 +430,7 @@
 # DebugWriter(WriterBase) configuration
 # ------------------------------------------------------------------------------
 
-## Consumes output from nbconvert export...() methods and writes usefull
+## Consumes output from nbconvert export...() methods and writes useful
 # debugging information to the stdout. The information includes a list of
 # resources that were extracted from the notebook(s) during export.
 

tests/functions/test_coalesce.py

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ def test_coalesce_without_target(df):
 
 @pytest.mark.functions
 def test_coalesce_without_delete():
-"""Test ouptut if nulls remain and `default_value` is provided."""
+"""Test output if nulls remain and `default_value` is provided."""
 df = pd.DataFrame(
 {"s1": [np.nan, np.nan, 6, 9, 9], "s2": [np.nan, 8, 7, 9, 9]}
 )

tests/functions/test_pivot_wider.py

Lines changed: 1 addition & 1 deletion
@@ -217,7 +217,7 @@ def test_pivot_long_wide_long():
 assert_frame_equal(result, df_in)
 
 
-@pytest.mark.xfail(reason="doesnt match, since pivot implicitly sorts")
+@pytest.mark.xfail(reason="doesn't match, since pivot implicitly sorts")
 def test_pivot_wide_long_wide():
 """
 Test that transformation from pivot_longer to wider and

tests/functions/test_select_columns.py

Lines changed: 1 addition & 1 deletion
@@ -382,7 +382,7 @@ def test_callable_length(numbers):
 
 @pytest.mark.functions
 def test_callable_dtype(dataframe):
-"""Test output when selecting columnns based on dtype"""
+"""Test output when selecting columns based on dtype"""
 expected = dataframe.select_dtypes("number")
 actual = dataframe.select_columns(is_numeric_dtype)
 assert_frame_equal(expected, actual)

tests/functions/test_sort_column_value_order.py

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 Below, company_sales and company_sales_2 are both dfs.
 
 company_sales_2 is inverted, April is the first month
-where in comapny_sales Jan is the first month
+where in company_sales Jan is the first month
 
 The values found in each row are the same
 company_sales's Jan row contains the
@@ -14,7 +14,7 @@
 
 Test 3 asserts that company_sales_2 and
 company_sales with columns sorted
-will become equivilent, meaning
+will become equivalent, meaning
 the columns have been successfully ordered.
 """
 
