Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
import pandas as pd
import pyarrow

df = pd.read_csv("test.csv", dtype_backend="pyarrow", dtype={"col2": "string[pyarrow]"})

df
  col1  col2
0  abc     1
1  dfg     2

df.dtypes
col1    string[pyarrow]
col2    string[pyarrow]
dtype: object

# Series of col1 shows the string[pyarrow] dtype
df["col1"]
0    abc
1    dfg
Name: col1, dtype: string[pyarrow]

# Series of col2 does NOT show string[pyarrow], but the string dtype
df["col2"]
0    1
1    2
Name: col2, dtype: string

# Using ArrowDtype instead of the string alias with the dtype parameter of read_csv()
# correctly shows string[pyarrow] as the dtype of the Series for col2
df1 = pd.read_csv("test.csv", dtype_backend="pyarrow", dtype={"col2": pd.ArrowDtype(pyarrow.string())})
df1["col2"]
0    1
1    2
Name: col2, dtype: string[pyarrow]
Issue Description
When reading a CSV with dtype_backend="pyarrow" and specifying a column as "string[pyarrow]" via the dtype parameter, the resulting Series displays as dtype: string instead of the expected string[pyarrow], even though df.dtypes shows string[pyarrow]. This inconsistency only occurs when using the string alias — using pd.ArrowDtype(pyarrow.string()) correctly preserves and displays the Arrow-backed string[pyarrow] dtype in the Series.
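A minimal way to see the underlying dtype classes behind this (a sketch, assuming the same two-column test.csv as in the reproducible example above):
import pandas as pd

df = pd.read_csv("test.csv", dtype_backend="pyarrow", dtype={"col2": "string[pyarrow]"})

# col1 gets the Arrow-backed ArrowDtype from dtype_backend="pyarrow", while the
# "string[pyarrow]" alias for col2 resolves to pd.StringDtype with pyarrow storage
print(type(df["col1"].dtype).__name__)   # ArrowDtype
print(type(df["col2"].dtype).__name__)   # StringDtype
print(df["col2"].dtype.storage)          # pyarrow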
Looking at this again, I am sure this is not a bug. Sorry for causing confusion. What I got confused about boils down to this example:
ser = pd.Series(["a", "b"], dtype="string[pyarrow]")
ser.dtypes
# string[pyarrow]
ser
# 0 a
# 1 b
# dtype: string
I had expected the display output of ser to also show "string[pyarrow]" as the dtype, not string. As I learned from this comment, that is not the case. However, when specifying pyarrow as the dtype backend, e.g. pd.read_csv(..., dtype_backend='pyarrow'), the pyarrow-backed nullable ArrowDtype is used, not the string alias (which I had not realized).
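For comparison, the ArrowDtype counterpart of the snippet above (a sketch; output repr as observed on pandas 2.2.x):
import pandas as pd
import pyarrow as pa

# With pd.ArrowDtype(pa.string()) the Series repr does show "dtype: string[pyarrow]"
ser = pd.Series(["a", "b"], dtype=pd.ArrowDtype(pa.string()))
ser
# 0    a
# 1    b
# dtype: string[pyarrow]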
The string alias "string[pyarrow]" maps to pd.StringDtype("pyarrow") which is not equivalent to specifying dtype=pd.ArrowDtype(pa.string()). Generally, operations on the data will behave similarly except pd.StringDtype("pyarrow") can return NumPy-backed nullable types while pd.ArrowDtype(pa.string()) will return [ArrowDtype](https://pandas.pydata.org/docs/reference/api/pandas.ArrowDtype.html#pandas.ArrowDtype).
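A short sketch of that behavioral difference (return dtypes as observed on pandas 2.2.x; exact results can vary across versions):
import pandas as pd
import pyarrow as pa

ser_alias = pd.Series(["a", "bb"], dtype="string[pyarrow]")           # pd.StringDtype("pyarrow")
ser_arrow = pd.Series(["a", "bb"], dtype=pd.ArrowDtype(pa.string()))  # pd.ArrowDtype

# The StringDtype-backed series returns a NumPy-backed nullable integer dtype,
# while the ArrowDtype-backed series stays within Arrow types
print(ser_alias.str.len().dtype)   # Int64
print(ser_arrow.str.len().dtype)   # int64[pyarrow]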
Expected Behavior
df1["col2"]
0    1
1    2
Name: col2, dtype: string[pyarrow]
Installed Versions
INSTALLED VERSIONS
commit : 0691c5c
python : 3.11.10
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 9.2.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None