diff --git a/.canvas b/.canvas deleted file mode 100644 index 9c0f654da..000000000 --- a/.canvas +++ /dev/null @@ -1,10 +0,0 @@ ---- -:lessons: -- :id: 255345 - :course_id: 6130 - :canvas_url: https://learning.flatironschool.com/courses/6130/pages/phase-4-full-stack-application-project-template - :type: page -- :id: 297405 - :course_id: 7560 - :canvas_url: https://learning.flatironschool.com/courses/7560/pages/phase-4-full-stack-application-project-template - :type: page diff --git a/.github/workflows/canvas-sync-ruby-update.yml b/.github/workflows/canvas-sync-ruby-update.yml deleted file mode 100644 index f8818dc0c..000000000 --- a/.github/workflows/canvas-sync-ruby-update.yml +++ /dev/null @@ -1,31 +0,0 @@ -name: Sync with Canvas Ruby v2.7 - -on: - push: - branches: [master, main] - paths: - - 'README.md' - -jobs: - sync: - name: Sync with Canvas - - runs-on: ubuntu-latest - - steps: - - uses: actions/checkout@v2 - - - name: Set up Ruby - uses: ruby/setup-ruby@v1 - with: - ruby-version: 2.7 - - - name: Install github-to-canvas - run: gem install github-to-canvas - - # Secret stored in learn-co-curriculum Settings/Secrets - - name: Sync from .canvas file - run: github-to-canvas -a -lr - env: - CANVAS_API_KEY: ${{ secrets.CANVAS_API_KEY }} - CANVAS_API_PATH: ${{ secrets.CANVAS_API_PATH }} diff --git a/Pipfile b/Pipfile deleted file mode 100644 index 9c0ba76e7..000000000 --- a/Pipfile +++ /dev/null @@ -1,18 +0,0 @@ -[[source]] -url = "https://pypi.org/simple" -verify_ssl = true -name = "pypi" - -[packages] -ipdb = "0.13.9" -flask = "*" -flask-sqlalchemy = "3.0.3" -Werkzeug = "2.2.2" -flask-migrate = "*" -sqlalchemy-serializer = "*" -flask-restful = "*" -flask-cors = "*" -faker = "*" - -[requires] -python_full_version = "3.8.13" diff --git a/README.md b/README.md deleted file mode 100644 index c71c9b0f3..000000000 --- a/README.md +++ /dev/null @@ -1,357 +0,0 @@ -# Phase 4 Full-Stack Application Project Template - -## Learning Goals - -- Discuss the basic directory structure of a full-stack Flask/React application. -- Carry out the first steps in creating your Phase 4 project. - ---- - -## Introduction - -Fork and clone this lesson for a template for your full-stack application. Take -a look at the directory structure before we begin (NOTE: node_modules will be -generated in a subsequent step): - -```console -$ tree -L 2 -$ # the -L argument limits the depth at which we look into the directory structure -. -├── CONTRIBUTING.md -├── LICENSE.md -├── Pipfile -├── README.md -├── client -│ ├── README.md -│ ├── package.json -│ ├── public -│ └── src -└── server - ├── app.py - ├── config.py - ├── models.py - └── seed.py -``` - -A `migrations` folder will be added to the `server` directory in a later step. - -The `client` folder contains a basic React application, while the `server` -folder contains a basic Flask application. You will adapt both folders to -implement the code for your project . - -NOTE: If you did not previously install `tree` in your environment setup, MacOS -users can install this with the command `brew install tree`. WSL and Linux users -can run `sudo apt-get install tree` to download it as well. - -## Where Do I Start? - -Just as with your Phase 3 Project, this will likely be one of the biggest -projects you've undertaken so far. Your first task should be creating a Git -repository to keep track of your work and roll back any undesired changes. 
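As a quick sketch of the roll-back workflow a repository gives you (the commit hash below is a placeholder; the full setup steps follow in the next sections):

```console
$ git log --oneline   # list commits, newest first
$ git revert abc1234  # undo the changes introduced by commit abc1234
```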
- -### Removing Existing Git Configuration - -If you're using this template, start off by removing the existing metadata for -Github and Canvas. Run the following command to carry this out: - -```console -$ rm -rf .git .canvas -``` - -The `rm` command removes files from your computer's memory. The `-r` flag tells -the console to remove _recursively_, which allows the command to remove -directories and the files within them. `-f` removes them permanently. - -`.git` contains this directory's configuration to track changes and push to -Github (you want to track and push _your own_ changes instead), and `.canvas` -contains the metadata to create a Canvas page from your Git repo. You don't have -the permissions to edit our Canvas course, so it's not worth keeping around. - -### Creating Your Own Git Repo - -First things first- rename this directory! Once you have an idea for a name, -move one level up with `cd ..` and run -`mv python-p4-project-template ` to change its name (replace - with an appropriate project directory name). - -> **Note: If you typed the `mv` command in a terminal within VS Code, you should -> close VS Code then reopen it.** - -> **Note: `mv` actually stands for "move", but your computer interprets this -> rename as a move from a directory with the old name to a directory with a new -> name.** - -`cd` back into your new directory and run `git init` to create a local git -repository. Add all of your local files to version control with `git add --all`, -then commit them with `git commit -m'initial commit'`. (You can change the -message here- this one is just a common choice.) - -Navigate to [GitHub](https://github.com). In the upper-right corner of the page, -click on the "+" dropdown menu, then select "New repository". Enter the name of -your local repo, choose whether you would like it to be public or private, make -sure "Initialize this repository with a README" is unchecked (you already have -one), then click "Create repository". - -Head back to the command line and enter -`git remote add origin git@github.com:github-username/new-repository-name.git`. -NOTE: Replace `github-username` with your github username, and -`new-repository-name` with the name of your new repository. This command will -map the remote repository to your local repository. Finally, push your first -commit with `git push -u origin main`. - -Your project is now version-controlled locally and online. This will allow you -to create different versions of your project and pick up your work on a -different machine if the need arises. - ---- - -## Setup - -### `server/` - -The `server/` directory contains all of your backend code. - -`app.py` is your Flask application. You'll want to use Flask to build a simple -API backend like we have in previous modules. You should use Flask-RESTful for -your routes. You should be familiar with `models.py` and `seed.py` by now, but -remember that you will need to use Flask-SQLAlchemy, Flask-Migrate, and -SQLAlchemy-Serializer instead of SQLAlchemy and Alembic in your models. - -The project contains a default `Pipfile` with some basic dependencies. You may -adapt the `Pipfile` if there are additional dependencies you want to add for -your project. - -To download the dependencies for the backend server, run: - -```console -pipenv install -pipenv shell -``` - -You can run your Flask API on [`localhost:5555`](http://localhost:5555) by -running: - -```console -python server/app.py -``` - -Check that your server serves the default route `http://localhost:5555`. 
You -should see a web page with the heading "Project Server". - -### `client/` - -The `client/` directory contains all of your frontend code. The file -`package.json` has been configured with common React application dependencies, -include `react-router-dom`. The file also sets the `proxy` field to forward -requests to `"http://localhost:5555". Feel free to change this to another port- -just remember to configure your Flask app to use another port as well! - -To download the dependencies for the frontend client, run: - -```console -npm install --prefix client -``` - -You can run your React app on [`localhost:3000`](http://localhost:3000) by -running: - -```sh -npm start --prefix client -``` - -Check that your the React client displays a default page -`http://localhost:3000`. You should see a web page with the heading "Project -Client". - -## Generating Your Database - -NOTE: The initial project directory structure does not contain the `instance` or -`migrations` folders. Change into the `server` directory: - -```console -cd server -``` - -Then enter the commands to create the `instance` and `migrations` folders and -the database `app.db` file: - -``` -flask db init -flask db upgrade head -``` - -Type `tree -L 2` within the `server` folder to confirm the new directory -structure: - -```console -. -├── app.py -├── config.py -├── instance -│ └── app.db -├── migrations -│ ├── README -│ ├── __pycache__ -│ ├── alembic.ini -│ ├── env.py -│ ├── script.py.mako -│ └── versions -├── models.py -└── seed.py -``` - -Edit `models.py` and start creating your models. Import your models as needed in -other modules, i.e. `from models import ...`. - -Remember to regularly run -`flask db revision --autogenerate -m''`, replacing -`` with an appropriate message, and `flask db upgrade head` -to track your modifications to the database and create checkpoints in case you -ever need to roll those modifications back. - -> **Tip: It's always a good idea to start with an empty revision! This allows -> you to roll all the way back while still holding onto your database. You can -> create this empty revision with `flask db revision -m'Create DB'`.** - -If you want to seed your database, now would be a great time to write out your -`seed.py` script and run it to generate some test data. Faker has been included -in the Pipfile if you'd like to use that library. - ---- - -#### `config.py` - -When developing a large Python application, you might run into a common issue: -_circular imports_. A circular import occurs when two modules import from one -another, such as `app.py` and `models.py`. When you create a circular import and -attempt to run your app, you'll see the following error: - -```console -ImportError: cannot import name -``` - -If you're going to need an object in multiple modules like `app` or `db`, -creating a _third_ module to instantiate these objects can save you a great deal -of circular grief. 
Here's a good start to a Flask config file (you may need more -if you intend to include features like authentication and passwords): - -```py -# Standard library imports - -# Remote library imports -from flask import Flask -from flask_cors import CORS -from flask_migrate import Migrate -from flask_restful import Api -from flask_sqlalchemy import SQLAlchemy -from sqlalchemy import MetaData - -# Local imports - -# Instantiate app, set attributes -app = Flask(__name__) -app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db' -app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False -app.json.compact = False - -# Define metadata, instantiate db -metadata = MetaData(naming_convention={ - "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s", -}) -db = SQLAlchemy(metadata=metadata) -migrate = Migrate(app, db) -db.init_app(app) - -# Instantiate REST API -api = Api(app) - -# Instantiate CORS -CORS(app) - -``` - -Now let's review that last line... - -#### CORS - -CORS (Cross-Origin Resource Sharing) is a system that uses HTTP headers to -determine whether resources from different servers-of-origin can be accessed. If -you're using the fetch API to connect your frontend to your Flask backend, you -need to configure CORS on your Flask application instance. Lucky for us, that -only takes one line: - -```py -CORS(app) - -``` - -By default, Flask-CORS enables CORS on all routes in your application with all -fetching servers. You can also specify the resources that allow CORS. The -following specifies that routes beginning with `api/` allow CORS from any -originating server: - -```py -CORS(app, resources={r"/api/*": {"origins": "*"}}) - -``` - -You can also set this up resource-by-resource by importing and using the -`@cross_origin` decorator: - -```py -@app.route("/") -@cross_origin() -def howdy(): - return "Howdy partner!" - -``` - ---- - -## Updating Your README.md - -`README.md` is a Markdown file that describes your project. These files can be -used in many different ways- you may have noticed that we use them to generate -entire Canvas lessons- but they're most commonly used as homepages for online -Git repositories. **When you develop something that you want other people to -use, you need to have a README.** - -Markdown is not a language that we cover in Flatiron's Software Engineering -curriculum, but it's not a particularly difficult language to learn (if you've -ever left a comment on Reddit, you might already know the basics). Refer to the -cheat sheet in this lesson's resources for a basic guide to Markdown. - -### What Goes into a README? - -This README should serve as a template for your own- go through the important -files in your project and describe what they do. Each file that you edit (you -can ignore your migration files) should get at least a paragraph. Each function -should get a small blurb. - -You should descibe your application first, and with a good level of detail. The -rest should be ordered by importance to the user. (Probably routes next, then -models.) - -Screenshots and links to resources that you used throughout are also useful to -users and collaborators, but a little more syntactically complicated. Only add -these in if you're feeling comfortable with Markdown. - ---- - -## Conclusion - -A lot of work goes into a full-stack application, but it all relies on concepts -that you've practiced thoroughly throughout this phase. Hopefully this template -and guide will get you off to a good start with your Phase 4 Project. - -Happy coding! 
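As a final sanity check once both servers are up, you can verify that Flask-CORS is answering cross-origin requests. A minimal sketch, assuming the defaults used throughout this template (API on port 5555, React client on port 3000):

```console
$ curl -i -H "Origin: http://localhost:3000" http://localhost:5555/
```

The response headers should include an `Access-Control-Allow-Origin` entry; if they don't, revisit the `CORS(app)` line in `config.py`.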
- ---- - -## Resources - -- [Setting up a respository - Atlassian](https://www.atlassian.com/git/tutorials/setting-up-a-repository) -- [Create a repo- GitHub Docs](https://docs.github.com/en/get-started/quickstart/create-a-repo) -- [Markdown Cheat Sheet](https://www.markdownguide.org/cheat-sheet/) -- [Python Circular Imports - StackAbuse](https://stackabuse.com/python-circular-imports/) -- [Flask-CORS](https://flask-cors.readthedocs.io/en/latest/) diff --git a/client/src/components/App.js b/client/src/components/App.js deleted file mode 100644 index af444c5b1..000000000 --- a/client/src/components/App.js +++ /dev/null @@ -1,8 +0,0 @@ -import React, { useEffect, useState } from "react"; -import { Switch, Route } from "react-router-dom"; - -function App() { - return

<h1>Project Client</h1>

; -} - -export default App; diff --git a/client/src/index.css b/client/src/index.css deleted file mode 100644 index b457c3223..000000000 --- a/client/src/index.css +++ /dev/null @@ -1,7 +0,0 @@ -:root { - --bg: #282c34; - --header: #fff; - --link: #61dafb; - --text: hsla(0, 0%, 100%, 0.88); -} - diff --git a/client/src/index.js b/client/src/index.js deleted file mode 100644 index be98dc77e..000000000 --- a/client/src/index.js +++ /dev/null @@ -1,8 +0,0 @@ -import React from "react"; -import App from "./components/App"; -import "./index.css"; -import { createRoot } from "react-dom/client"; - -const container = document.getElementById("root"); -const root = createRoot(container); -root.render(); diff --git a/server/app.py b/server/app.py deleted file mode 100644 index d7f4c5b4c..000000000 --- a/server/app.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python3 - -# Standard library imports - -# Remote library imports -from flask import request -from flask_restful import Resource - -# Local imports -from config import app, db, api -# Add your model imports - - -# Views go here! - -@app.route('/') -def index(): - return '

<h1>Project Server</h1>

' - - -if __name__ == '__main__': - app.run(port=5555, debug=True) - diff --git a/server/config.py b/server/config.py deleted file mode 100644 index b0603b320..000000000 --- a/server/config.py +++ /dev/null @@ -1,31 +0,0 @@ -# Standard library imports - -# Remote library imports -from flask import Flask -from flask_cors import CORS -from flask_migrate import Migrate -from flask_restful import Api -from flask_sqlalchemy import SQLAlchemy -from sqlalchemy import MetaData - -# Local imports - -# Instantiate app, set attributes -app = Flask(__name__) -app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db' -app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False -app.json.compact = False - -# Define metadata, instantiate db -metadata = MetaData(naming_convention={ - "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s", -}) -db = SQLAlchemy(metadata=metadata) -migrate = Migrate(app, db) -db.init_app(app) - -# Instantiate REST API -api = Api(app) - -# Instantiate CORS -CORS(app) diff --git a/server/models.py b/server/models.py deleted file mode 100644 index c34c5001f..000000000 --- a/server/models.py +++ /dev/null @@ -1,6 +0,0 @@ -from sqlalchemy_serializer import SerializerMixin -from sqlalchemy.ext.associationproxy import association_proxy - -from config import db - -# Models go here! diff --git a/server/seed.py b/server/seed.py deleted file mode 100644 index 849d148ad..000000000 --- a/server/seed.py +++ /dev/null @@ -1,17 +0,0 @@ -#!/usr/bin/env python3 - -# Standard library imports -from random import randint, choice as rc - -# Remote library imports -from faker import Faker - -# Local imports -from app import app -from models import db - -if __name__ == '__main__': - fake = Faker() - with app.app_context(): - print("Starting seed...") - # Seed code goes here! diff --git a/staybuddy/README_BACKEND.md b/staybuddy/README_BACKEND.md new file mode 100644 index 000000000..4c301af43 --- /dev/null +++ b/staybuddy/README_BACKEND.md @@ -0,0 +1,38 @@ +# Backend Server Setup + +The frontend is configured to connect to a Flask backend server running on `http://localhost:5000`. + +## To start the backend server: + +1. **Navigate to the server directory:** + + ```bash + cd staybuddy/server + ``` + +2. **Install Python dependencies:** + + ```bash + pip install -r requirements.txt + ``` + +3. **Start the Flask server:** + ```bash + python app.py + ``` + +The server should start and be available at `http://localhost:5000`. Once running, the frontend forms and API calls will work properly. + +## API Endpoints Available: + +- `POST /auth/login` - User login +- `POST /auth/signup` - User signup +- `GET /auth/check_session` - Check user session +- `GET /stays` - Get all stays +- `GET /stays/:id` - Get stay details +- `POST /bookings` - Create booking +- `GET /bookings` - Get user bookings + +## Database: + +The app uses SQLite database (`app.db`) which will be created automatically when you first run the server. diff --git a/client/.gitignore b/staybuddy/client/.gitignore similarity index 100% rename from client/.gitignore rename to staybuddy/client/.gitignore diff --git a/client/README.md b/staybuddy/client/README.md similarity index 79% rename from client/README.md rename to staybuddy/client/README.md index 0c83cde2c..58beeaccd 100644 --- a/client/README.md +++ b/staybuddy/client/README.md @@ -9,10 +9,10 @@ In the project directory, you can run: ### `npm start` Runs the app in the development mode.\ -Open [http://localhost:3000](http://localhost:3000) to view it in the browser. 
+Open [http://localhost:3000](http://localhost:3000) to view it in your browser. -The page will reload if you make edits.\ -You will also see any lint errors in the console. +The page will reload when you make changes.\ +You may also see any lint errors in the console. ### `npm test` @@ -31,13 +31,13 @@ See the section about [deployment](https://facebook.github.io/create-react-app/d ### `npm run eject` -**Note: this is a one-way operation. Once you `eject`, you can’t go back!** +**Note: this is a one-way operation. Once you `eject`, you can't go back!** -If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. +If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. -Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own. +Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own. -You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it. +You don't have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it. 
## Learn More diff --git a/client/package.json b/staybuddy/client/package.json similarity index 65% rename from client/package.json rename to staybuddy/client/package.json index 2e4db7320..7e0e10eb2 100644 --- a/client/package.json +++ b/staybuddy/client/package.json @@ -2,18 +2,21 @@ "name": "client", "version": "0.1.0", "private": true, - "proxy": "http://localhost:5555", + "proxy": "http://localhost:5000", "dependencies": { - "@testing-library/jest-dom": "^5.17.0", - "@testing-library/react": "^13.4.0", + "@testing-library/dom": "^10.4.0", + "@testing-library/jest-dom": "^6.6.3", + "@testing-library/react": "^16.3.0", "@testing-library/user-event": "^13.5.0", - "react": "^18.2.0", - "react-dom": "^18.2.0", - "react-router-dom": "^5.2.0", + "formik": "^2.4.6", + "jwt-decode": "^4.0.0", + "react": "^19.1.0", + "react-dom": "^19.1.0", + "react-router-dom": "^7.6.3", "react-scripts": "5.0.1", - "web-vitals": "^2.1.4" + "web-vitals": "^2.1.4", + "yup": "^1.6.1" }, - "devDependencies": {"@babel/plugin-proposal-private-property-in-object":"^7.21.11"}, "scripts": { "start": "react-scripts start", "build": "react-scripts build", diff --git a/client/public/favicon.ico b/staybuddy/client/public/favicon.ico similarity index 100% rename from client/public/favicon.ico rename to staybuddy/client/public/favicon.ico diff --git a/client/public/index.html b/staybuddy/client/public/index.html similarity index 88% rename from client/public/index.html rename to staybuddy/client/public/index.html index a7bfc838f..aa069f27c 100644 --- a/client/public/index.html +++ b/staybuddy/client/public/index.html @@ -3,11 +3,6 @@ - - - Reciplease + React App diff --git a/client/public/logo192.png b/staybuddy/client/public/logo192.png similarity index 100% rename from client/public/logo192.png rename to staybuddy/client/public/logo192.png diff --git a/client/public/logo512.png b/staybuddy/client/public/logo512.png similarity index 100% rename from client/public/logo512.png rename to staybuddy/client/public/logo512.png diff --git a/client/public/manifest.json b/staybuddy/client/public/manifest.json similarity index 100% rename from client/public/manifest.json rename to staybuddy/client/public/manifest.json diff --git a/client/public/robots.txt b/staybuddy/client/public/robots.txt similarity index 100% rename from client/public/robots.txt rename to staybuddy/client/public/robots.txt diff --git a/staybuddy/client/src/App.css b/staybuddy/client/src/App.css new file mode 100644 index 000000000..ebe798e63 --- /dev/null +++ b/staybuddy/client/src/App.css @@ -0,0 +1,40 @@ +body { + font-family: Arial, sans-serif; + background-color: #f7f7f7; + margin: 0; + padding: 0; + display: flex; + flex-direction: column; +} + +nav { + background-color: #ff6b00; + padding: 10px; + display: flex; + gap: 1rem; +} + +nav a, +nav button { + color: white; + text-decoration: none; + font-weight: bold; + border: none; + background: none; + cursor: pointer; +} + +nav a:hover, +nav button:hover { + text-decoration: underline; +} + +.container { + max-width: 1644px; + margin: 20px auto; + background: white; + padding: 20px; + border-radius: 8px; + display: flex; + flex-direction: column; +} diff --git a/staybuddy/client/src/App.js b/staybuddy/client/src/App.js new file mode 100644 index 000000000..f14c54474 --- /dev/null +++ b/staybuddy/client/src/App.js @@ -0,0 +1,32 @@ +import { Routes, Route } from 'react-router-dom'; +import NavBar from './components/NavBar'; +import StayList from './components/StayList'; +import StayDetails from 
'./components/StayDetails';
+import NewStayForm from './components/NewStayForm';
+import LoginForm from './components/LoginForm';
+import SignupForm from './components/SignupForm';
+import BookingForm from './components/BookingForm';
+import MyBookings from './components/MyBookings';
+import './App.css';
+
+function App() {
+  return (
+    <>
+      <NavBar />
+      <div className="container">
+        <Routes>
+          <Route path="/" element={<StayList />} />
+          <Route path="/stays/:id" element={<StayDetails />} />
+          <Route path="/stays/new" element={<NewStayForm />} />
+          <Route path="/login" element={<LoginForm />} />
+          <Route path="/signup" element={<SignupForm />} />
+          <Route path="/book/:stayId" element={<BookingForm />} />
+          <Route path="/bookings" element={<MyBookings />} />
+        </Routes>
+      </div>
+    </>
+  );
+}
+
+export default App;
diff --git a/staybuddy/client/src/App.test.js b/staybuddy/client/src/App.test.js
new file mode 100644
index 000000000..1f03afeec
--- /dev/null
+++ b/staybuddy/client/src/App.test.js
@@ -0,0 +1,8 @@
+import { render, screen } from '@testing-library/react';
+import App from './App';
+
+test('renders learn react link', () => {
+  render(<App />);
+  const linkElement = screen.getByText(/learn react/i);
+  expect(linkElement).toBeInTheDocument();
+});
diff --git a/staybuddy/client/src/UserContext.jsx b/staybuddy/client/src/UserContext.jsx
new file mode 100644
index 000000000..a4c6ae446
--- /dev/null
+++ b/staybuddy/client/src/UserContext.jsx
@@ -0,0 +1,35 @@
+// client/src/UserContext.jsx
+import { createContext, useState, useEffect } from 'react';
+import { jwtDecode } from 'jwt-decode';
+
+export const UserContext = createContext();
+
+const UserProvider = ({ children }) => {
+  const [user, setUser] = useState(null);
+
+  useEffect(() => {
+    const token = localStorage.getItem('token');
+    if (token) {
+      try {
+        const decoded = jwtDecode(token);
+        setUser({ id: decoded.sub || decoded.identity });
+      } catch (e) {
+        localStorage.removeItem('token');
+      }
+    }
+  }, []);
+
+  const logout = () => {
+    localStorage.removeItem('token');
+    setUser(null);
+  };
+
+  return (
+    <UserContext.Provider value={{ user, setUser, logout }}>
+      {children}
+    </UserContext.Provider>
+  );
+};
+
+export default UserProvider;
\ No newline at end of file
diff --git a/staybuddy/client/src/components/App.jsx b/staybuddy/client/src/components/App.jsx
new file mode 100644
index 000000000..f68723925
--- /dev/null
+++ b/staybuddy/client/src/components/App.jsx
@@ -0,0 +1,21 @@
+import { BrowserRouter as Router, Routes, Route } from 'react-router-dom';
+import NavBar from './components/NavBar';
+import StayList from './components/StayList';
+import StayDetails from './components/StayDetails';
+import BookingForm from './components/BookingForm';
+import NewStayForm from './components/NewStayForm';
+
+function App() {
+  return (
+    <Router>
+      <NavBar />
+      <Routes>
+        <Route path="/" element={<StayList />} />
+        <Route path="/stays/:id" element={<StayDetails />} />
+        <Route path="/book/:stayId" element={<BookingForm />} />
+      </Routes>
+    </Router>
+  );
+}
+
+export default App;
\ No newline at end of file
diff --git a/staybuddy/client/src/components/BookingForm.jsx b/staybuddy/client/src/components/BookingForm.jsx
new file mode 100644
index 000000000..a27791582
--- /dev/null
+++ b/staybuddy/client/src/components/BookingForm.jsx
@@ -0,0 +1,61 @@
+import { useParams, useNavigate } from "react-router-dom";
+import { Formik, Form, Field, ErrorMessage } from "formik";
+import * as Yup from "yup";
+import { getToken } from "../services/api";
+
+const BookingForm = () => {
+  const { stayId } = useParams();
+  const navigate = useNavigate();
+
+  return (
+    <Formik
+      initialValues={{ start_date: "", end_date: "", note: "" }}
+      validationSchema={Yup.object({
+        start_date: Yup.string().required("Required"),
+        end_date: Yup.string().required("Required"),
+        note: Yup.string(),
+      })}
+      onSubmit={async (values, { setSubmitting, setErrors }) => {
+        try {
+          const res = await fetch("http://localhost:5000/bookings", {
+            method: "POST",
+            headers: {
+              "Content-Type": "application/json",
+              Authorization: `Bearer ${getToken()}`,
+            },
+            body: JSON.stringify({ ...values, stay_id: stayId }),
+          });
+          if (res.ok) {
+            navigate("/bookings");
+          } else {
+            const data = await res.json();
+            setErrors({ start_date: data.error || "Booking failed" });
+          }
+        } catch (err) {
+          console.error("Network error:", err);
+          setErrors({
+            start_date:
+              "Server unavailable. Please check if the backend server is running on port 5000.",
+          });
+        }
+        setSubmitting(false);
+      }}
+    >
+      <Form>
+        <Field type="date" name="start_date" />
+        <ErrorMessage name="start_date" component="div" />
+
+        <Field type="date" name="end_date" />
+        <ErrorMessage name="end_date" component="div" />
+
+        <Field as="textarea" name="note" placeholder="Note (optional)" />
+
+        <input type="submit" value="Book Stay" />
+      </Form>
+    </Formik>
+  );
+};
+
+export default BookingForm;
diff --git a/staybuddy/client/src/components/LoginForm.jsx b/staybuddy/client/src/components/LoginForm.jsx
new file mode 100644
index 000000000..81afda191
--- /dev/null
+++ b/staybuddy/client/src/components/LoginForm.jsx
@@ -0,0 +1,54 @@
+import { Formik, Form, Field, ErrorMessage } from "formik";
+import * as Yup from "yup";
+import { useNavigate } from "react-router-dom";
+
+const LoginForm = () => {
+  const navigate = useNavigate();
+
+  return (
+    <Formik
+      initialValues={{ email: "", password: "" }}
+      validationSchema={Yup.object({
+        email: Yup.string().email("Invalid email").required("Required"),
+        password: Yup.string().required("Required"),
+      })}
+      onSubmit={async (values, { setSubmitting, setErrors }) => {
+        try {
+          const res = await fetch("http://localhost:5000/auth/login", {
+            method: "POST",
+            headers: { "Content-Type": "application/json" },
+            body: JSON.stringify(values),
+          });
+          const data = await res.json();
+          if (res.ok) {
+            localStorage.setItem("token", data.token);
+            navigate("/");
+          } else {
+            setErrors({ email: data.error });
+          }
+        } catch (err) {
+          console.error("Network error:", err);
+          setErrors({
+            email:
+              "Server unavailable. Please check if the backend server is running on port 5000.",
+          });
+        }
+        setSubmitting(false);
+      }}
+    >
+      <Form>
+        <Field type="email" name="email" placeholder="Email" />
+        <ErrorMessage name="email" component="div" />
+
+        <Field type="password" name="password" placeholder="Password" />
+        <ErrorMessage name="password" component="div" />
+
+        <input type="submit" value="Log In" />
+      </Form>
+    </Formik>
+ ); +}; + +export default LoginForm; diff --git a/staybuddy/client/src/components/MyBookings.jsx b/staybuddy/client/src/components/MyBookings.jsx new file mode 100644 index 000000000..de8ff72dd --- /dev/null +++ b/staybuddy/client/src/components/MyBookings.jsx @@ -0,0 +1,36 @@ +import { useEffect, useState } from 'react'; +import { getToken } from '../services/api'; + +const MyBookings = () => { + const [bookings, setBookings] = useState([]); + + useEffect(() => { + fetch('http://localhost:5000/bookings', { + headers: { + Authorization: `Bearer ${getToken()}` + } + }) + .then(res => res.json()) + .then(data => setBookings(data)); + }, []); + + return ( +
+

+    <div>
+      <h2>My Bookings</h2>
+      {bookings.length === 0 ? (
+        <p>No bookings yet.</p>
+      ) : (
+        <ul>
+          {bookings.map(b => (
+            <li key={b.id}>
+              Stay ID: {b.stay_id}, from {b.start_date} to {b.end_date}
+              {b.note && <p>Note: {b.note}</p>}
+            </li>
+          ))}
+        </ul>
+      )}
+    </div>
+ ); +}; + +export default MyBookings; \ No newline at end of file diff --git a/staybuddy/client/src/components/NavBar.jsx b/staybuddy/client/src/components/NavBar.jsx new file mode 100644 index 000000000..8f1121fcb --- /dev/null +++ b/staybuddy/client/src/components/NavBar.jsx @@ -0,0 +1,34 @@ +import { Link, useNavigate } from 'react-router-dom'; +import { useContext } from 'react'; +import { UserContext } from '../UserContext'; + +function NavBar() { + const { user, logout } = useContext(UserContext); + const navigate = useNavigate(); + + const handleLogout = () => { + logout(); + navigate('/login'); + }; + + return ( + + ); +} + +export default NavBar; \ No newline at end of file diff --git a/staybuddy/client/src/components/NewStayForm.jsx b/staybuddy/client/src/components/NewStayForm.jsx new file mode 100644 index 000000000..a6c1f6f43 --- /dev/null +++ b/staybuddy/client/src/components/NewStayForm.jsx @@ -0,0 +1,47 @@ +import { useState } from 'react'; +import { postWithToken } from '../services/api'; +import { useNavigate } from 'react-router-dom'; + +function NewStayForm() { + const [title, setTitle] = useState(''); + const [location, setLocation] = useState(''); + const [price, setPrice] = useState(''); + const navigate = useNavigate(); + + const handleSubmit = async (e) => { + e.preventDefault(); + const newStay = { title, location, price: parseFloat(price) }; + const data = await postWithToken('http://localhost:5000/stays', newStay); + + if (data.id) { + alert('Stay created!'); + navigate('/stays'); + } else { + alert(data.error || 'Something went wrong'); + } + }; + + return ( +
+

+    <form onSubmit={handleSubmit}>
+      <h2>Create a New Stay</h2>
+
+      <input placeholder="Title" value={title} onChange={(e) => setTitle(e.target.value)} required />
+
+      <input placeholder="Location" value={location} onChange={(e) => setLocation(e.target.value)} required />
+
+      <input
+        placeholder="Price"
+        type="number"
+        value={price}
+        onChange={(e) => setPrice(e.target.value)}
+        step="0.01"
+        required
+      />
+
+      <input type="submit" value="Create Stay" />
+    </form>
+  );
+}
+
+export default NewStayForm;
diff --git a/staybuddy/client/src/components/SignupForm.jsx b/staybuddy/client/src/components/SignupForm.jsx
new file mode 100644
index 000000000..dd7ca1f67
--- /dev/null
+++ b/staybuddy/client/src/components/SignupForm.jsx
@@ -0,0 +1,60 @@
+// client/src/components/SignupForm.jsx
+import { Formik, Form, Field, ErrorMessage } from "formik";
+import * as Yup from "yup";
+import { useNavigate } from "react-router-dom";
+
+const SignupForm = () => {
+  const navigate = useNavigate();
+
+  return (
+    <Formik
+      initialValues={{ username: "", email: "", password: "" }}
+      validationSchema={Yup.object({
+        username: Yup.string().required("Required"),
+        email: Yup.string().email("Invalid email").required("Required"),
+        password: Yup.string().required("Required"),
+      })}
+      onSubmit={async (values, { setSubmitting, setErrors }) => {
+        try {
+          const res = await fetch("http://localhost:5000/auth/signup", {
+            method: "POST",
+            headers: { "Content-Type": "application/json" },
+            body: JSON.stringify(values),
+          });
+          const data = await res.json();
+          if (res.ok) {
+            localStorage.setItem("token", data.token);
+            navigate("/");
+          } else {
+            setErrors({ email: data.error });
+          }
+        } catch (err) {
+          console.error("Network error:", err);
+          setErrors({
+            email:
+              "Server unavailable. Please check if the backend server is running on port 5000.",
+          });
+        }
+        setSubmitting(false);
+      }}
+    >
+      <Form>
+        <Field name="username" placeholder="Username" />
+        <ErrorMessage name="username" component="div" />
+
+        <Field type="email" name="email" placeholder="Email" />
+        <ErrorMessage name="email" component="div" />
+
+        <Field type="password" name="password" placeholder="Password" />
+        <ErrorMessage name="password" component="div" />
+
+        <input type="submit" value="Sign Up" />
+      </Form>
+    </Formik>
+ ); +}; + +export default SignupForm; diff --git a/staybuddy/client/src/components/StayDetails.jsx b/staybuddy/client/src/components/StayDetails.jsx new file mode 100644 index 000000000..78e2a8c15 --- /dev/null +++ b/staybuddy/client/src/components/StayDetails.jsx @@ -0,0 +1,26 @@ +import { useParams } from 'react-router-dom'; +import { useEffect, useState } from 'react'; + +const StayDetails = () => { + const { id } = useParams(); + const [stay, setStay] = useState(null); + + useEffect(() => { + fetch(`http://localhost:5000/stays/${id}`) + .then(res => res.json()) + .then(data => setStay(data)); + }, [id]); + + if (!stay) return

<p>Loading...</p>

; + + return ( +
+

+    <div>
+      <h2>{stay.title}</h2>
+      <p>Location: {stay.location}</p>
+      <p>Price: ${stay.price}/night</p>
+      <p>{stay.description}</p>
+    </div>
+ ); +}; + +export default StayDetails; \ No newline at end of file diff --git a/staybuddy/client/src/components/StayList.jsx b/staybuddy/client/src/components/StayList.jsx new file mode 100644 index 000000000..f91bf4d09 --- /dev/null +++ b/staybuddy/client/src/components/StayList.jsx @@ -0,0 +1,48 @@ +import { useEffect, useState } from "react"; +import { Link } from "react-router-dom"; +import { apiCall } from "../services/api"; + +const StayList = () => { + const [stays, setStays] = useState([]); + const [error, setError] = useState(null); + + useEffect(() => { + const fetchStays = async () => { + try { + const data = await apiCall("/stays"); + setStays(data); + setError(null); + } catch (err) { + console.error("Network error:", err); + setError( + "Unable to load stays. Please check if the backend server is running.", + ); + } + }; + + fetchStays(); + }, []); + + return ( +
+

+    <div>
+      <h2>All Stays</h2>
+      {error && (
+        <div>
+          {error}
+        </div>
+      )}
+      <ul>
+        {stays.map((stay) => (
+          <li key={stay.id}>
+            <Link to={`/stays/${stay.id}`}>
+              {stay.title} — {stay.location} — ${stay.price}
+              /night
+            </Link>
+          </li>
+        ))}
+      </ul>
+    </div>
+ ); +}; + +export default StayList; diff --git a/staybuddy/client/src/index.css b/staybuddy/client/src/index.css new file mode 100644 index 000000000..ec2585e8c --- /dev/null +++ b/staybuddy/client/src/index.css @@ -0,0 +1,13 @@ +body { + margin: 0; + font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen', + 'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue', + sans-serif; + -webkit-font-smoothing: antialiased; + -moz-osx-font-smoothing: grayscale; +} + +code { + font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New', + monospace; +} diff --git a/staybuddy/client/src/index.js b/staybuddy/client/src/index.js new file mode 100644 index 000000000..16e455ddb --- /dev/null +++ b/staybuddy/client/src/index.js @@ -0,0 +1,23 @@ +import React from 'react'; +import ReactDOM from 'react-dom/client'; +import './index.css'; +import App from './App'; +import { BrowserRouter } from 'react-router-dom'; +import reportWebVitals from './reportWebVitals'; +import UserProvider from './UserContext'; + +const root = ReactDOM.createRoot(document.getElementById('root')); +root.render( + + + + + + + +); + +// If you want to start measuring performance in your app, pass a function +// to log results (for example: reportWebVitals(console.log)) +// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals +reportWebVitals(); diff --git a/staybuddy/client/src/logo.svg b/staybuddy/client/src/logo.svg new file mode 100644 index 000000000..9dfc1c058 --- /dev/null +++ b/staybuddy/client/src/logo.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/staybuddy/client/src/reportWebVitals.js b/staybuddy/client/src/reportWebVitals.js new file mode 100644 index 000000000..5253d3ad9 --- /dev/null +++ b/staybuddy/client/src/reportWebVitals.js @@ -0,0 +1,13 @@ +const reportWebVitals = onPerfEntry => { + if (onPerfEntry && onPerfEntry instanceof Function) { + import('web-vitals').then(({ getCLS, getFID, getFCP, getLCP, getTTFB }) => { + getCLS(onPerfEntry); + getFID(onPerfEntry); + getFCP(onPerfEntry); + getLCP(onPerfEntry); + getTTFB(onPerfEntry); + }); + } +}; + +export default reportWebVitals; diff --git a/staybuddy/client/src/services/api.js b/staybuddy/client/src/services/api.js new file mode 100644 index 000000000..99800d0f3 --- /dev/null +++ b/staybuddy/client/src/services/api.js @@ -0,0 +1,47 @@ +// client/src/services/api.js + +// Environment-aware API base URL +const API_BASE_URL = + process.env.NODE_ENV === "production" + ? "/api" // Use relative URL in production (will use proxy) + : "http://localhost:5000"; // Use localhost in development + +export const getToken = () => localStorage.getItem("token"); + +export const apiCall = async (endpoint, options = {}) => { + const url = `${API_BASE_URL}${endpoint}`; + const defaultOptions = { + headers: { + "Content-Type": "application/json", + }, + }; + + const token = getToken(); + if (token) { + defaultOptions.headers.Authorization = `Bearer ${token}`; + } + + const finalOptions = { + ...defaultOptions, + ...options, + headers: { + ...defaultOptions.headers, + ...options.headers, + }, + }; + + const response = await fetch(url, finalOptions); + + if (!response.ok) { + throw new Error(`HTTP error! 
status: ${response.status}`); + } + + return response.json(); +}; + +export const postWithToken = async (url, data) => { + return apiCall(url, { + method: "POST", + body: JSON.stringify(data), + }); +}; diff --git a/staybuddy/client/src/setupTests.js b/staybuddy/client/src/setupTests.js new file mode 100644 index 000000000..8f2609b7b --- /dev/null +++ b/staybuddy/client/src/setupTests.js @@ -0,0 +1,5 @@ +// jest-dom adds custom jest matchers for asserting on DOM nodes. +// allows you to do things like: +// expect(element).toHaveTextContent(/react/i) +// learn more: https://github.com/testing-library/jest-dom +import '@testing-library/jest-dom'; diff --git a/staybuddy/server/.flaskenv b/staybuddy/server/.flaskenv new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/Pipfile b/staybuddy/server/Pipfile new file mode 100644 index 000000000..949a9e197 --- /dev/null +++ b/staybuddy/server/Pipfile @@ -0,0 +1,17 @@ +[[source]] +url = "https://pypi.org/simple" +verify_ssl = true +name = "pypi" + +[packages] +flask-cors = "*" +flask-bcrypt = "*" +flask = "*" +flask-sqlalchemy = "*" +flask-migrate = "*" +flask-jwt-extended = "*" + +[dev-packages] + +[requires] +python_version = "3.11" diff --git a/staybuddy/server/Pipfile.lock b/staybuddy/server/Pipfile.lock new file mode 100644 index 000000000..8ce97a7bf --- /dev/null +++ b/staybuddy/server/Pipfile.lock @@ -0,0 +1,388 @@ +{ + "_meta": { + "hash": { + "sha256": "ba5fc5e0466ab86f09f00c42d24231aabc0124b63d077c03abe515c3cd647077" + }, + "pipfile-spec": 6, + "requires": { + "python_version": "3.11" + }, + "sources": [ + { + "name": "pypi", + "url": "https://pypi.org/simple", + "verify_ssl": true + } + ] + }, + "default": { + "alembic": { + "hashes": [ + "sha256:5f42e9bd0afdbd1d5e3ad856c01754530367debdebf21ed6894e34af52b3bb03", + "sha256:e53c38ff88dadb92eb22f8b150708367db731d58ad7e9d417c9168ab516cbed8" + ], + "markers": "python_version >= '3.9'", + "version": "==1.16.2" + }, + "bcrypt": { + "hashes": [ + "sha256:0042b2e342e9ae3d2ed22727c1262f76cc4f345683b5c1715f0250cf4277294f", + "sha256:0142b2cb84a009f8452c8c5a33ace5e3dfec4159e7735f5afe9a4d50a8ea722d", + "sha256:08bacc884fd302b611226c01014eca277d48f0a05187666bca23aac0dad6fe24", + "sha256:0d3efb1157edebfd9128e4e46e2ac1a64e0c1fe46fb023158a407c7892b0f8c3", + "sha256:0e30e5e67aed0187a1764911af023043b4542e70a7461ad20e837e94d23e1d6c", + "sha256:107d53b5c67e0bbc3f03ebf5b030e0403d24dda980f8e244795335ba7b4a027d", + "sha256:12fa6ce40cde3f0b899729dbd7d5e8811cb892d31b6f7d0334a1f37748b789fd", + "sha256:17a854d9a7a476a89dcef6c8bd119ad23e0f82557afbd2c442777a16408e614f", + "sha256:191354ebfe305e84f344c5964c7cd5f924a3bfc5d405c75ad07f232b6dffb49f", + "sha256:2ef6630e0ec01376f59a006dc72918b1bf436c3b571b80fa1968d775fa02fe7d", + "sha256:3004df1b323d10021fda07a813fd33e0fd57bef0e9a480bb143877f6cba996fe", + "sha256:335a420cfd63fc5bc27308e929bee231c15c85cc4c496610ffb17923abf7f231", + "sha256:33752b1ba962ee793fa2b6321404bf20011fe45b9afd2a842139de3011898fef", + "sha256:3a3fd2204178b6d2adcf09cb4f6426ffef54762577a7c9b54c159008cb288c18", + "sha256:3b8d62290ebefd49ee0b3ce7500f5dbdcf13b81402c05f6dafab9a1e1b27212f", + "sha256:3e36506d001e93bffe59754397572f21bb5dc7c83f54454c990c74a468cd589e", + "sha256:41261d64150858eeb5ff43c753c4b216991e0ae16614a308a15d909503617732", + "sha256:50e6e80a4bfd23a25f5c05b90167c19030cf9f87930f7cb2eacb99f45d1c3304", + "sha256:531457e5c839d8caea9b589a1bcfe3756b0547d7814e9ce3d437f17da75c32b0", + "sha256:55a935b8e9a1d2def0626c4269db3fcd26728cbff1e84f0341465c31c4ee56d8", + 
"sha256:57967b7a28d855313a963aaea51bf6df89f833db4320da458e5b3c5ab6d4c938", + "sha256:584027857bc2843772114717a7490a37f68da563b3620f78a849bcb54dc11e62", + "sha256:59e1aa0e2cd871b08ca146ed08445038f42ff75968c7ae50d2fdd7860ade2180", + "sha256:5bd3cca1f2aa5dbcf39e2aa13dd094ea181f48959e1071265de49cc2b82525af", + "sha256:5c1949bf259a388863ced887c7861da1df681cb2388645766c89fdfd9004c669", + "sha256:62f26585e8b219cdc909b6a0069efc5e4267e25d4a3770a364ac58024f62a761", + "sha256:67a561c4d9fb9465ec866177e7aebcad08fe23aaf6fbd692a6fab69088abfc51", + "sha256:6fb1fd3ab08c0cbc6826a2e0447610c6f09e983a281b919ed721ad32236b8b23", + "sha256:74a8d21a09f5e025a9a23e7c0fd2c7fe8e7503e4d356c0a2c1486ba010619f09", + "sha256:79e70b8342a33b52b55d93b3a59223a844962bef479f6a0ea318ebbcadf71505", + "sha256:7a4be4cbf241afee43f1c3969b9103a41b40bcb3a3f467ab19f891d9bc4642e4", + "sha256:7c03296b85cb87db865d91da79bf63d5609284fc0cab9472fdd8367bbd830753", + "sha256:842d08d75d9fe9fb94b18b071090220697f9f184d4547179b60734846461ed59", + "sha256:864f8f19adbe13b7de11ba15d85d4a428c7e2f344bac110f667676a0ff84924b", + "sha256:97eea7408db3a5bcce4a55d13245ab3fa566e23b4c67cd227062bb49e26c585d", + "sha256:a839320bf27d474e52ef8cb16449bb2ce0ba03ca9f44daba6d93fa1d8828e48a", + "sha256:afe327968aaf13fc143a56a3360cb27d4ad0345e34da12c7290f1b00b8fe9a8b", + "sha256:b4d4e57f0a63fd0b358eb765063ff661328f69a04494427265950c71b992a39a", + "sha256:b6354d3760fcd31994a14c89659dee887f1351a06e5dac3c1142307172a79f90", + "sha256:b693dbb82b3c27a1604a3dff5bfc5418a7e6a781bb795288141e5f80cf3a3492", + "sha256:bdc6a24e754a555d7316fa4774e64c6c3997d27ed2d1964d55920c7c227bc4ce", + "sha256:beeefe437218a65322fbd0069eb437e7c98137e08f22c4660ac2dc795c31f8bb", + "sha256:c5eeac541cefd0bb887a371ef73c62c3cd78535e4887b310626036a7c0a817bb", + "sha256:c950d682f0952bafcceaf709761da0a32a942272fad381081b51096ffa46cea1", + "sha256:d9af79d322e735b1fc33404b5765108ae0ff232d4b54666d46730f8ac1a43676", + "sha256:e53e074b120f2877a35cc6c736b8eb161377caae8925c17688bd46ba56daaa5b", + "sha256:e965a9c1e9a393b8005031ff52583cedc15b7884fce7deb8b0346388837d6cfe", + "sha256:f01e060f14b6b57bbb72fc5b4a83ac21c443c9a2ee708e04a10e9192f90a6281", + "sha256:f1e3ffa1365e8702dc48c8b360fef8d7afeca482809c5e45e653af82ccd088c1", + "sha256:f6746e6fec103fcd509b96bacdfdaa2fbde9a553245dbada284435173a6f1aef", + "sha256:f81b0ed2639568bf14749112298f9e4e2b28853dab50a8b357e31798686a036d" + ], + "markers": "python_version >= '3.8'", + "version": "==4.3.0" + }, + "blinker": { + "hashes": [ + "sha256:b4ce2265a7abece45e7cc896e98dbebe6cead56bcf805a3d23136d145f5445bf", + "sha256:ba0efaa9080b619ff2f3459d1d500c57bddea4a6b424b60a91141db6fd2f08bc" + ], + "markers": "python_version >= '3.9'", + "version": "==1.9.0" + }, + "click": { + "hashes": [ + "sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202", + "sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b" + ], + "markers": "python_version >= '3.10'", + "version": "==8.2.1" + }, + "flask": { + "hashes": [ + "sha256:07aae2bb5eaf77993ef57e357491839f5fd9f4dc281593a81a9e4d79a24f295c", + "sha256:284c7b8f2f58cb737f0cf1c30fd7eaf0ccfcde196099d24ecede3fc2005aa59e" + ], + "index": "pypi", + "version": "==3.1.1" + }, + "flask-bcrypt": { + "hashes": [ + "sha256:062fd991dc9118d05ac0583675507b9fe4670e44416c97e0e6819d03d01f808a", + "sha256:f07b66b811417ea64eb188ae6455b0b708a793d966e1a80ceec4a23bc42a4369" + ], + "index": "pypi", + "version": "==1.0.1" + }, + "flask-cors": { + "hashes": [ + "sha256:c7b2cbfb1a31aa0d2e5341eea03a6805349f7a61647daee1a15c46bbe981494c", + 
"sha256:d81bcb31f07b0985be7f48406247e9243aced229b7747219160a0559edd678db" + ], + "index": "pypi", + "version": "==6.0.1" + }, + "flask-jwt-extended": { + "hashes": [ + "sha256:52f35bf0985354d7fb7b876e2eb0e0b141aaff865a22ff6cc33d9a18aa987978", + "sha256:8085d6757505b6f3291a2638c84d207e8f0ad0de662d1f46aa2f77e658a0c976" + ], + "index": "pypi", + "version": "==4.7.1" + }, + "flask-migrate": { + "hashes": [ + "sha256:1a336b06eb2c3ace005f5f2ded8641d534c18798d64061f6ff11f79e1434126d", + "sha256:24d8051af161782e0743af1b04a152d007bad9772b2bca67b7ec1e8ceeb3910d" + ], + "index": "pypi", + "version": "==4.1.0" + }, + "flask-sqlalchemy": { + "hashes": [ + "sha256:4ba4be7f419dc72f4efd8802d69974803c37259dd42f3913b0dcf75c9447e0a0", + "sha256:e4b68bb881802dda1a7d878b2fc84c06d1ee57fb40b874d3dc97dabfa36b8312" + ], + "index": "pypi", + "version": "==3.1.1" + }, + "greenlet": { + "hashes": [ + "sha256:003c930e0e074db83559edc8705f3a2d066d4aa8c2f198aff1e454946efd0f26", + "sha256:024571bbce5f2c1cfff08bf3fbaa43bbc7444f580ae13b0099e95d0e6e67ed36", + "sha256:02b0df6f63cd15012bed5401b47829cfd2e97052dc89da3cfaf2c779124eb892", + "sha256:0921ac4ea42a5315d3446120ad48f90c3a6b9bb93dd9b3cf4e4d84a66e42de83", + "sha256:0cc73378150b8b78b0c9fe2ce56e166695e67478550769536a6742dca3651688", + "sha256:1afd685acd5597349ee6d7a88a8bec83ce13c106ac78c196ee9dde7c04fe87be", + "sha256:22eb5ba839c4b2156f18f76768233fe44b23a31decd9cc0d4cc8141c211fd1b4", + "sha256:25ad29caed5783d4bd7a85c9251c651696164622494c00802a139c00d639242d", + "sha256:29e184536ba333003540790ba29829ac14bb645514fbd7e32af331e8202a62a5", + "sha256:2c724620a101f8170065d7dded3f962a2aea7a7dae133a009cada42847e04a7b", + "sha256:2d8aa5423cd4a396792f6d4580f88bdc6efcb9205891c9d40d20f6e670992efb", + "sha256:3d04332dddb10b4a211b68111dabaee2e1a073663d117dc10247b5b1642bac86", + "sha256:419e60f80709510c343c57b4bb5a339d8767bf9aef9b8ce43f4f143240f88b7c", + "sha256:42efc522c0bd75ffa11a71e09cd8a399d83fafe36db250a87cf1dacfaa15dc64", + "sha256:4532f0d25df67f896d137431b13f4cdce89f7e3d4a96387a41290910df4d3a57", + "sha256:49c8cfb18fb419b3d08e011228ef8a25882397f3a859b9fe1436946140b6756b", + "sha256:500b8689aa9dd1ab26872a34084503aeddefcb438e2e7317b89b11eaea1901ad", + "sha256:5035d77a27b7c62db6cf41cf786cfe2242644a7a337a0e155c80960598baab95", + "sha256:5195fb1e75e592dd04ce79881c8a22becdfa3e6f500e7feb059b1e6fdd54d3e3", + "sha256:592c12fb1165be74592f5de0d70f82bc5ba552ac44800d632214b76089945147", + "sha256:68671180e3849b963649254a882cd544a3c75bfcd2c527346ad8bb53494444db", + "sha256:706d016a03e78df129f68c4c9b4c4f963f7d73534e48a24f5f5a7101ed13dbbb", + "sha256:72e77ed69312bab0434d7292316d5afd6896192ac4327d44f3d613ecb85b037c", + "sha256:731e154aba8e757aedd0781d4b240f1225b075b4409f1bb83b05ff410582cf00", + "sha256:7454d37c740bb27bdeddfc3f358f26956a07d5220818ceb467a483197d84f849", + "sha256:751261fc5ad7b6705f5f76726567375bb2104a059454e0226e1eef6c756748ba", + "sha256:761917cac215c61e9dc7324b2606107b3b292a8349bdebb31503ab4de3f559ac", + "sha256:784ae58bba89fa1fa5733d170d42486580cab9decda3484779f4759345b29822", + "sha256:7e70ea4384b81ef9e84192e8a77fb87573138aa5d4feee541d8014e452b434da", + "sha256:8186162dffde068a465deab08fc72c767196895c39db26ab1c17c0b77a6d8b97", + "sha256:8324319cbd7b35b97990090808fdc99c27fe5338f87db50514959f8059999805", + "sha256:83a8761c75312361aa2b5b903b79da97f13f556164a7dd2d5448655425bd4c34", + "sha256:86c2d68e87107c1792e2e8d5399acec2487a4e993ab76c792408e59394d52141", + "sha256:8704b3768d2f51150626962f4b9a9e4a17d2e37c8a8d9867bbd9fa4eb938d3b3", + 
"sha256:873abe55f134c48e1f2a6f53f7d1419192a3d1a4e873bace00499a4e45ea6af0", + "sha256:88cd97bf37fe24a6710ec6a3a7799f3f81d9cd33317dcf565ff9950c83f55e0b", + "sha256:8b0dd8ae4c0d6f5e54ee55ba935eeb3d735a9b58a8a1e5b5cbab64e01a39f365", + "sha256:8c37ef5b3787567d322331d5250e44e42b58c8c713859b8a04c6065f27efbf72", + "sha256:8c47aae8fbbfcf82cc13327ae802ba13c9c36753b67e760023fd116bc124a62a", + "sha256:93c0bb79844a367782ec4f429d07589417052e621aa39a5ac1fb99c5aa308edc", + "sha256:93d48533fade144203816783373f27a97e4193177ebaaf0fc396db19e5d61163", + "sha256:96c20252c2f792defe9a115d3287e14811036d51e78b3aaddbee23b69b216302", + "sha256:a07d3472c2a93117af3b0136f246b2833fdc0b542d4a9799ae5f41c28323faef", + "sha256:a433dbc54e4a37e4fff90ef34f25a8c00aed99b06856f0119dcf09fbafa16392", + "sha256:aaa7aae1e7f75eaa3ae400ad98f8644bb81e1dc6ba47ce8a93d3f17274e08322", + "sha256:baeedccca94880d2f5666b4fa16fc20ef50ba1ee353ee2d7092b383a243b0b0d", + "sha256:be52af4b6292baecfa0f397f3edb3c6092ce071b499dd6fe292c9ac9f2c8f264", + "sha256:c667c0bf9d406b77a15c924ef3285e1e05250948001220368e039b6aa5b5034b", + "sha256:ce539fb52fb774d0802175d37fcff5c723e2c7d249c65916257f0a940cee8904", + "sha256:d2971d93bb99e05f8c2c0c2f4aa9484a18d98c4c3bd3c62b65b7e6ae33dfcfaf", + "sha256:d760f9bdfe79bff803bad32b4d8ffb2c1d2ce906313fc10a83976ffb73d64ca7", + "sha256:ed6cfa9200484d234d8394c70f5492f144b20d4533f69262d530a1a082f6ee9a", + "sha256:efc6dc8a792243c31f2f5674b670b3a95d46fa1c6a912b8e310d6f542e7b0712", + "sha256:f4bfbaa6096b1b7a200024784217defedf46a07c2eee1a498e94a1b5f8ec5728" + ], + "markers": "python_version < '3.14' and platform_machine == 'aarch64' or (platform_machine == 'ppc64le' or (platform_machine == 'x86_64' or (platform_machine == 'amd64' or (platform_machine == 'AMD64' or (platform_machine == 'win32' or platform_machine == 'WIN32')))))", + "version": "==3.2.3" + }, + "itsdangerous": { + "hashes": [ + "sha256:c6242fc49e35958c8b15141343aa660db5fc54d4f13a1db01a3f5891b98700ef", + "sha256:e0050c0b7da1eea53ffaf149c0cfbb5c6e2e2b69c4bef22c81fa6eb73e5f6173" + ], + "markers": "python_version >= '3.8'", + "version": "==2.2.0" + }, + "jinja2": { + "hashes": [ + "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d", + "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67" + ], + "markers": "python_version >= '3.7'", + "version": "==3.1.6" + }, + "mako": { + "hashes": [ + "sha256:99579a6f39583fa7e5630a28c3c1f440e4e97a414b80372649c0ce338da2ea28", + "sha256:baef24a52fc4fc514a0887ac600f9f1cff3d82c61d4d700a1fa84d597b88db59" + ], + "markers": "python_version >= '3.8'", + "version": "==1.3.10" + }, + "markupsafe": { + "hashes": [ + "sha256:0bff5e0ae4ef2e1ae4fdf2dfd5b76c75e5c2fa4132d05fc1b0dabcd20c7e28c4", + "sha256:0f4ca02bea9a23221c0182836703cbf8930c5e9454bacce27e767509fa286a30", + "sha256:1225beacc926f536dc82e45f8a4d68502949dc67eea90eab715dea3a21c1b5f0", + "sha256:131a3c7689c85f5ad20f9f6fb1b866f402c445b220c19fe4308c0b147ccd2ad9", + "sha256:15ab75ef81add55874e7ab7055e9c397312385bd9ced94920f2802310c930396", + "sha256:1a9d3f5f0901fdec14d8d2f66ef7d035f2157240a433441719ac9a3fba440b13", + "sha256:1c99d261bd2d5f6b59325c92c73df481e05e57f19837bdca8413b9eac4bd8028", + "sha256:1e084f686b92e5b83186b07e8a17fc09e38fff551f3602b249881fec658d3eca", + "sha256:2181e67807fc2fa785d0592dc2d6206c019b9502410671cc905d132a92866557", + "sha256:2cb8438c3cbb25e220c2ab33bb226559e7afb3baec11c4f218ffa7308603c832", + "sha256:3169b1eefae027567d1ce6ee7cae382c57fe26e82775f460f0b2778beaad66c0", + 
"sha256:3809ede931876f5b2ec92eef964286840ed3540dadf803dd570c3b7e13141a3b", + "sha256:38a9ef736c01fccdd6600705b09dc574584b89bea478200c5fbf112a6b0d5579", + "sha256:3d79d162e7be8f996986c064d1c7c817f6df3a77fe3d6859f6f9e7be4b8c213a", + "sha256:444dcda765c8a838eaae23112db52f1efaf750daddb2d9ca300bcae1039adc5c", + "sha256:48032821bbdf20f5799ff537c7ac3d1fba0ba032cfc06194faffa8cda8b560ff", + "sha256:4aa4e5faecf353ed117801a068ebab7b7e09ffb6e1d5e412dc852e0da018126c", + "sha256:52305740fe773d09cffb16f8ed0427942901f00adedac82ec8b67752f58a1b22", + "sha256:569511d3b58c8791ab4c2e1285575265991e6d8f8700c7be0e88f86cb0672094", + "sha256:57cb5a3cf367aeb1d316576250f65edec5bb3be939e9247ae594b4bcbc317dfb", + "sha256:5b02fb34468b6aaa40dfc198d813a641e3a63b98c2b05a16b9f80b7ec314185e", + "sha256:6381026f158fdb7c72a168278597a5e3a5222e83ea18f543112b2662a9b699c5", + "sha256:6af100e168aa82a50e186c82875a5893c5597a0c1ccdb0d8b40240b1f28b969a", + "sha256:6c89876f41da747c8d3677a2b540fb32ef5715f97b66eeb0c6b66f5e3ef6f59d", + "sha256:6e296a513ca3d94054c2c881cc913116e90fd030ad1c656b3869762b754f5f8a", + "sha256:70a87b411535ccad5ef2f1df5136506a10775d267e197e4cf531ced10537bd6b", + "sha256:7e94c425039cde14257288fd61dcfb01963e658efbc0ff54f5306b06054700f8", + "sha256:846ade7b71e3536c4e56b386c2a47adf5741d2d8b94ec9dc3e92e5e1ee1e2225", + "sha256:88416bd1e65dcea10bc7569faacb2c20ce071dd1f87539ca2ab364bf6231393c", + "sha256:88b49a3b9ff31e19998750c38e030fc7bb937398b1f78cfa599aaef92d693144", + "sha256:8c4e8c3ce11e1f92f6536ff07154f9d49677ebaaafc32db9db4620bc11ed480f", + "sha256:8e06879fc22a25ca47312fbe7c8264eb0b662f6db27cb2d3bbbc74b1df4b9b87", + "sha256:9025b4018f3a1314059769c7bf15441064b2207cb3f065e6ea1e7359cb46db9d", + "sha256:93335ca3812df2f366e80509ae119189886b0f3c2b81325d39efdb84a1e2ae93", + "sha256:9778bd8ab0a994ebf6f84c2b949e65736d5575320a17ae8984a77fab08db94cf", + "sha256:9e2d922824181480953426608b81967de705c3cef4d1af983af849d7bd619158", + "sha256:a123e330ef0853c6e822384873bef7507557d8e4a082961e1defa947aa59ba84", + "sha256:a904af0a6162c73e3edcb969eeeb53a63ceeb5d8cf642fade7d39e7963a22ddb", + "sha256:ad10d3ded218f1039f11a75f8091880239651b52e9bb592ca27de44eed242a48", + "sha256:b424c77b206d63d500bcb69fa55ed8d0e6a3774056bdc4839fc9298a7edca171", + "sha256:b5a6b3ada725cea8a5e634536b1b01c30bcdcd7f9c6fff4151548d5bf6b3a36c", + "sha256:ba8062ed2cf21c07a9e295d5b8a2a5ce678b913b45fdf68c32d95d6c1291e0b6", + "sha256:ba9527cdd4c926ed0760bc301f6728ef34d841f405abf9d4f959c478421e4efd", + "sha256:bbcb445fa71794da8f178f0f6d66789a28d7319071af7a496d4d507ed566270d", + "sha256:bcf3e58998965654fdaff38e58584d8937aa3096ab5354d493c77d1fdd66d7a1", + "sha256:c0ef13eaeee5b615fb07c9a7dadb38eac06a0608b41570d8ade51c56539e509d", + "sha256:cabc348d87e913db6ab4aa100f01b08f481097838bdddf7c7a84b7575b7309ca", + "sha256:cdb82a876c47801bb54a690c5ae105a46b392ac6099881cdfb9f6e95e4014c6a", + "sha256:cfad01eed2c2e0c01fd0ecd2ef42c492f7f93902e39a42fc9ee1692961443a29", + "sha256:d16a81a06776313e817c951135cf7340a3e91e8c1ff2fac444cfd75fffa04afe", + "sha256:d8213e09c917a951de9d09ecee036d5c7d36cb6cb7dbaece4c71a60d79fb9798", + "sha256:e07c3764494e3776c602c1e78e298937c3315ccc9043ead7e685b7f2b8d47b3c", + "sha256:e17c96c14e19278594aa4841ec148115f9c7615a47382ecb6b82bd8fea3ab0c8", + "sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f", + "sha256:e6a2a455bd412959b57a172ce6328d2dd1f01cb2135efda2e4576e8a23fa3b0f", + "sha256:eaa0a10b7f72326f1372a713e73c3f739b524b3af41feb43e4921cb529f5929a", + "sha256:eb7972a85c54febfb25b5c4b4f3af4dcc731994c7da0d8a0b4a6eb0640e1d178", + 
"sha256:ee55d3edf80167e48ea11a923c7386f4669df67d7994554387f84e7d8b0a2bf0", + "sha256:f3818cb119498c0678015754eba762e0d61e5b52d34c8b13d770f0719f7b1d79", + "sha256:f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430", + "sha256:fcabf5ff6eea076f859677f5f0b6b5c1a51e70a376b0579e0eadef8db48c6b50" + ], + "markers": "python_version >= '3.9'", + "version": "==3.0.2" + }, + "pyjwt": { + "hashes": [ + "sha256:3cc5772eb20009233caf06e9d8a0577824723b44e6648ee0a2aedb6cf9381953", + "sha256:dcdd193e30abefd5debf142f9adfcdd2b58004e644f25406ffaebd50bd98dacb" + ], + "markers": "python_version >= '3.9'", + "version": "==2.10.1" + }, + "sqlalchemy": { + "hashes": [ + "sha256:023b3ee6169969beea3bb72312e44d8b7c27c75b347942d943cf49397b7edeb5", + "sha256:03968a349db483936c249f4d9cd14ff2c296adfa1290b660ba6516f973139582", + "sha256:05132c906066142103b83d9c250b60508af556982a385d96c4eaa9fb9720ac2b", + "sha256:087b6b52de812741c27231b5a3586384d60c353fbd0e2f81405a814b5591dc8b", + "sha256:0b3dbf1e7e9bc95f4bac5e2fb6d3fb2f083254c3fdd20a1789af965caf2d2348", + "sha256:118c16cd3f1b00c76d69343e38602006c9cfb9998fa4f798606d28d63f23beda", + "sha256:1936af879e3db023601196a1684d28e12f19ccf93af01bf3280a3262c4b6b4e5", + "sha256:1e3f196a0c59b0cae9a0cd332eb1a4bda4696e863f4f1cf84ab0347992c548c2", + "sha256:23a8825495d8b195c4aa9ff1c430c28f2c821e8c5e2d98089228af887e5d7e29", + "sha256:293cd444d82b18da48c9f71cd7005844dbbd06ca19be1ccf6779154439eec0b8", + "sha256:32f9dc8c44acdee06c8fc6440db9eae8b4af8b01e4b1aee7bdd7241c22edff4f", + "sha256:34ea30ab3ec98355235972dadc497bb659cc75f8292b760394824fab9cf39826", + "sha256:3d3549fc3e40667ec7199033a4e40a2f669898a00a7b18a931d3efb4c7900504", + "sha256:41836fe661cc98abfae476e14ba1906220f92c4e528771a8a3ae6a151242d2ae", + "sha256:4d44522480e0bf34c3d63167b8cfa7289c1c54264c2950cc5fc26e7850967e45", + "sha256:4eeb195cdedaf17aab6b247894ff2734dcead6c08f748e617bfe05bd5a218443", + "sha256:4f67766965996e63bb46cfbf2ce5355fc32d9dd3b8ad7e536a920ff9ee422e23", + "sha256:57df5dc6fdb5ed1a88a1ed2195fd31927e705cad62dedd86b46972752a80f576", + "sha256:598d9ebc1e796431bbd068e41e4de4dc34312b7aa3292571bb3674a0cb415dd1", + "sha256:5b14e97886199c1f52c14629c11d90c11fbb09e9334fa7bb5f6d068d9ced0ce0", + "sha256:5e22575d169529ac3e0a120cf050ec9daa94b6a9597993d1702884f6954a7d71", + "sha256:60c578c45c949f909a4026b7807044e7e564adf793537fc762b2489d522f3d11", + "sha256:6145afea51ff0af7f2564a05fa95eb46f542919e6523729663a5d285ecb3cf5e", + "sha256:6375cd674fe82d7aa9816d1cb96ec592bac1726c11e0cafbf40eeee9a4516b5f", + "sha256:6854175807af57bdb6425e47adbce7d20a4d79bbfd6f6d6519cd10bb7109a7f8", + "sha256:6ab60a5089a8f02009f127806f777fca82581c49e127f08413a66056bd9166dd", + "sha256:725875a63abf7c399d4548e686debb65cdc2549e1825437096a0af1f7e374814", + "sha256:7492967c3386df69f80cf67efd665c0f667cee67032090fe01d7d74b0e19bb08", + "sha256:81965cc20848ab06583506ef54e37cf15c83c7e619df2ad16807c03100745dea", + "sha256:81c24e0c0fde47a9723c81d5806569cddef103aebbf79dbc9fcbb617153dea30", + "sha256:81eedafa609917040d39aa9332e25881a8e7a0862495fcdf2023a9667209deda", + "sha256:81f413674d85cfd0dfcd6512e10e0f33c19c21860342a4890c3a2b59479929f9", + "sha256:8280856dd7c6a68ab3a164b4a4b1c51f7691f6d04af4d4ca23d6ecf2261b7923", + "sha256:82ca366a844eb551daff9d2e6e7a9e5e76d2612c8564f58db6c19a726869c1df", + "sha256:8b4af17bda11e907c51d10686eda89049f9ce5669b08fbe71a29747f1e876036", + "sha256:90144d3b0c8b139408da50196c5cad2a6909b51b23df1f0538411cd23ffa45d3", + "sha256:906e6b0d7d452e9a98e5ab8507c0da791856b2380fdee61b765632bb8698026f", + 
"sha256:90c11ceb9a1f482c752a71f203a81858625d8df5746d787a4786bca4ffdf71c6", + "sha256:911cc493ebd60de5f285bcae0491a60b4f2a9f0f5c270edd1c4dbaef7a38fc04", + "sha256:9a420a91913092d1e20c86a2f5f1fc85c1a8924dbcaf5e0586df8aceb09c9cc2", + "sha256:9f8c9fdd15a55d9465e590a402f42082705d66b05afc3ffd2d2eb3c6ba919560", + "sha256:a104c5694dfd2d864a6f91b0956eb5d5883234119cb40010115fd45a16da5e70", + "sha256:a373a400f3e9bac95ba2a06372c4fd1412a7cee53c37fc6c05f829bf672b8769", + "sha256:a62448526dd9ed3e3beedc93df9bb6b55a436ed1474db31a2af13b313a70a7e1", + "sha256:a8808d5cf866c781150d36a3c8eb3adccfa41a8105d031bf27e92c251e3969d6", + "sha256:b1f09b6821406ea1f94053f346f28f8215e293344209129a9c0fcc3578598d7b", + "sha256:b2ac41acfc8d965fb0c464eb8f44995770239668956dc4cdf502d1b1ffe0d747", + "sha256:b46fa6eae1cd1c20e6e6f44e19984d438b6b2d8616d21d783d150df714f44078", + "sha256:b50eab9994d64f4a823ff99a0ed28a6903224ddbe7fef56a6dd865eec9243440", + "sha256:bfc9064f6658a3d1cadeaa0ba07570b83ce6801a1314985bf98ec9b95d74e15f", + "sha256:c0b0e5e1b5d9f3586601048dd68f392dc0cc99a59bb5faf18aab057ce00d00b2", + "sha256:c153265408d18de4cc5ded1941dcd8315894572cddd3c58df5d5b5705b3fa28d", + "sha256:d4ae769b9c1c7757e4ccce94b0641bc203bbdf43ba7a2413ab2523d8d047d8dc", + "sha256:dc56c9788617b8964ad02e8fcfeed4001c1f8ba91a9e1f31483c0dffb207002a", + "sha256:dd5ec3aa6ae6e4d5b5de9357d2133c07be1aff6405b136dad753a16afb6717dd", + "sha256:edba70118c4be3c2b1f90754d308d0b79c6fe2c0fdc52d8ddf603916f83f4db9", + "sha256:ff8e80c4c4932c10493ff97028decfdb622de69cae87e0f127a7ebe32b4069c6" + ], + "markers": "python_version >= '3.7'", + "version": "==2.0.41" + }, + "typing-extensions": { + "hashes": [ + "sha256:8676b788e32f02ab42d9e7c61324048ae4c6d844a399eebace3d4979d75ceef4", + "sha256:a1514509136dd0b477638fc68d6a91497af5076466ad0fa6c338e44e359944af" + ], + "markers": "python_version >= '3.9'", + "version": "==4.14.0" + }, + "werkzeug": { + "hashes": [ + "sha256:54b78bf3716d19a65be4fceccc0d1d7b89e608834989dfae50ea87564639213e", + "sha256:60723ce945c19328679790e3282cc758aa4a6040e4bb330f53d30fa546d44746" + ], + "markers": "python_version >= '3.9'", + "version": "==3.1.3" + } + }, + "develop": {} +} diff --git a/staybuddy/server/app.py b/staybuddy/server/app.py new file mode 100644 index 000000000..e59009130 --- /dev/null +++ b/staybuddy/server/app.py @@ -0,0 +1,12 @@ +from app import create_app, db + +app = create_app() + +@app.route('/') +def home(): + return "Hello, Flask!" 
+ +if __name__ == '__main__': + with app.app_context(): + db.create_all() # Create database tables + app.run(debug=True, port=5000) diff --git a/staybuddy/server/app/__init__.py b/staybuddy/server/app/__init__.py new file mode 100644 index 000000000..1997f4ed8 --- /dev/null +++ b/staybuddy/server/app/__init__.py @@ -0,0 +1,41 @@ +from flask import Flask +from flask_sqlalchemy import SQLAlchemy +from flask_migrate import Migrate +from flask_cors import CORS +from flask_bcrypt import Bcrypt +from flask_jwt_extended import JWTManager + +# Initialize extensions +db = SQLAlchemy() +migrate = Migrate() +bcrypt = Bcrypt() +jwt = JWTManager() + +def create_app(): + app = Flask(__name__) + app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db' + app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False + app.config['SECRET_KEY'] = 'secret' + + # Attach extensions + db.init_app(app) + migrate.init_app(app, db) # ✅ Only once and after db.init_app + bcrypt.init_app(app) + jwt.init_app(app) + CORS(app) + + # Import models before migration to register them + from app import models # ✅ Correct position here + + # Register blueprints + from app.routes.auth_routes import auth_bp + from app.routes.stay_routes import stay_bp + from app.routes.booking_routes import booking_bp + from app.routes.review_routes import review_bp + + app.register_blueprint(auth_bp, url_prefix='/auth') + app.register_blueprint(stay_bp) # stay_bp already has /stays prefix + app.register_blueprint(booking_bp, url_prefix='/bookings') + app.register_blueprint(review_bp, url_prefix='/reviews') + + return app diff --git a/staybuddy/server/app/config.py b/staybuddy/server/app/config.py new file mode 100644 index 000000000..11b55744f --- /dev/null +++ b/staybuddy/server/app/config.py @@ -0,0 +1,4 @@ +class Config: + SQLALCHEMY_DATABASE_URI = 'sqlite:///staybuddy.db' + SQLALCHEMY_TRACK_MODIFICATIONS = False + JWT_SECRET_KEY = 'super-secret-key' diff --git a/staybuddy/server/app/models.py b/staybuddy/server/app/models.py new file mode 100644 index 000000000..0fe2d666f --- /dev/null +++ b/staybuddy/server/app/models.py @@ -0,0 +1,66 @@ +from app import db, bcrypt +from werkzeug.security import generate_password_hash, check_password_hash + +class User(db.Model): + id = db.Column(db.Integer, primary_key=True) + username = db.Column(db.String(80), unique=True, nullable=False) + email = db.Column(db.String(120), unique=True, nullable=False) + password_hash = db.Column(db.String(128), nullable=False) + + def set_password(self, password): + self.password_hash = generate_password_hash(password) + + def check_password(self, password): + return check_password_hash(self.password_hash, password) + +class Stay(db.Model): + id = db.Column(db.Integer, primary_key=True) + title = db.Column(db.String(100), nullable=False) + location = db.Column(db.String(100), nullable=False) + price = db.Column(db.Float, nullable=False) + description = db.Column(db.String(255)) + user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False) + + def to_dict(self): + return { + 'id': self.id, + 'title': self.title, + 'location': self.location, + 'price': self.price, + 'description': self.description, + 'user_id': self.user_id + } + +class Booking(db.Model): + id = db.Column(db.Integer, primary_key=True) + stay_id = db.Column(db.Integer, db.ForeignKey('stay.id'), nullable=False) + user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False) + start_date = db.Column(db.String, nullable=False) + end_date = db.Column(db.String, nullable=False) + note = 
db.Column(db.String(255))
+
+    def to_dict(self):
+        return {
+            'id': self.id,
+            'stay_id': self.stay_id,
+            'user_id': self.user_id,
+            'start_date': self.start_date,
+            'end_date': self.end_date,
+            'note': self.note
+        }
+
+class Review(db.Model):
+    id = db.Column(db.Integer, primary_key=True)
+    stay_id = db.Column(db.Integer, db.ForeignKey('stay.id'), nullable=False)
+    user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False)
+    rating = db.Column(db.Integer, nullable=False)
+    comment = db.Column(db.String(255))
+
+    def to_dict(self):
+        return {
+            'id': self.id,
+            'stay_id': self.stay_id,
+            'user_id': self.user_id,
+            'rating': self.rating,
+            'comment': self.comment
+        }
diff --git a/staybuddy/server/app/routes/__init__.py b/staybuddy/server/app/routes/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/staybuddy/server/app/routes/auth_routes.py b/staybuddy/server/app/routes/auth_routes.py
new file mode 100644
index 000000000..fd5c27e36
--- /dev/null
+++ b/staybuddy/server/app/routes/auth_routes.py
@@ -0,0 +1,45 @@
+from flask import Blueprint, request, jsonify
+from app.models import db, User
+from flask_jwt_extended import create_access_token, jwt_required, get_jwt_identity
+
+auth_bp = Blueprint('auth', __name__, url_prefix='/auth')
+
+@auth_bp.route('/signup', methods=['POST'])
+def signup():
+    data = request.get_json()
+    username = data.get('username')
+    email = data.get('email')
+    password = data.get('password')
+
+    if not username or not email or not password:
+        return jsonify({'error': 'All fields are required'}), 400
+
+    if User.query.filter_by(email=email).first():
+        return jsonify({'error': 'Email already in use'}), 409
+
+    user = User(username=username, email=email)
+    user.set_password(password)
+    db.session.add(user)
+    db.session.commit()
+
+    token = create_access_token(identity=str(user.id))  # the JWT 'sub' claim must be a string with the pinned PyJWT/Flask-JWT-Extended versions
+    return jsonify({'token': token, 'user': {'id': user.id, 'username': user.username}}), 201
+
+@auth_bp.route('/login', methods=['POST'])
+def login():
+    data = request.get_json()
+    email = data.get('email')
+    password = data.get('password')
+
+    user = User.query.filter_by(email=email).first()
+    if user and user.check_password(password):
+        token = create_access_token(identity=str(user.id))
+        return jsonify({'token': token, 'user': {'id': user.id, 'username': user.username}})
+    return jsonify({'error': 'Invalid credentials'}), 401
+
+@auth_bp.route('/check_session', methods=['GET'])
+@jwt_required()
+def check_session():
+    user_id = int(get_jwt_identity())  # the token stores the identity as a string
+    user = User.query.get(user_id)
+    return jsonify({'id': user.id, 'username': user.username})
\ No newline at end of file
diff --git a/staybuddy/server/app/routes/booking_routes.py b/staybuddy/server/app/routes/booking_routes.py
new file mode 100644
index 000000000..29b898c27
--- /dev/null
+++ b/staybuddy/server/app/routes/booking_routes.py
@@ -0,0 +1,29 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required, get_jwt_identity
+from app import db
+from app.models import Booking
+
+booking_bp = Blueprint('bookings', __name__, url_prefix='/bookings')
+
+@booking_bp.route('/', methods=['POST'])
+@jwt_required()
+def create_booking():
+    user_id = int(get_jwt_identity())  # cast back from the string identity
+    data = request.get_json()
+    booking = Booking(
+        stay_id=data['stay_id'],
+        user_id=user_id,
+        start_date=data['start_date'],
+        end_date=data['end_date'],
+        note=data.get('note', '')
+    )
+    db.session.add(booking)
+    db.session.commit()
+    return jsonify(booking.to_dict()), 201
+
+@booking_bp.route('/', methods=['GET'])
+@jwt_required()
+def user_bookings():
+    user_id = int(get_jwt_identity())
+    bookings = Booking.query.filter_by(user_id=user_id).all()
+    return jsonify([b.to_dict() for b in bookings])
\ No newline at end of file
diff --git a/staybuddy/server/app/routes/review_routes.py b/staybuddy/server/app/routes/review_routes.py
new file mode 100644
index 000000000..cb100bb3e
--- /dev/null
+++ b/staybuddy/server/app/routes/review_routes.py
@@ -0,0 +1,26 @@
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required, get_jwt_identity
+from app import db
+from app.models import Review
+
+review_bp = Blueprint('reviews', __name__, url_prefix='/reviews')
+
+@review_bp.route('/', methods=['POST'])
+@jwt_required()
+def create_review():
+    user_id = int(get_jwt_identity())
+    data = request.get_json()
+    review = Review(
+        stay_id=data['stay_id'],
+        user_id=user_id,
+        rating=data['rating'],
+        comment=data.get('comment', '')
+    )
+    db.session.add(review)
+    db.session.commit()
+    return jsonify(review.to_dict()), 201
+
+@review_bp.route('/stay/<int:stay_id>', methods=['GET'])
+def get_reviews_for_stay(stay_id):
+    reviews = Review.query.filter_by(stay_id=stay_id).all()
+    return jsonify([r.to_dict() for r in reviews])
diff --git a/staybuddy/server/app/routes/stay_routes.py b/staybuddy/server/app/routes/stay_routes.py
new file mode 100644
index 000000000..756581b20
--- /dev/null
+++ b/staybuddy/server/app/routes/stay_routes.py
@@ -0,0 +1,60 @@
+# server/app/routes/stay_routes.py
+from flask import Blueprint, request, jsonify
+from flask_jwt_extended import jwt_required, get_jwt_identity
+from app import db
+from app.models import Stay, User  # assume Stay model exists
+
+stay_bp = Blueprint('stays', __name__, url_prefix='/stays')
+
+@stay_bp.route('/', methods=['GET'])
+def get_stays():
+    stays = Stay.query.all()
+    return jsonify([stay.to_dict() for stay in stays])
+
+@stay_bp.route('/<int:id>', methods=['GET'])
+def get_stay(id):
+    stay = Stay.query.get_or_404(id)
+    return jsonify(stay.to_dict())
+
+@stay_bp.route('/', methods=['POST'])
+@jwt_required()
+def create_stay():
+    user_id = int(get_jwt_identity())
+    data = request.get_json()
+    stay = Stay(
+        title=data['title'],
+        location=data['location'],
+        price=data['price'],
+        description=data.get('description', ''),
+        user_id=user_id
+    )
+    db.session.add(stay)
+    db.session.commit()
+    return jsonify(stay.to_dict()), 201
+
+@stay_bp.route('/<int:id>', methods=['PATCH'])
+@jwt_required()
+def update_stay(id):
+    stay = Stay.query.get_or_404(id)
+    user_id = int(get_jwt_identity())
+    if stay.user_id != user_id:
+        return jsonify({'error': 'Unauthorized'}), 403
+
+    data = request.get_json()
+    for field in ['title', 'location', 'price', 'description']:
+        if field in data:
+            setattr(stay, field, data[field])
+    db.session.commit()
+    return jsonify(stay.to_dict())
+
+@stay_bp.route('/<int:id>', methods=['DELETE'])
+@jwt_required()
+def delete_stay(id):
+    stay = Stay.query.get_or_404(id)
+    user_id = int(get_jwt_identity())
+    if stay.user_id != user_id:
+        return jsonify({'error': 'Unauthorized'}), 403
+
+    db.session.delete(stay)
+    db.session.commit()
+    return jsonify({'message': 'Stay deleted'})
\ No newline at end of file
diff --git a/staybuddy/server/migrations/README b/staybuddy/server/migrations/README
new file mode 100644
index 000000000..0e0484415
--- /dev/null
+++ b/staybuddy/server/migrations/README
@@ -0,0 +1 @@
+Single-database configuration for Flask.
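Taken together, the blueprints above give the server a small JSON API: `/auth` for signup, login, and session checks, `/stays` for public listings and owner-only edits, and `/bookings` and `/reviews` for token-protected writes. A quick smoke test from the command line might look like the sketch below; it assumes the dev server from `app.py` is running on port 5000 and that `curl` and `jq` are installed (the demo payload values are made up):

```console
$ # Sign up and capture the JWT that the /auth/signup handler returns
$ TOKEN=$(curl -s -X POST http://localhost:5000/auth/signup \
    -H "Content-Type: application/json" \
    -d '{"username": "demo", "email": "demo@example.com", "password": "pass123"}' \
    | jq -r '.token')

$ # Create a stay; protected routes expect the default "Authorization: Bearer" header
$ curl -s -X POST http://localhost:5000/stays/ \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $TOKEN" \
    -d '{"title": "Demo Loft", "location": "Nairobi", "price": 2000}'

$ # Public reads need no token
$ curl -s http://localhost:5000/stays/
$ curl -s http://localhost:5000/reviews/stay/1
```

Note that `stay_bp` is the only blueprint registered without an explicit `url_prefix` argument in `app/__init__.py`; its `/stays` prefix is baked into the `Blueprint(...)` call itself.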
diff --git a/staybuddy/server/migrations/alembic.ini b/staybuddy/server/migrations/alembic.ini new file mode 100644 index 000000000..ec9d45c26 --- /dev/null +++ b/staybuddy/server/migrations/alembic.ini @@ -0,0 +1,50 @@ +# A generic, single database configuration. + +[alembic] +# template used to generate migration files +# file_template = %%(rev)s_%%(slug)s + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + + +# Logging configuration +[loggers] +keys = root,sqlalchemy,alembic,flask_migrate + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARN +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARN +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[logger_flask_migrate] +level = INFO +handlers = +qualname = flask_migrate + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/staybuddy/server/migrations/env.py b/staybuddy/server/migrations/env.py new file mode 100644 index 000000000..4c9709271 --- /dev/null +++ b/staybuddy/server/migrations/env.py @@ -0,0 +1,113 @@ +import logging +from logging.config import fileConfig + +from flask import current_app + +from alembic import context + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +fileConfig(config.config_file_name) +logger = logging.getLogger('alembic.env') + + +def get_engine(): + try: + # this works with Flask-SQLAlchemy<3 and Alchemical + return current_app.extensions['migrate'].db.get_engine() + except (TypeError, AttributeError): + # this works with Flask-SQLAlchemy>=3 + return current_app.extensions['migrate'].db.engine + + +def get_engine_url(): + try: + return get_engine().url.render_as_string(hide_password=False).replace( + '%', '%%') + except AttributeError: + return str(get_engine().url).replace('%', '%%') + + +# add your model's MetaData object here +# for 'autogenerate' support +# from myapp import mymodel +# target_metadata = mymodel.Base.metadata +config.set_main_option('sqlalchemy.url', get_engine_url()) +target_db = current_app.extensions['migrate'].db + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def get_metadata(): + if hasattr(target_db, 'metadatas'): + return target_db.metadatas[None] + return target_db.metadata + + +def run_migrations_offline(): + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. + + """ + url = config.get_main_option("sqlalchemy.url") + context.configure( + url=url, target_metadata=get_metadata(), literal_binds=True + ) + + with context.begin_transaction(): + context.run_migrations() + + +def run_migrations_online(): + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. 
+
+    """
+
+    # this callback is used to prevent an auto-migration from being generated
+    # when there are no changes to the schema
+    # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html
+    def process_revision_directives(context, revision, directives):
+        if getattr(config.cmd_opts, 'autogenerate', False):
+            script = directives[0]
+            if script.upgrade_ops.is_empty():
+                directives[:] = []
+                logger.info('No changes in schema detected.')
+
+    conf_args = current_app.extensions['migrate'].configure_args
+    if conf_args.get("process_revision_directives") is None:
+        conf_args["process_revision_directives"] = process_revision_directives
+
+    connectable = get_engine()
+
+    with connectable.connect() as connection:
+        context.configure(
+            connection=connection,
+            target_metadata=get_metadata(),
+            **conf_args
+        )
+
+        with context.begin_transaction():
+            context.run_migrations()
+
+
+if context.is_offline_mode():
+    run_migrations_offline()
+else:
+    run_migrations_online()
diff --git a/staybuddy/server/migrations/script.py.mako b/staybuddy/server/migrations/script.py.mako
new file mode 100644
index 000000000..2c0156303
--- /dev/null
+++ b/staybuddy/server/migrations/script.py.mako
@@ -0,0 +1,24 @@
+"""${message}
+
+Revision ID: ${up_revision}
+Revises: ${down_revision | comma,n}
+Create Date: ${create_date}
+
+"""
+from alembic import op
+import sqlalchemy as sa
+${imports if imports else ""}
+
+# revision identifiers, used by Alembic.
+revision = ${repr(up_revision)}
+down_revision = ${repr(down_revision)}
+branch_labels = ${repr(branch_labels)}
+depends_on = ${repr(depends_on)}
+
+
+def upgrade():
+    ${upgrades if upgrades else "pass"}
+
+
+def downgrade():
+    ${downgrades if downgrades else "pass"}
diff --git a/staybuddy/server/requirements.txt b/staybuddy/server/requirements.txt
new file mode 100644
index 000000000..b321844c9
--- /dev/null
+++ b/staybuddy/server/requirements.txt
@@ -0,0 +1,20 @@
+alembic>=1.9.0
+bcrypt==4.3.0
+blinker==1.9.0
+click==8.2.1
+Flask==3.1.1
+Flask-Bcrypt==1.0.1
+flask-cors==6.0.1
+Flask-JWT-Extended==4.7.1
+Flask-Migrate==4.1.0
+Flask-SQLAlchemy==3.1.1
+greenlet==3.2.3
+importlib_metadata==8.5.0
+itsdangerous==2.2.0
+Jinja2==3.1.6
+MarkupSafe==3.0.2
+PyJWT==2.10.1
+SQLAlchemy==2.0.41
+typing_extensions==4.14.0
+Werkzeug==3.1.3
+zipp==3.20.2
diff --git a/staybuddy/server/seed.py b/staybuddy/server/seed.py
new file mode 100644
index 000000000..ec1921184
--- /dev/null
+++ b/staybuddy/server/seed.py
@@ -0,0 +1,58 @@
+# server/seed.py
+
+from app import create_app, db
+from datetime import date
+
+app = create_app()
+
+with app.app_context():
+    print("🌱 Seeding database...")
+
+    # Import models after app context is established
+    from app.models import User, Stay, Booking, Review
+
+    # Drop and recreate all tables
+    db.drop_all()
+    db.create_all()
+
+    print("✅ Database tables created!")
+
+    # --------------------
+    # Create Users
+    # --------------------
+    user1 = User(username="john", email="john@example.com")
+    user1.set_password("secret123")
+    user2 = User(username="sarah", email="sarah@example.com")
+    user2.set_password("password456")
+
+    db.session.add_all([user1, user2])
+    db.session.commit()
+
+    # --------------------
+    # Create Stays
+    # --------------------
+    stay1 = Stay(title="Ocean View Apartment", location="Mombasa", price=3500, description="Enjoy a sea breeze with a coastal view.", user_id=user1.id)
+    stay2 = Stay(title="Nairobi CBD Loft", location="Nairobi", price=2800, description="Perfect for business trips!", user_id=user2.id)
+
+    db.session.add_all([stay1, stay2])
+    db.session.commit()
+
+    # --------------------
+    # Create Bookings
+    # --------------------
+    booking1 = Booking(user_id=user2.id, stay_id=stay1.id, start_date=date(2025, 7, 1).isoformat(), end_date=date(2025, 7, 4).isoformat(), note="Anniversary getaway")  # ISO strings match the String columns on Booking
+    booking2 = Booking(user_id=user1.id, stay_id=stay2.id, start_date=date(2025, 8, 10).isoformat(), end_date=date(2025, 8, 13).isoformat(), note="Business trip")
+
+    db.session.add_all([booking1, booking2])
+    db.session.commit()
+
+    # --------------------
+    # Create Reviews
+    # --------------------
+    review1 = Review(user_id=user2.id, stay_id=stay1.id, rating=5, comment="Amazing view and hospitality!")
+    review2 = Review(user_id=user1.id, stay_id=stay2.id, rating=4, comment="Very central and clean.")
+
+    db.session.add_all([review1, review2])
+    db.session.commit()
+
+    print("✅ Done seeding!")
diff --git a/staybuddy/server/venv/bin/Activate.ps1 b/staybuddy/server/venv/bin/Activate.ps1
new file mode 100644
index 000000000..b49d77ba4
--- /dev/null
+++ b/staybuddy/server/venv/bin/Activate.ps1
@@ -0,0 +1,247 @@
+<#
+.Synopsis
+Activate a Python virtual environment for the current PowerShell session.
+
+.Description
+Pushes the python executable for a virtual environment to the front of the
+$Env:PATH environment variable and sets the prompt to signify that you are
+in a Python virtual environment. Makes use of the command line switches as
+well as the `pyvenv.cfg` file values present in the virtual environment.
+
+.Parameter VenvDir
+Path to the directory that contains the virtual environment to activate. The
+default value for this is the parent of the directory that the Activate.ps1
+script is located within.
+
+.Parameter Prompt
+The prompt prefix to display when this virtual environment is activated. By
+default, this prompt is the name of the virtual environment folder (VenvDir)
+surrounded by parentheses and followed by a single space (ie. '(.venv) ').
+
+.Example
+Activate.ps1
+Activates the Python virtual environment that contains the Activate.ps1 script.
+
+.Example
+Activate.ps1 -Verbose
+Activates the Python virtual environment that contains the Activate.ps1 script,
+and shows extra information about the activation as it executes.
+
+.Example
+Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv
+Activates the Python virtual environment located in the specified location.
+
+.Example
+Activate.ps1 -Prompt "MyPython"
+Activates the Python virtual environment that contains the Activate.ps1 script,
+and prefixes the current prompt with the specified string (surrounded in
+parentheses) while the virtual environment is active.
+
+.Notes
+On Windows, it may be required to enable this Activate.ps1 script by setting the
+execution policy for the user. You can do this by issuing the following PowerShell
+command:
+
+PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
+
+For more information on Execution Policies:
+https://go.microsoft.com/fwlink/?LinkID=135170
+
+#>
+Param(
+    [Parameter(Mandatory = $false)]
+    [String]
+    $VenvDir,
+    [Parameter(Mandatory = $false)]
+    [String]
+    $Prompt
+)

+<# Function declarations --------------------------------------------------- #>
+
+<#
+.Synopsis
+Remove all shell session elements added by the Activate script, including the
+addition of the virtual environment's Python executable from the beginning of
+the PATH variable.
+
+.Parameter NonDestructive
+If present, do not remove this function from the global namespace for the
+session.
+ +#> +function global:deactivate ([switch]$NonDestructive) { + # Revert to original values + + # The prior prompt: + if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) { + Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt + Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT + } + + # The prior PYTHONHOME: + if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) { + Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME + Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME + } + + # The prior PATH: + if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) { + Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH + Remove-Item -Path Env:_OLD_VIRTUAL_PATH + } + + # Just remove the VIRTUAL_ENV altogether: + if (Test-Path -Path Env:VIRTUAL_ENV) { + Remove-Item -Path env:VIRTUAL_ENV + } + + # Just remove VIRTUAL_ENV_PROMPT altogether. + if (Test-Path -Path Env:VIRTUAL_ENV_PROMPT) { + Remove-Item -Path env:VIRTUAL_ENV_PROMPT + } + + # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether: + if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) { + Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force + } + + # Leave deactivate function in the global namespace if requested: + if (-not $NonDestructive) { + Remove-Item -Path function:deactivate + } +} + +<# +.Description +Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the +given folder, and returns them in a map. + +For each line in the pyvenv.cfg file, if that line can be parsed into exactly +two strings separated by `=` (with any amount of whitespace surrounding the =) +then it is considered a `key = value` line. The left hand string is the key, +the right hand is the value. + +If the value starts with a `'` or a `"` then the first and last character is +stripped from the value before being captured. + +.Parameter ConfigDir +Path to the directory that contains the `pyvenv.cfg` file. +#> +function Get-PyVenvConfig( + [String] + $ConfigDir +) { + Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg" + + # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue). + $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue + + # An empty map will be returned if no config file is found. + $pyvenvConfig = @{ } + + if ($pyvenvConfigPath) { + + Write-Verbose "File exists, parse `key = value` lines" + $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath + + $pyvenvConfigContent | ForEach-Object { + $keyval = $PSItem -split "\s*=\s*", 2 + if ($keyval[0] -and $keyval[1]) { + $val = $keyval[1] + + # Remove extraneous quotations around a string value. 
+ if ("'""".Contains($val.Substring(0, 1))) { + $val = $val.Substring(1, $val.Length - 2) + } + + $pyvenvConfig[$keyval[0]] = $val + Write-Verbose "Adding Key: '$($keyval[0])'='$val'" + } + } + } + return $pyvenvConfig +} + + +<# Begin Activate script --------------------------------------------------- #> + +# Determine the containing directory of this script +$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition +$VenvExecDir = Get-Item -Path $VenvExecPath + +Write-Verbose "Activation script is located in path: '$VenvExecPath'" +Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)" +Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)" + +# Set values required in priority: CmdLine, ConfigFile, Default +# First, get the location of the virtual environment, it might not be +# VenvExecDir if specified on the command line. +if ($VenvDir) { + Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values" +} +else { + Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir." + $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/") + Write-Verbose "VenvDir=$VenvDir" +} + +# Next, read the `pyvenv.cfg` file to determine any required value such +# as `prompt`. +$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir + +# Next, set the prompt from the command line, or the config file, or +# just use the name of the virtual environment folder. +if ($Prompt) { + Write-Verbose "Prompt specified as argument, using '$Prompt'" +} +else { + Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value" + if ($pyvenvCfg -and $pyvenvCfg['prompt']) { + Write-Verbose " Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'" + $Prompt = $pyvenvCfg['prompt']; + } + else { + Write-Verbose " Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virtual environment)" + Write-Verbose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'" + $Prompt = Split-Path -Path $venvDir -Leaf + } +} + +Write-Verbose "Prompt = '$Prompt'" +Write-Verbose "VenvDir='$VenvDir'" + +# Deactivate any currently active virtual environment, but leave the +# deactivate function in place. +deactivate -nondestructive + +# Now set the environment variable VIRTUAL_ENV, used by many tools to determine +# that there is an activated venv. 
+$env:VIRTUAL_ENV = $VenvDir + +if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) { + + Write-Verbose "Setting prompt to '$Prompt'" + + # Set the prompt to include the env name + # Make sure _OLD_VIRTUAL_PROMPT is global + function global:_OLD_VIRTUAL_PROMPT { "" } + Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT + New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt + + function global:prompt { + Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) " + _OLD_VIRTUAL_PROMPT + } + $env:VIRTUAL_ENV_PROMPT = $Prompt +} + +# Clear PYTHONHOME +if (Test-Path -Path Env:PYTHONHOME) { + Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME + Remove-Item -Path Env:PYTHONHOME +} + +# Add the venv to the PATH +Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH +$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH" diff --git a/staybuddy/server/venv/bin/activate b/staybuddy/server/venv/bin/activate new file mode 100644 index 000000000..f3ace2e3d --- /dev/null +++ b/staybuddy/server/venv/bin/activate @@ -0,0 +1,69 @@ +# This file must be used with "source bin/activate" *from bash* +# you cannot run it directly + +deactivate () { + # reset old environment variables + if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then + PATH="${_OLD_VIRTUAL_PATH:-}" + export PATH + unset _OLD_VIRTUAL_PATH + fi + if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then + PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}" + export PYTHONHOME + unset _OLD_VIRTUAL_PYTHONHOME + fi + + # This should detect bash and zsh, which have a hash command that must + # be called to get it to forget past commands. Without forgetting + # past commands the $PATH changes we made may not be respected + if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then + hash -r 2> /dev/null + fi + + if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then + PS1="${_OLD_VIRTUAL_PS1:-}" + export PS1 + unset _OLD_VIRTUAL_PS1 + fi + + unset VIRTUAL_ENV + unset VIRTUAL_ENV_PROMPT + if [ ! "${1:-}" = "nondestructive" ] ; then + # Self destruct! + unset -f deactivate + fi +} + +# unset irrelevant variables +deactivate nondestructive + +VIRTUAL_ENV="/home/john-howard/assignment/phase4/python-p4-project-template/staybuddy/server/venv" +export VIRTUAL_ENV + +_OLD_VIRTUAL_PATH="$PATH" +PATH="$VIRTUAL_ENV/bin:$PATH" +export PATH + +# unset PYTHONHOME if set +# this will fail if PYTHONHOME is set to the empty string (which is bad anyway) +# could use `if (set -u; : $PYTHONHOME) ;` in bash +if [ -n "${PYTHONHOME:-}" ] ; then + _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}" + unset PYTHONHOME +fi + +if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then + _OLD_VIRTUAL_PS1="${PS1:-}" + PS1="(venv) ${PS1:-}" + export PS1 + VIRTUAL_ENV_PROMPT="(venv) " + export VIRTUAL_ENV_PROMPT +fi + +# This should detect bash and zsh, which have a hash command that must +# be called to get it to forget past commands. Without forgetting +# past commands the $PATH changes we made may not be respected +if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then + hash -r 2> /dev/null +fi diff --git a/staybuddy/server/venv/bin/activate.csh b/staybuddy/server/venv/bin/activate.csh new file mode 100644 index 000000000..a430b3ee7 --- /dev/null +++ b/staybuddy/server/venv/bin/activate.csh @@ -0,0 +1,26 @@ +# This file must be used with "source bin/activate.csh" *from csh*. +# You cannot run it directly. +# Created by Davide Di Blasi . 
+# Ported to Python 3.3 venv by Andrew Svetlov + +alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; unsetenv VIRTUAL_ENV_PROMPT; test "\!:*" != "nondestructive" && unalias deactivate' + +# Unset irrelevant variables. +deactivate nondestructive + +setenv VIRTUAL_ENV "/home/john-howard/assignment/phase4/python-p4-project-template/staybuddy/server/venv" + +set _OLD_VIRTUAL_PATH="$PATH" +setenv PATH "$VIRTUAL_ENV/bin:$PATH" + + +set _OLD_VIRTUAL_PROMPT="$prompt" + +if (! "$?VIRTUAL_ENV_DISABLE_PROMPT") then + set prompt = "(venv) $prompt" + setenv VIRTUAL_ENV_PROMPT "(venv) " +endif + +alias pydoc python -m pydoc + +rehash diff --git a/staybuddy/server/venv/bin/activate.fish b/staybuddy/server/venv/bin/activate.fish new file mode 100644 index 000000000..c990cabb9 --- /dev/null +++ b/staybuddy/server/venv/bin/activate.fish @@ -0,0 +1,69 @@ +# This file must be used with "source /bin/activate.fish" *from fish* +# (https://fishshell.com/); you cannot run it directly. + +function deactivate -d "Exit virtual environment and return to normal shell environment" + # reset old environment variables + if test -n "$_OLD_VIRTUAL_PATH" + set -gx PATH $_OLD_VIRTUAL_PATH + set -e _OLD_VIRTUAL_PATH + end + if test -n "$_OLD_VIRTUAL_PYTHONHOME" + set -gx PYTHONHOME $_OLD_VIRTUAL_PYTHONHOME + set -e _OLD_VIRTUAL_PYTHONHOME + end + + if test -n "$_OLD_FISH_PROMPT_OVERRIDE" + set -e _OLD_FISH_PROMPT_OVERRIDE + # prevents error when using nested fish instances (Issue #93858) + if functions -q _old_fish_prompt + functions -e fish_prompt + functions -c _old_fish_prompt fish_prompt + functions -e _old_fish_prompt + end + end + + set -e VIRTUAL_ENV + set -e VIRTUAL_ENV_PROMPT + if test "$argv[1]" != "nondestructive" + # Self-destruct! + functions -e deactivate + end +end + +# Unset irrelevant variables. +deactivate nondestructive + +set -gx VIRTUAL_ENV "/home/john-howard/assignment/phase4/python-p4-project-template/staybuddy/server/venv" + +set -gx _OLD_VIRTUAL_PATH $PATH +set -gx PATH "$VIRTUAL_ENV/bin" $PATH + +# Unset PYTHONHOME if set. +if set -q PYTHONHOME + set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME + set -e PYTHONHOME +end + +if test -z "$VIRTUAL_ENV_DISABLE_PROMPT" + # fish uses a function instead of an env var to generate the prompt. + + # Save the current fish_prompt function as the function _old_fish_prompt. + functions -c fish_prompt _old_fish_prompt + + # With the original prompt function renamed, we can override with our own. + function fish_prompt + # Save the return status of the last command. + set -l old_status $status + + # Output the venv prompt; color taken from the blue of the Python logo. + printf "%s%s%s" (set_color 4B8BBE) "(venv) " (set_color normal) + + # Restore the return status of the previous command. + echo "exit $old_status" | . + # Output the original/"old" prompt. 
+ _old_fish_prompt + end + + set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV" + set -gx VIRTUAL_ENV_PROMPT "(venv) " +end diff --git a/staybuddy/server/venv/bin/alembic b/staybuddy/server/venv/bin/alembic new file mode 100755 index 000000000..bfdd94988 --- /dev/null +++ b/staybuddy/server/venv/bin/alembic @@ -0,0 +1,8 @@ +#!/home/john-howard/assignment/phase4/python-p4-project-template/staybuddy/server/venv/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from alembic.config import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/staybuddy/server/venv/bin/flask b/staybuddy/server/venv/bin/flask new file mode 100755 index 000000000..00b53b5d6 --- /dev/null +++ b/staybuddy/server/venv/bin/flask @@ -0,0 +1,8 @@ +#!/home/john-howard/assignment/phase4/python-p4-project-template/staybuddy/server/venv/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from flask.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/staybuddy/server/venv/bin/mako-render b/staybuddy/server/venv/bin/mako-render new file mode 100755 index 000000000..02280acab --- /dev/null +++ b/staybuddy/server/venv/bin/mako-render @@ -0,0 +1,8 @@ +#!/home/john-howard/assignment/phase4/python-p4-project-template/staybuddy/server/venv/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from mako.cmd import cmdline +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(cmdline()) diff --git a/staybuddy/server/venv/bin/pip b/staybuddy/server/venv/bin/pip new file mode 100755 index 000000000..37d67f059 --- /dev/null +++ b/staybuddy/server/venv/bin/pip @@ -0,0 +1,8 @@ +#!/home/john-howard/assignment/phase4/python-p4-project-template/staybuddy/server/venv/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/staybuddy/server/venv/bin/pip3 b/staybuddy/server/venv/bin/pip3 new file mode 100755 index 000000000..37d67f059 --- /dev/null +++ b/staybuddy/server/venv/bin/pip3 @@ -0,0 +1,8 @@ +#!/home/john-howard/assignment/phase4/python-p4-project-template/staybuddy/server/venv/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/staybuddy/server/venv/bin/pip3.10 b/staybuddy/server/venv/bin/pip3.10 new file mode 100755 index 000000000..37d67f059 --- /dev/null +++ b/staybuddy/server/venv/bin/pip3.10 @@ -0,0 +1,8 @@ +#!/home/john-howard/assignment/phase4/python-p4-project-template/staybuddy/server/venv/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/staybuddy/server/venv/bin/python b/staybuddy/server/venv/bin/python new file mode 120000 index 000000000..b8a0adbbb --- /dev/null +++ b/staybuddy/server/venv/bin/python @@ -0,0 +1 @@ +python3 \ No newline at end of file diff --git a/staybuddy/server/venv/bin/python3 b/staybuddy/server/venv/bin/python3 new file mode 120000 index 000000000..94fbc6637 --- /dev/null +++ b/staybuddy/server/venv/bin/python3 @@ -0,0 +1 @@ 
+/home/john-howard/.pyenv/versions/3.10.13/bin/python3 \ No newline at end of file diff --git a/staybuddy/server/venv/bin/python3.10 b/staybuddy/server/venv/bin/python3.10 new file mode 120000 index 000000000..b8a0adbbb --- /dev/null +++ b/staybuddy/server/venv/bin/python3.10 @@ -0,0 +1 @@ +python3 \ No newline at end of file diff --git a/staybuddy/server/venv/include/site/python3.10/greenlet/greenlet.h b/staybuddy/server/venv/include/site/python3.10/greenlet/greenlet.h new file mode 100644 index 000000000..d02a16e43 --- /dev/null +++ b/staybuddy/server/venv/include/site/python3.10/greenlet/greenlet.h @@ -0,0 +1,164 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ + +/* Greenlet object interface */ + +#ifndef Py_GREENLETOBJECT_H +#define Py_GREENLETOBJECT_H + + +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/* This is deprecated and undocumented. It does not change. */ +#define GREENLET_VERSION "1.0.0" + +#ifndef GREENLET_MODULE +#define implementation_ptr_t void* +#endif + +typedef struct _greenlet { + PyObject_HEAD + PyObject* weakreflist; + PyObject* dict; + implementation_ptr_t pimpl; +} PyGreenlet; + +#define PyGreenlet_Check(op) (op && PyObject_TypeCheck(op, &PyGreenlet_Type)) + + +/* C API functions */ + +/* Total number of symbols that are exported */ +#define PyGreenlet_API_pointers 12 + +#define PyGreenlet_Type_NUM 0 +#define PyExc_GreenletError_NUM 1 +#define PyExc_GreenletExit_NUM 2 + +#define PyGreenlet_New_NUM 3 +#define PyGreenlet_GetCurrent_NUM 4 +#define PyGreenlet_Throw_NUM 5 +#define PyGreenlet_Switch_NUM 6 +#define PyGreenlet_SetParent_NUM 7 + +#define PyGreenlet_MAIN_NUM 8 +#define PyGreenlet_STARTED_NUM 9 +#define PyGreenlet_ACTIVE_NUM 10 +#define PyGreenlet_GET_PARENT_NUM 11 + +#ifndef GREENLET_MODULE +/* This section is used by modules that uses the greenlet C API */ +static void** _PyGreenlet_API = NULL; + +# define PyGreenlet_Type \ + (*(PyTypeObject*)_PyGreenlet_API[PyGreenlet_Type_NUM]) + +# define PyExc_GreenletError \ + ((PyObject*)_PyGreenlet_API[PyExc_GreenletError_NUM]) + +# define PyExc_GreenletExit \ + ((PyObject*)_PyGreenlet_API[PyExc_GreenletExit_NUM]) + +/* + * PyGreenlet_New(PyObject *args) + * + * greenlet.greenlet(run, parent=None) + */ +# define PyGreenlet_New \ + (*(PyGreenlet * (*)(PyObject * run, PyGreenlet * parent)) \ + _PyGreenlet_API[PyGreenlet_New_NUM]) + +/* + * PyGreenlet_GetCurrent(void) + * + * greenlet.getcurrent() + */ +# define PyGreenlet_GetCurrent \ + (*(PyGreenlet * (*)(void)) _PyGreenlet_API[PyGreenlet_GetCurrent_NUM]) + +/* + * PyGreenlet_Throw( + * PyGreenlet *greenlet, + * PyObject *typ, + * PyObject *val, + * PyObject *tb) + * + * g.throw(...) + */ +# define PyGreenlet_Throw \ + (*(PyObject * (*)(PyGreenlet * self, \ + PyObject * typ, \ + PyObject * val, \ + PyObject * tb)) \ + _PyGreenlet_API[PyGreenlet_Throw_NUM]) + +/* + * PyGreenlet_Switch(PyGreenlet *greenlet, PyObject *args) + * + * g.switch(*args, **kwargs) + */ +# define PyGreenlet_Switch \ + (*(PyObject * \ + (*)(PyGreenlet * greenlet, PyObject * args, PyObject * kwargs)) \ + _PyGreenlet_API[PyGreenlet_Switch_NUM]) + +/* + * PyGreenlet_SetParent(PyObject *greenlet, PyObject *new_parent) + * + * g.parent = new_parent + */ +# define PyGreenlet_SetParent \ + (*(int (*)(PyGreenlet * greenlet, PyGreenlet * nparent)) \ + _PyGreenlet_API[PyGreenlet_SetParent_NUM]) + +/* + * PyGreenlet_GetParent(PyObject* greenlet) + * + * return greenlet.parent; + * + * This could return NULL even if there is no exception active. 
+ * If it does not return NULL, you are responsible for decrementing the + * reference count. + */ +# define PyGreenlet_GetParent \ + (*(PyGreenlet* (*)(PyGreenlet*)) \ + _PyGreenlet_API[PyGreenlet_GET_PARENT_NUM]) + +/* + * deprecated, undocumented alias. + */ +# define PyGreenlet_GET_PARENT PyGreenlet_GetParent + +# define PyGreenlet_MAIN \ + (*(int (*)(PyGreenlet*)) \ + _PyGreenlet_API[PyGreenlet_MAIN_NUM]) + +# define PyGreenlet_STARTED \ + (*(int (*)(PyGreenlet*)) \ + _PyGreenlet_API[PyGreenlet_STARTED_NUM]) + +# define PyGreenlet_ACTIVE \ + (*(int (*)(PyGreenlet*)) \ + _PyGreenlet_API[PyGreenlet_ACTIVE_NUM]) + + + + +/* Macro that imports greenlet and initializes C API */ +/* NOTE: This has actually moved to ``greenlet._greenlet._C_API``, but we + keep the older definition to be sure older code that might have a copy of + the header still works. */ +# define PyGreenlet_Import() \ + { \ + _PyGreenlet_API = (void**)PyCapsule_Import("greenlet._C_API", 0); \ + } + +#endif /* GREENLET_MODULE */ + +#ifdef __cplusplus +} +#endif +#endif /* !Py_GREENLETOBJECT_H */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/LICENSE b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/LICENSE new file mode 100644 index 000000000..237ce8a36 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/LICENSE @@ -0,0 +1,31 @@ +Copyright (c) 2011 by Max Countryman. + +Some rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +* Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +* Redistributions in binary form must reproduce the above + copyright notice, this list of conditions and the following + disclaimer in the documentation and/or other materials provided + with the distribution. + +* The names of the contributors may not be used to endorse or + promote products derived from this software without specific + prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
\ No newline at end of file diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/METADATA new file mode 100644 index 000000000..7f0b3869f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/METADATA @@ -0,0 +1,74 @@ +Metadata-Version: 2.1 +Name: Flask-Bcrypt +Version: 1.0.1 +Summary: Brcrypt hashing for Flask. +Home-page: https://github.com/maxcountryman/flask-bcrypt +Author: Max Countryman +Author-email: maxc@me.com +License: BSD +Platform: any +Classifier: Environment :: Web Environment +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Description-Content-Type: text/markdown +Requires-Dist: Flask +Requires-Dist: bcrypt (>=3.1.1) + +[![Tests](https://img.shields.io/github/workflow/status/maxcountryman/flask-bcrypt/Tests/master?label=tests)](https://github.com/maxcountryman/flask-bcrypt/actions) +[![Version](https://img.shields.io/pypi/v/Flask-Bcrypt.svg)](https://pypi.python.org/pypi/Flask-Bcrypt) +[![Supported Python Versions](https://img.shields.io/pypi/pyversions/Flask-Bcrypt.svg)](https://pypi.python.org/pypi/Flask-Bcrypt) + +# Flask-Bcrypt + +Flask-Bcrypt is a Flask extension that provides bcrypt hashing utilities for +your application. + +Due to the recent increased prevalence of powerful hardware, such as modern +GPUs, hashes have become increasingly easy to crack. A proactive solution to +this is to use a hash that was designed to be "de-optimized". Bcrypt is such +a hashing facility; unlike hashing algorithms such as MD5 and SHA1, which are +optimized for speed, bcrypt is intentionally structured to be slow. + +For sensitive data that must be protected, such as passwords, bcrypt is an +advisable choice. + +## Installation + +Install the extension with one of the following commands: + + $ easy_install flask-bcrypt + +or alternatively if you have pip installed: + + $ pip install flask-bcrypt + +## Usage + +To use the extension simply import the class wrapper and pass the Flask app +object back to here. Do so like this: + + from flask import Flask + from flask_bcrypt import Bcrypt + + app = Flask(__name__) + bcrypt = Bcrypt(app) + +Two primary hashing methods are now exposed by way of the bcrypt object. 
Use +them like so: + + pw_hash = bcrypt.generate_password_hash('hunter2') + bcrypt.check_password_hash(pw_hash, 'hunter2') # returns True + +## Documentation + +The Sphinx-compiled documentation is available here: https://flask-bcrypt.readthedocs.io/ + + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/RECORD new file mode 100644 index 000000000..cbf95093b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/RECORD @@ -0,0 +1,9 @@ +Flask_Bcrypt-1.0.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +Flask_Bcrypt-1.0.1.dist-info/LICENSE,sha256=RFRom0T_iGtIZZvvW5_AD14IAYQvR45h2hYe0Xp0Jak,1456 +Flask_Bcrypt-1.0.1.dist-info/METADATA,sha256=wO9naenfHK7Lgz41drDpI6BlMg_CpzjwGWsAiko2wsk,2615 +Flask_Bcrypt-1.0.1.dist-info/RECORD,, +Flask_Bcrypt-1.0.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +Flask_Bcrypt-1.0.1.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +Flask_Bcrypt-1.0.1.dist-info/top_level.txt,sha256=HUgQw7e42Bb9jcMgo5popKpplnimwUQw3cyTi0K1N7o,13 +__pycache__/flask_bcrypt.cpython-310.pyc,, +flask_bcrypt.py,sha256=thnztmOYR9M6afp-bTRdSKsPef6R2cOFo4j_DD88-Ls,8856 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/REQUESTED b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/REQUESTED new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/WHEEL new file mode 100644 index 000000000..becc9a66e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/top_level.txt b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/top_level.txt new file mode 100644 index 000000000..f7f0e8875 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Bcrypt-1.0.1.dist-info/top_level.txt @@ -0,0 +1 @@ +flask_bcrypt diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/LICENSE b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/LICENSE new file mode 100644 index 000000000..53090c15e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2016 Lily Acadia Gilbert + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, 
and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/METADATA new file mode 100644 index 000000000..2036b282e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/METADATA @@ -0,0 +1,113 @@ +Metadata-Version: 2.1 +Name: Flask-JWT-Extended +Version: 4.7.1 +Summary: Extended JWT integration with Flask +Home-page: https://github.com/vimalloc/flask-jwt-extended +Author: Lily Acadia Gilbert +Author-email: lily.gilbert@hey.com +License: MIT +Keywords: flask,jwt,json web token +Platform: any +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Web Environment +Classifier: Framework :: Flask +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Requires-Python: >=3.9,<4 +Description-Content-Type: text/markdown +License-File: LICENSE +Requires-Dist: Werkzeug>=0.14 +Requires-Dist: Flask<4.0,>=2.0 +Requires-Dist: PyJWT<3.0,>=2.0 +Provides-Extra: asymmetric-crypto +Requires-Dist: cryptography>=3.3.1; extra == "asymmetric-crypto" + +# Flask-JWT-Extended + +### Features + +Flask-JWT-Extended not only adds support for using JSON Web Tokens (JWT) to Flask for protecting routes, +but also many helpful (and **optional**) features built in to make working with JSON Web Tokens +easier. These include: + +- Adding custom claims to JSON Web Tokens +- Automatic user loading (`current_user`). +- Custom claims validation on received tokens +- [Refresh tokens](https://auth0.com/blog/refresh-tokens-what-are-they-and-when-to-use-them/) +- First class support for fresh tokens for making sensitive changes. 
+- Token revoking/blocklisting +- Storing tokens in cookies and CSRF protection + +### Usage + +[View the documentation online](https://flask-jwt-extended.readthedocs.io/en/stable/) + +### Upgrading from 3.x.x to 4.0.0 + +[View the changes](https://flask-jwt-extended.readthedocs.io/en/stable/v4_upgrade_guide/) + +### Changelog + +You can view the changelog [here](https://github.com/vimalloc/flask-jwt-extended/releases). +This project follows [semantic versioning](https://semver.org/). + +### Chatting + +Come chat with the community or ask questions at https://discord.gg/EJBsbFd + +### Contributing + +Before making any changes, make sure to install the development requirements +and setup the git hooks which will automatically lint and format your changes. + +```bash +pip install -r requirements.txt +pre-commit install +``` + +We require 100% code coverage in our unit tests. You can run the tests locally +with `tox` which ensures that all tests pass, tests provide complete code coverage, +documentation builds, and style guide are adhered to + +```bash +tox +``` + +A subset of checks can also be ran by adding an argument to tox. The available +arguments are: + +- py37, py38, py39, py310, py311, py312, pypy3 + - Run unit tests on the given python version +- mypy + - Run mypy type checking +- coverage + - Run a code coverage check +- docs + - Ensure documentation builds and there are no broken links +- style + - Ensure style guide is adhered to + +```bash +tox -e py38 +``` + +We also require features to be well documented. You can generate a local copy +of the documentation by going to the `docs` directory and running: + +```bash +make clean && make html && open _build/html/index.html +``` diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/RECORD new file mode 100644 index 000000000..0b58131dd --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/RECORD @@ -0,0 +1,28 @@ +Flask_JWT_Extended-4.7.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +Flask_JWT_Extended-4.7.1.dist-info/LICENSE,sha256=IZ7JSdUnG3ZAFNkV8wskplpL1tTu-ImxNFJM2Zfci5g,1076 +Flask_JWT_Extended-4.7.1.dist-info/METADATA,sha256=GuhF8kzZOl9nKVkGgeRx5X7KEa6FiNpv9xjRWZgiz9I,3812 +Flask_JWT_Extended-4.7.1.dist-info/RECORD,, +Flask_JWT_Extended-4.7.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +Flask_JWT_Extended-4.7.1.dist-info/WHEEL,sha256=pxeNX5JdtCe58PUSYP9upmc7jdRPgvT0Gm9kb1SHlVw,109 +Flask_JWT_Extended-4.7.1.dist-info/top_level.txt,sha256=ScNvc4cNmrFFSnAe_rENqiMpfhog1M2HkaUaydRBbUU,19 +flask_jwt_extended/__init__.py,sha256=q3b7uG1tp7jXv9L2WokVrh920M7J8mRGRQKaCc2KbNo,1179 +flask_jwt_extended/__pycache__/__init__.cpython-310.pyc,, +flask_jwt_extended/__pycache__/config.cpython-310.pyc,, +flask_jwt_extended/__pycache__/default_callbacks.cpython-310.pyc,, +flask_jwt_extended/__pycache__/exceptions.cpython-310.pyc,, +flask_jwt_extended/__pycache__/internal_utils.cpython-310.pyc,, +flask_jwt_extended/__pycache__/jwt_manager.cpython-310.pyc,, +flask_jwt_extended/__pycache__/tokens.cpython-310.pyc,, +flask_jwt_extended/__pycache__/typing.cpython-310.pyc,, +flask_jwt_extended/__pycache__/utils.cpython-310.pyc,, +flask_jwt_extended/__pycache__/view_decorators.cpython-310.pyc,, +flask_jwt_extended/config.py,sha256=8hvI2WEUPcDRFumnWACjPQSNm5Sn10RfVzagP4Tv46g,10083 
+flask_jwt_extended/default_callbacks.py,sha256=S-zUiOS5nMiC4n3URpUpGf9LfNA5bT2kWHVJdp14Ngs,5273 +flask_jwt_extended/exceptions.py,sha256=kleki6CySt5u-BiAwJ-uskWb8rySnXHLC-w-YvMqyk4,2400 +flask_jwt_extended/internal_utils.py,sha256=XDnS8GLQ2oaZka5FbfiuE9KBgxAmmHssuKR3mLDP1Jk,3350 +flask_jwt_extended/jwt_manager.py,sha256=OjTJhl2xPVq3Z0FH874NNtHR4VeOl2y75dnQLdPL8GM,23989 +flask_jwt_extended/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +flask_jwt_extended/tokens.py,sha256=GOreSqdqx1rAg3nV_DMn2ltqAngLpu2281zx6Y6V2M4,3131 +flask_jwt_extended/typing.py,sha256=l4nGV2_ARq2_BUK6CjnrOp5i43qXJ3yqLPy8YOs1soA,170 +flask_jwt_extended/utils.py,sha256=iA6Fk-S4DDmRruKNCwkB6KK70cuPWKOTVb8vsKUq2P4,15964 +flask_jwt_extended/view_decorators.py,sha256=gtDCZ4-3e6eldJREMukIjD9SzoYa1bScKdInUtPIhIs,13583 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/REQUESTED b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/REQUESTED new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/WHEEL new file mode 100644 index 000000000..104f38746 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: setuptools (75.6.0) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/top_level.txt b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/top_level.txt new file mode 100644 index 000000000..ff6d88413 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_JWT_Extended-4.7.1.dist-info/top_level.txt @@ -0,0 +1 @@ +flask_jwt_extended diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/LICENSE b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/LICENSE new file mode 100644 index 000000000..2448fd266 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/LICENSE @@ -0,0 +1,20 @@ +The MIT License (MIT) + +Copyright (c) 2013 Miguel Grinberg + +Permission is hereby granted, free of charge, to any person obtaining a copy of +this software and associated documentation files (the "Software"), to deal in +the Software without restriction, including without limitation the rights to +use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software is furnished to do so, +subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS +FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR +COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER +IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/METADATA new file mode 100644 index 000000000..e859deffc --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/METADATA @@ -0,0 +1,91 @@ +Metadata-Version: 2.2 +Name: Flask-Migrate +Version: 4.1.0 +Summary: SQLAlchemy database migrations for Flask applications using Alembic. +Author-email: Miguel Grinberg +License: MIT +Project-URL: Homepage, https://github.com/miguelgrinberg/flask-migrate +Project-URL: Bug Tracker, https://github.com/miguelgrinberg/flask-migrate/issues +Classifier: Environment :: Web Environment +Classifier: Intended Audience :: Developers +Classifier: Programming Language :: Python :: 3 +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Requires-Python: >=3.6 +Description-Content-Type: text/markdown +License-File: LICENSE +Requires-Dist: Flask>=0.9 +Requires-Dist: Flask-SQLAlchemy>=1.0 +Requires-Dist: alembic>=1.9.0 +Provides-Extra: dev +Requires-Dist: tox; extra == "dev" +Requires-Dist: flake8; extra == "dev" +Requires-Dist: pytest; extra == "dev" +Provides-Extra: docs +Requires-Dist: sphinx; extra == "docs" + +Flask-Migrate +============= + +[![Build status](https://github.com/miguelgrinberg/flask-migrate/workflows/build/badge.svg)](https://github.com/miguelgrinberg/flask-migrate/actions) + +Flask-Migrate is an extension that handles SQLAlchemy database migrations for Flask applications using Alembic. The database operations are provided as command-line arguments under the `flask db` command. + +Installation +------------ + +Install Flask-Migrate with `pip`: + + pip install Flask-Migrate + +Example +------- + +This is an example application that handles database migrations through Flask-Migrate: + +```python +from flask import Flask +from flask_sqlalchemy import SQLAlchemy +from flask_migrate import Migrate + +app = Flask(__name__) +app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db' + +db = SQLAlchemy(app) +migrate = Migrate(app, db) + +class User(db.Model): + id = db.Column(db.Integer, primary_key=True) + name = db.Column(db.String(128)) +``` + +With the above application you can create the database or enable migrations if the database already exists with the following command: + + $ flask db init + +Note that the `FLASK_APP` environment variable must be set according to the Flask documentation for this command to work. This will add a `migrations` folder to your application. The contents of this folder need to be added to version control along with your other source files. + +You can then generate an initial migration: + + $ flask db migrate + +The migration script needs to be reviewed and edited, as Alembic currently does not detect every change you make to your models. In particular, Alembic is currently unable to detect indexes. Once finalized, the migration script also needs to be added to version control. + +Then you can apply the migration to the database: + + $ flask db upgrade + +Then each time the database models change repeat the `migrate` and `upgrade` commands. 
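+
+As a side note, the `flask db` subcommands are also available as plain Python
+functions importable from `flask_migrate`, which is convenient in a deploy
+script. A minimal sketch, assuming the `app` object from the example above:
+
+```python
+from flask_migrate import upgrade
+
+def deploy():
+    # Equivalent to running `flask db upgrade` from the command line. An
+    # application context is required because Flask-Migrate reads the
+    # migration directory and database URI from the current app.
+    with app.app_context():
+        upgrade()
+```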
+ +To sync the database in another system just refresh the `migrations` folder from source control and run the `upgrade` command. + +To see all the commands that are available run this command: + + $ flask db --help + +Resources +--------- + +- [Documentation](http://flask-migrate.readthedocs.io/en/latest/) +- [pypi](https://pypi.python.org/pypi/Flask-Migrate) +- [Change Log](https://github.com/miguelgrinberg/Flask-Migrate/blob/master/CHANGES.md) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/RECORD new file mode 100644 index 000000000..0a317228e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/RECORD @@ -0,0 +1,31 @@ +Flask_Migrate-4.1.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +Flask_Migrate-4.1.0.dist-info/LICENSE,sha256=kfkXGlJQvKy3Y__6tAJ8ynIp1HQfeROXhL8jZU1d-DI,1082 +Flask_Migrate-4.1.0.dist-info/METADATA,sha256=jifIy8PzfDzjuCEeKLDKRJA8O56KOgLfj3s2lmzZA-8,3289 +Flask_Migrate-4.1.0.dist-info/RECORD,, +Flask_Migrate-4.1.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +Flask_Migrate-4.1.0.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91 +Flask_Migrate-4.1.0.dist-info/top_level.txt,sha256=jLoPgiMG6oR4ugNteXn3IHskVVIyIXVStZOVq-AWLdU,14 +flask_migrate/__init__.py,sha256=JMySGA55Y8Gxy3HviWu7qq5rPUNQBWc2NID2OicpDyw,10082 +flask_migrate/__pycache__/__init__.cpython-310.pyc,, +flask_migrate/__pycache__/cli.cpython-310.pyc,, +flask_migrate/cli.py,sha256=IxrxBSC82S5sPfWac8Qg83_FVsRvqTYtCG7HRyMW8RU,11097 +flask_migrate/templates/aioflask-multidb/README,sha256=Ek4cJqTaxneVjtkue--BXMlfpfp3MmJRjqoZvnSizww,43 +flask_migrate/templates/aioflask-multidb/__pycache__/env.cpython-310.pyc,, +flask_migrate/templates/aioflask-multidb/alembic.ini.mako,sha256=SjYEmJKzz6K8QfuZWtLJAJWcCKOdRbfUhsVlpgv8ock,857 +flask_migrate/templates/aioflask-multidb/env.py,sha256=UcjeqkAbyUjTkuQFmCFPG7QOvqhco8-uGp8QEbto0T8,6573 +flask_migrate/templates/aioflask-multidb/script.py.mako,sha256=198VPxVEN3NZ3vHcRuCxSoI4XnOYirGWt01qkbPKoJw,1246 +flask_migrate/templates/aioflask/README,sha256=KKqWGl4YC2RqdOdq-y6quTDW0b7D_UZNHuM8glM1L-c,44 +flask_migrate/templates/aioflask/__pycache__/env.cpython-310.pyc,, +flask_migrate/templates/aioflask/alembic.ini.mako,sha256=SjYEmJKzz6K8QfuZWtLJAJWcCKOdRbfUhsVlpgv8ock,857 +flask_migrate/templates/aioflask/env.py,sha256=m6ZtBhdpwuq89vVeLTWmNT-1NfJZqarC_hsquCdR9bw,3478 +flask_migrate/templates/aioflask/script.py.mako,sha256=8_xgA-gm_OhehnO7CiIijWgnm00ZlszEHtIHrAYFJl0,494 +flask_migrate/templates/flask-multidb/README,sha256=AfiP5foaV2odZxXxuUuSIS6YhkIpR7CsOo2mpuxwHdc,40 +flask_migrate/templates/flask-multidb/__pycache__/env.cpython-310.pyc,, +flask_migrate/templates/flask-multidb/alembic.ini.mako,sha256=SjYEmJKzz6K8QfuZWtLJAJWcCKOdRbfUhsVlpgv8ock,857 +flask_migrate/templates/flask-multidb/env.py,sha256=F44iqsAxLTVBN_zD8CMUkdE7Aub4niHMmo5wl9mY4Uw,6190 +flask_migrate/templates/flask-multidb/script.py.mako,sha256=198VPxVEN3NZ3vHcRuCxSoI4XnOYirGWt01qkbPKoJw,1246 +flask_migrate/templates/flask/README,sha256=JL0NrjOrscPcKgRmQh1R3hlv1_rohDot0TvpmdM27Jk,41 +flask_migrate/templates/flask/__pycache__/env.cpython-310.pyc,, +flask_migrate/templates/flask/alembic.ini.mako,sha256=SjYEmJKzz6K8QfuZWtLJAJWcCKOdRbfUhsVlpgv8ock,857 +flask_migrate/templates/flask/env.py,sha256=ibK1hsdOsOBzXNU2yQoAIza7f_EFzaVSWwON_NSpNzQ,3344 
+flask_migrate/templates/flask/script.py.mako,sha256=8_xgA-gm_OhehnO7CiIijWgnm00ZlszEHtIHrAYFJl0,494 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/REQUESTED b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/REQUESTED new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/WHEEL new file mode 100644 index 000000000..505164bc0 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: setuptools (75.8.0) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/top_level.txt b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/top_level.txt new file mode 100755 index 000000000..0652762c7 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/Flask_Migrate-4.1.0.dist-info/top_level.txt @@ -0,0 +1 @@ +flask_migrate diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/LICENSE.rst b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/LICENSE.rst new file mode 100644 index 000000000..9d227a0cc --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/LICENSE.rst @@ -0,0 +1,28 @@ +Copyright 2010 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/METADATA
new file mode 100644
index 000000000..dfe37d52d
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/METADATA
@@ -0,0 +1,93 @@
+Metadata-Version: 2.1
+Name: MarkupSafe
+Version: 2.1.5
+Summary: Safely add untrusted strings to HTML/XML markup.
+Home-page: https://palletsprojects.com/p/markupsafe/
+Maintainer: Pallets
+Maintainer-email: contact@palletsprojects.com
+License: BSD-3-Clause
+Project-URL: Donate, https://palletsprojects.com/donate
+Project-URL: Documentation, https://markupsafe.palletsprojects.com/
+Project-URL: Changes, https://markupsafe.palletsprojects.com/changes/
+Project-URL: Source Code, https://github.com/pallets/markupsafe/
+Project-URL: Issue Tracker, https://github.com/pallets/markupsafe/issues/
+Project-URL: Chat, https://discord.gg/pallets
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Environment :: Web Environment
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content
+Classifier: Topic :: Text Processing :: Markup :: HTML
+Requires-Python: >=3.7
+Description-Content-Type: text/x-rst
+License-File: LICENSE.rst
+
+MarkupSafe
+==========
+
+MarkupSafe implements a text object that escapes characters so it is
+safe to use in HTML and XML. Characters that have special meanings are
+replaced so that they display as the actual characters. This mitigates
+injection attacks, meaning untrusted user input can safely be displayed
+on a page.
+
+
+Installing
+----------
+
+Install and update using `pip`_:
+
+.. code-block:: text
+
+    pip install -U MarkupSafe
+
+.. _pip: https://pip.pypa.io/en/stable/getting-started/
+
+
+Examples
+--------
+
+.. code-block:: pycon
+
+    >>> from markupsafe import Markup, escape
+
+    >>> # escape replaces special characters and wraps in Markup
+    >>> escape("<script>alert(document.cookie);</script>")
+    Markup('&lt;script&gt;alert(document.cookie);&lt;/script&gt;')
+
+    >>> # wrap in Markup to mark text "safe" and prevent escaping
+    >>> Markup("<strong>Hello</strong>")
+    Markup('<strong>Hello</strong>')
+
+    >>> escape(Markup("<strong>Hello</strong>"))
+    Markup('<strong>Hello</strong>')
+
+    >>> # Markup is a str subclass
+    >>> # methods and operators escape their arguments
+    >>> template = Markup("Hello <em>{name}</em>")
+    >>> template.format(name='"World"')
+    Markup('Hello <em>&#34;World&#34;</em>')
+
+
+Donate
+------
+
+The Pallets organization develops and supports MarkupSafe and other
+popular packages. In order to grow the community of contributors and
+users, and allow the maintainers to devote more time to the projects,
+`please donate today`_.
+
+..
_please donate today: https://palletsprojects.com/donate + + +Links +----- + +- Documentation: https://markupsafe.palletsprojects.com/ +- Changes: https://markupsafe.palletsprojects.com/changes/ +- PyPI Releases: https://pypi.org/project/MarkupSafe/ +- Source Code: https://github.com/pallets/markupsafe/ +- Issue Tracker: https://github.com/pallets/markupsafe/issues/ +- Chat: https://discord.gg/pallets diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/RECORD new file mode 100644 index 000000000..461cf1a9e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/RECORD @@ -0,0 +1,15 @@ +MarkupSafe-2.1.5.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +MarkupSafe-2.1.5.dist-info/LICENSE.rst,sha256=SJqOEQhQntmKN7uYPhHg9-HTHwvY-Zp5yESOf_N9B-o,1475 +MarkupSafe-2.1.5.dist-info/METADATA,sha256=2dRDPam6OZLfpX0wg1JN5P3u9arqACxVSfdGmsJU7o8,3003 +MarkupSafe-2.1.5.dist-info/RECORD,, +MarkupSafe-2.1.5.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +MarkupSafe-2.1.5.dist-info/WHEEL,sha256=1FEjxEYgybphwh9S0FO9IcZ0B-NIeM2ko8OzhFZeOeQ,152 +MarkupSafe-2.1.5.dist-info/top_level.txt,sha256=qy0Plje5IJuvsCBjejJyhDCjEAdcDLK_2agVcex8Z6U,11 +markupsafe/__init__.py,sha256=r7VOTjUq7EMQ4v3p4R1LoVOGJg6ysfYRncLr34laRBs,10958 +markupsafe/__pycache__/__init__.cpython-310.pyc,, +markupsafe/__pycache__/_native.cpython-310.pyc,, +markupsafe/_native.py,sha256=GR86Qvo_GcgKmKreA1WmYN9ud17OFwkww8E-fiW-57s,1713 +markupsafe/_speedups.c,sha256=X2XvQVtIdcK4Usz70BvkzoOfjTCmQlDkkjYSn-swE0g,7083 +markupsafe/_speedups.cpython-310-x86_64-linux-gnu.so,sha256=kPt-fhZ_RG7PUbDvwmyC26ZvRJ9DvUlF3hszBIB6_xs,44240 +markupsafe/_speedups.pyi,sha256=vfMCsOgbAXRNLUXkyuyonG8uEWKYU4PDqNuMaDELAYw,229 +markupsafe/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/REQUESTED b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/REQUESTED new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/WHEEL new file mode 100644 index 000000000..1d8125133 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.42.0) +Root-Is-Purelib: false +Tag: cp310-cp310-manylinux_2_17_x86_64 +Tag: cp310-cp310-manylinux2014_x86_64 + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/top_level.txt b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/top_level.txt new file mode 100644 index 000000000..75bf72925 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/MarkupSafe-2.1.5.dist-info/top_level.txt @@ -0,0 +1 @@ +markupsafe diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/AUTHORS.rst b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/AUTHORS.rst new file mode 100644 index 000000000..88e2b6ad7 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/AUTHORS.rst @@ -0,0 +1,7 @@ +Authors +======= + +``pyjwt`` is currently written and maintained by `Jose Padilla `_. 
+Originally written and maintained by `Jeff Lindsay `_. + +A full list of contributors can be found on GitHub’s `overview `_. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/LICENSE b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/LICENSE new file mode 100644 index 000000000..fd0ecbc88 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2015-2022 José Padilla + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/METADATA new file mode 100644 index 000000000..f31b700e6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/METADATA @@ -0,0 +1,106 @@ +Metadata-Version: 2.1 +Name: PyJWT +Version: 2.10.1 +Summary: JSON Web Token implementation in Python +Author-email: Jose Padilla +License: MIT +Project-URL: Homepage, https://github.com/jpadilla/pyjwt +Keywords: json,jwt,security,signing,token,web +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Natural Language :: English +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Classifier: Programming Language :: Python :: 3.13 +Classifier: Topic :: Utilities +Requires-Python: >=3.9 +Description-Content-Type: text/x-rst +License-File: LICENSE +License-File: AUTHORS.rst +Provides-Extra: crypto +Requires-Dist: cryptography>=3.4.0; extra == "crypto" +Provides-Extra: dev +Requires-Dist: coverage[toml]==5.0.4; extra == "dev" +Requires-Dist: cryptography>=3.4.0; extra == "dev" +Requires-Dist: pre-commit; extra == "dev" +Requires-Dist: pytest<7.0.0,>=6.0.0; extra == "dev" +Requires-Dist: sphinx; extra == "dev" +Requires-Dist: sphinx-rtd-theme; extra == "dev" +Requires-Dist: zope.interface; extra == "dev" +Provides-Extra: docs +Requires-Dist: sphinx; extra == "docs" +Requires-Dist: sphinx-rtd-theme; extra == "docs" +Requires-Dist: zope.interface; extra == "docs" +Provides-Extra: tests +Requires-Dist: coverage[toml]==5.0.4; extra == "tests" +Requires-Dist: pytest<7.0.0,>=6.0.0; extra == "tests" + +PyJWT +===== + +.. image:: https://github.com/jpadilla/pyjwt/workflows/CI/badge.svg + :target: https://github.com/jpadilla/pyjwt/actions?query=workflow%3ACI + +.. image:: https://img.shields.io/pypi/v/pyjwt.svg + :target: https://pypi.python.org/pypi/pyjwt + +.. image:: https://codecov.io/gh/jpadilla/pyjwt/branch/master/graph/badge.svg + :target: https://codecov.io/gh/jpadilla/pyjwt + +.. image:: https://readthedocs.org/projects/pyjwt/badge/?version=stable + :target: https://pyjwt.readthedocs.io/en/stable/ + +A Python implementation of `RFC 7519 `_. Original implementation was written by `@progrium `_. + +Sponsor +------- + +.. |auth0-logo| image:: https://github.com/user-attachments/assets/ee98379e-ee76-4bcb-943a-e25c4ea6d174 + :width: 160px + ++--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| |auth0-logo| | If you want to quickly add secure token-based authentication to Python projects, feel free to check Auth0's Python SDK and free plan at `auth0.com/signup `_. 
|
++--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+
+Installing
+----------
+
+Install with **pip**:
+
+.. code-block:: console
+
+    $ pip install PyJWT
+
+
+Usage
+-----
+
+.. code-block:: pycon
+
+    >>> import jwt
+    >>> encoded = jwt.encode({"some": "payload"}, "secret", algorithm="HS256")
+    >>> print(encoded)
+    eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb21lIjoicGF5bG9hZCJ9.4twFt5NiznN84AWoo1d7KO1T_yoc0Z6XOpOVswacPZg
+    >>> jwt.decode(encoded, "secret", algorithms=["HS256"])
+    {'some': 'payload'}
+
+Documentation
+-------------
+
+View the full docs online at https://pyjwt.readthedocs.io/en/stable/
+
+
+Tests
+-----
+
+You can run tests from the project root after cloning with:
+
+.. code-block:: console
+
+    $ tox
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/RECORD
new file mode 100644
index 000000000..bcc25fd4c
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/RECORD
@@ -0,0 +1,32 @@
+PyJWT-2.10.1.dist-info/AUTHORS.rst,sha256=klzkNGECnu2_VY7At89_xLBF3vUSDruXk3xwgUBxzwc,322
+PyJWT-2.10.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+PyJWT-2.10.1.dist-info/LICENSE,sha256=eXp6ICMdTEM-nxkR2xcx0GtYKLmPSZgZoDT3wPVvXOU,1085
+PyJWT-2.10.1.dist-info/METADATA,sha256=EkewF6D6KU8SGaaQzVYfxUUU1P_gs_dp1pYTkoYvAx8,3990
+PyJWT-2.10.1.dist-info/RECORD,,
+PyJWT-2.10.1.dist-info/WHEEL,sha256=PZUExdf71Ui_so67QXpySuHtCi3-J3wvF4ORK6k_S8U,91
+PyJWT-2.10.1.dist-info/top_level.txt,sha256=RP5DHNyJbMq2ka0FmfTgoSaQzh7e3r5XuCWCO8a00k8,4
+jwt/__init__.py,sha256=VB2vFKuboTjcDGeZ8r-UqK_dz3NsQSQEqySSICby8Xg,1711
+jwt/__pycache__/__init__.cpython-310.pyc,,
+jwt/__pycache__/algorithms.cpython-310.pyc,,
+jwt/__pycache__/api_jwk.cpython-310.pyc,,
+jwt/__pycache__/api_jws.cpython-310.pyc,,
+jwt/__pycache__/api_jwt.cpython-310.pyc,,
+jwt/__pycache__/exceptions.cpython-310.pyc,,
+jwt/__pycache__/help.cpython-310.pyc,,
+jwt/__pycache__/jwk_set_cache.cpython-310.pyc,,
+jwt/__pycache__/jwks_client.cpython-310.pyc,,
+jwt/__pycache__/types.cpython-310.pyc,,
+jwt/__pycache__/utils.cpython-310.pyc,,
+jwt/__pycache__/warnings.cpython-310.pyc,,
+jwt/algorithms.py,sha256=cKr-XEioe0mBtqJMCaHEswqVOA1Z8Purt5Sb3Bi-5BE,30409
+jwt/api_jwk.py,sha256=6F1r7rmm8V5qEnBKA_xMjS9R7VoANe1_BL1oD2FrAjE,4451
+jwt/api_jws.py,sha256=aM8vzqQf6mRrAw7bRy-Moj_pjWsKSVQyYK896AfMjJU,11762
+jwt/api_jwt.py,sha256=OGT4hok1l5A6FH_KdcrU5g6u6EQ8B7em0r9kGM9SYgA,14512
+jwt/exceptions.py,sha256=bUIOJ-v9tjopTLS-FYOTc3kFx5WP5IZt7ksN_HE1G9Q,1211
+jwt/help.py,sha256=vFdNzjQoAch04XCMYpCkyB2blaqHAGAqQrtf9nSPkdk,1808
+jwt/jwk_set_cache.py,sha256=hBKmN-giU7-G37L_XKgc_OZu2ah4wdbj1ZNG_GkoSE8,959
+jwt/jwks_client.py,sha256=p9b-IbQqo2tEge9Zit3oSPBFNePqwho96VLbnUrHUWs,4259
+jwt/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+jwt/types.py,sha256=VnhGv_VFu5a7_mrPoSCB7HaNLrJdhM8Sq1sSfEg0gLU,99
+jwt/utils.py,sha256=hxOjvDBheBYhz-RIPiEz7Q88dSUSTMzEdKE_Ww2VdJw,3640
+jwt/warnings.py,sha256=50XWOnyNsIaqzUJTk6XHNiIDykiL763GYA92MjTKmok,59
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/WHEEL
new file mode 100644
index 000000000..ae527e7d6
---
/dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: setuptools (75.6.0) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/top_level.txt b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/top_level.txt new file mode 100644 index 000000000..27ccc9bc3 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/PyJWT-2.10.1.dist-info/top_level.txt @@ -0,0 +1 @@ +jwt diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/_distutils_hack/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/_distutils_hack/__init__.py new file mode 100644 index 000000000..f987a5367 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/_distutils_hack/__init__.py @@ -0,0 +1,222 @@ +# don't import any costly modules +import sys +import os + + +is_pypy = '__pypy__' in sys.builtin_module_names + + +def warn_distutils_present(): + if 'distutils' not in sys.modules: + return + if is_pypy and sys.version_info < (3, 7): + # PyPy for 3.6 unconditionally imports distutils, so bypass the warning + # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250 + return + import warnings + + warnings.warn( + "Distutils was imported before Setuptools, but importing Setuptools " + "also replaces the `distutils` module in `sys.modules`. This may lead " + "to undesirable behaviors or errors. To avoid these issues, avoid " + "using distutils directly, ensure that setuptools is installed in the " + "traditional way (e.g. not an editable install), and/or make sure " + "that setuptools is always imported before distutils." + ) + + +def clear_distutils(): + if 'distutils' not in sys.modules: + return + import warnings + + warnings.warn("Setuptools is replacing distutils.") + mods = [ + name + for name in sys.modules + if name == "distutils" or name.startswith("distutils.") + ] + for name in mods: + del sys.modules[name] + + +def enabled(): + """ + Allow selection of distutils by environment variable. + """ + which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'local') + return which == 'local' + + +def ensure_local_distutils(): + import importlib + + clear_distutils() + + # With the DistutilsMetaFinder in place, + # perform an import to cause distutils to be + # loaded from setuptools._distutils. Ref #2906. + with shim(): + importlib.import_module('distutils') + + # check that submodules load as expected + core = importlib.import_module('distutils.core') + assert '_distutils' in core.__file__, core.__file__ + assert 'setuptools._distutils.log' not in sys.modules + + +def do_override(): + """ + Ensure that the local copy of distutils is preferred over stdlib. + + See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401 + for more motivation. + """ + if enabled(): + warn_distutils_present() + ensure_local_distutils() + + +class _TrivialRe: + def __init__(self, *patterns): + self._patterns = patterns + + def match(self, string): + return all(pat in string for pat in self._patterns) + + +class DistutilsMetaFinder: + def find_spec(self, fullname, path, target=None): + # optimization: only consider top level modules and those + # found in the CPython test suite. 
+ if path is not None and not fullname.startswith('test.'): + return + + method_name = 'spec_for_{fullname}'.format(**locals()) + method = getattr(self, method_name, lambda: None) + return method() + + def spec_for_distutils(self): + if self.is_cpython(): + return + + import importlib + import importlib.abc + import importlib.util + + try: + mod = importlib.import_module('setuptools._distutils') + except Exception: + # There are a couple of cases where setuptools._distutils + # may not be present: + # - An older Setuptools without a local distutils is + # taking precedence. Ref #2957. + # - Path manipulation during sitecustomize removes + # setuptools from the path but only after the hook + # has been loaded. Ref #2980. + # In either case, fall back to stdlib behavior. + return + + class DistutilsLoader(importlib.abc.Loader): + def create_module(self, spec): + mod.__name__ = 'distutils' + return mod + + def exec_module(self, module): + pass + + return importlib.util.spec_from_loader( + 'distutils', DistutilsLoader(), origin=mod.__file__ + ) + + @staticmethod + def is_cpython(): + """ + Suppress supplying distutils for CPython (build and tests). + Ref #2965 and #3007. + """ + return os.path.isfile('pybuilddir.txt') + + def spec_for_pip(self): + """ + Ensure stdlib distutils when running under pip. + See pypa/pip#8761 for rationale. + """ + if self.pip_imported_during_build(): + return + clear_distutils() + self.spec_for_distutils = lambda: None + + @classmethod + def pip_imported_during_build(cls): + """ + Detect if pip is being imported in a build script. Ref #2355. + """ + import traceback + + return any( + cls.frame_file_is_setup(frame) for frame, line in traceback.walk_stack(None) + ) + + @staticmethod + def frame_file_is_setup(frame): + """ + Return True if the indicated frame suggests a setup.py file. + """ + # some frames may not have __file__ (#2940) + return frame.f_globals.get('__file__', '').endswith('setup.py') + + def spec_for_sensitive_tests(self): + """ + Ensure stdlib distutils when running select tests under CPython. 
+ + python/cpython#91169 + """ + clear_distutils() + self.spec_for_distutils = lambda: None + + sensitive_tests = ( + [ + 'test.test_distutils', + 'test.test_peg_generator', + 'test.test_importlib', + ] + if sys.version_info < (3, 10) + else [ + 'test.test_distutils', + ] + ) + + +for name in DistutilsMetaFinder.sensitive_tests: + setattr( + DistutilsMetaFinder, + f'spec_for_{name}', + DistutilsMetaFinder.spec_for_sensitive_tests, + ) + + +DISTUTILS_FINDER = DistutilsMetaFinder() + + +def add_shim(): + DISTUTILS_FINDER in sys.meta_path or insert_shim() + + +class shim: + def __enter__(self): + insert_shim() + + def __exit__(self, exc, value, tb): + remove_shim() + + +def insert_shim(): + sys.meta_path.insert(0, DISTUTILS_FINDER) + + +def remove_shim(): + try: + sys.meta_path.remove(DISTUTILS_FINDER) + except ValueError: + pass diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/_distutils_hack/override.py b/staybuddy/server/venv/lib/python3.10/site-packages/_distutils_hack/override.py new file mode 100644 index 000000000..2cc433a4a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/_distutils_hack/override.py @@ -0,0 +1 @@ +__import__('_distutils_hack').do_override() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/METADATA new file mode 100644 index 000000000..75ab30e69 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/METADATA @@ -0,0 +1,140 @@ +Metadata-Version: 2.4 +Name: alembic +Version: 1.16.2 +Summary: A database migration tool for SQLAlchemy. +Author-email: Mike Bayer +License-Expression: MIT +Project-URL: Homepage, https://alembic.sqlalchemy.org +Project-URL: Documentation, https://alembic.sqlalchemy.org/en/latest/ +Project-URL: Changelog, https://alembic.sqlalchemy.org/en/latest/changelog.html +Project-URL: Source, https://github.com/sqlalchemy/alembic/ +Project-URL: Issue Tracker, https://github.com/sqlalchemy/alembic/issues/ +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: Environment :: Console +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Classifier: Programming Language :: Python :: 3.13 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Database :: Front-Ends +Requires-Python: >=3.9 +Description-Content-Type: text/x-rst +License-File: LICENSE +Requires-Dist: SQLAlchemy>=1.4.0 +Requires-Dist: Mako +Requires-Dist: typing-extensions>=4.12 +Requires-Dist: tomli; python_version < "3.11" +Provides-Extra: tz +Requires-Dist: tzdata; extra == "tz" +Dynamic: license-file + +Alembic is a database migrations tool written by the author +of `SQLAlchemy `_. 
A migrations tool +offers the following functionality: + +* Can emit ALTER statements to a database in order to change + the structure of tables and other constructs +* Provides a system whereby "migration scripts" may be constructed; + each script indicates a particular series of steps that can "upgrade" a + target database to a new version, and optionally a series of steps that can + "downgrade" similarly, doing the same steps in reverse. +* Allows the scripts to execute in some sequential manner. + +The goals of Alembic are: + +* Very open ended and transparent configuration and operation. A new + Alembic environment is generated from a set of templates which is selected + among a set of options when setup first occurs. The templates then deposit a + series of scripts that define fully how database connectivity is established + and how migration scripts are invoked; the migration scripts themselves are + generated from a template within that series of scripts. The scripts can + then be further customized to define exactly how databases will be + interacted with and what structure new migration files should take. +* Full support for transactional DDL. The default scripts ensure that all + migrations occur within a transaction - for those databases which support + this (Postgresql, Microsoft SQL Server), migrations can be tested with no + need to manually undo changes upon failure. +* Minimalist script construction. Basic operations like renaming + tables/columns, adding/removing columns, changing column attributes can be + performed through one line commands like alter_column(), rename_table(), + add_constraint(). There is no need to recreate full SQLAlchemy Table + structures for simple operations like these - the functions themselves + generate minimalist schema structures behind the scenes to achieve the given + DDL sequence. +* "auto generation" of migrations. While real world migrations are far more + complex than what can be automatically determined, Alembic can still + eliminate the initial grunt work in generating new migration directives + from an altered schema. The ``--autogenerate`` feature will inspect the + current status of a database using SQLAlchemy's schema inspection + capabilities, compare it to the current state of the database model as + specified in Python, and generate a series of "candidate" migrations, + rendering them into a new migration script as Python directives. The + developer then edits the new file, adding additional directives and data + migrations as needed, to produce a finished migration. Table and column + level changes can be detected, with constraints and indexes to follow as + well. +* Full support for migrations generated as SQL scripts. Those of us who + work in corporate environments know that direct access to DDL commands on a + production database is a rare privilege, and DBAs want textual SQL scripts. + Alembic's usage model and commands are oriented towards being able to run a + series of migrations into a textual output file as easily as it runs them + directly to a database. Care must be taken in this mode to not invoke other + operations that rely upon in-memory SELECTs of rows - Alembic tries to + provide helper constructs like bulk_insert() to help with data-oriented + operations that are compatible with script-based DDL. +* Non-linear, dependency-graph versioning. Scripts are given UUID + identifiers similarly to a DVCS, and the linkage of one script to the next + is achieved via human-editable markers within the scripts themselves. 
+ The structure of a set of migration files is considered as a + directed-acyclic graph, meaning any migration file can be dependent + on any other arbitrary set of migration files, or none at + all. Through this open-ended system, migration files can be organized + into branches, multiple roots, and mergepoints, without restriction. + Commands are provided to produce new branches, roots, and merges of + branches automatically. +* Provide a library of ALTER constructs that can be used by any SQLAlchemy + application. The DDL constructs build upon SQLAlchemy's own DDLElement base + and can be used standalone by any application or script. +* At long last, bring SQLite and its inability to ALTER things into the fold, + but in such a way that SQLite's very special workflow needs are accommodated + in an explicit way that makes the most of a bad situation, through the + concept of a "batch" migration, where multiple changes to a table can + be batched together to form a series of instructions for a single, subsequent + "move-and-copy" workflow. You can even use "move-and-copy" workflow for + other databases, if you want to recreate a table in the background + on a busy system. + +Documentation and status of Alembic is at https://alembic.sqlalchemy.org/ + +The SQLAlchemy Project +====================== + +Alembic is part of the `SQLAlchemy Project `_ and +adheres to the same standards and conventions as the core project. + +Development / Bug reporting / Pull requests +___________________________________________ + +Please refer to the +`SQLAlchemy Community Guide `_ for +guidelines on coding and participating in this project. + +Code of Conduct +_______________ + +Above all, SQLAlchemy places great emphasis on polite, thoughtful, and +constructive communication between users and developers. +Please see our current Code of Conduct at +`Code of Conduct `_. + +License +======= + +Alembic is distributed under the `MIT license +`_. 
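+
+As a sketch (not from the Alembic docs themselves): the "minimalist script
+construction" and human-editable linkage markers described above come together
+in a migration script shaped roughly like the following, where the revision
+identifiers are hypothetical placeholders for values Alembic generates:
+
+.. code-block:: python
+
+    """add nickname column to account"""
+
+    from alembic import op
+    import sqlalchemy as sa
+
+    # linkage markers: this script upgrades revision "def456" to "abc123"
+    revision = "abc123"
+    down_revision = "def456"
+
+
+    def upgrade():
+        # a one-line directive; no full Table structure is required
+        op.add_column("account", sa.Column("nickname", sa.String(50)))
+
+
+    def downgrade():
+        # the same step, reversed
+        op.drop_column("account", "nickname")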
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/RECORD new file mode 100644 index 000000000..a4fd70cc7 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/RECORD @@ -0,0 +1,156 @@ +../../../bin/alembic,sha256=BOIIQdoHMxJ9G3-NWiWEeuWTLsNsASiMq142AF6_nC4,293 +alembic-1.16.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +alembic-1.16.2.dist-info/METADATA,sha256=YXkCdjlR2LNMd-B_P4WlNhmvXqgL51wKDdRIDuu5DrA,7265 +alembic-1.16.2.dist-info/RECORD,, +alembic-1.16.2.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91 +alembic-1.16.2.dist-info/entry_points.txt,sha256=aykM30soxwGN0pB7etLc1q0cHJbL9dy46RnK9VX4LLw,48 +alembic-1.16.2.dist-info/licenses/LICENSE,sha256=NeqcNBmyYfrxvkSMT0fZJVKBv2s2tf_qVQUiJ9S6VN4,1059 +alembic-1.16.2.dist-info/top_level.txt,sha256=FwKWd5VsPFC8iQjpu1u9Cn-JnK3-V1RhUCmWqz1cl-s,8 +alembic/__init__.py,sha256=AL1XKsGOkbogUDQ2LozlFjw5wZVjWyEkQq2K8JOzeJI,63 +alembic/__main__.py,sha256=373m7-TBh72JqrSMYviGrxCHZo-cnweM8AGF8A22PmY,78 +alembic/__pycache__/__init__.cpython-310.pyc,, +alembic/__pycache__/__main__.cpython-310.pyc,, +alembic/__pycache__/command.cpython-310.pyc,, +alembic/__pycache__/config.cpython-310.pyc,, +alembic/__pycache__/context.cpython-310.pyc,, +alembic/__pycache__/environment.cpython-310.pyc,, +alembic/__pycache__/migration.cpython-310.pyc,, +alembic/__pycache__/op.cpython-310.pyc,, +alembic/autogenerate/__init__.py,sha256=ntmUTXhjLm4_zmqIwyVaECdpPDn6_u1yM9vYk6-553E,543 +alembic/autogenerate/__pycache__/__init__.cpython-310.pyc,, +alembic/autogenerate/__pycache__/api.cpython-310.pyc,, +alembic/autogenerate/__pycache__/compare.cpython-310.pyc,, +alembic/autogenerate/__pycache__/render.cpython-310.pyc,, +alembic/autogenerate/__pycache__/rewriter.cpython-310.pyc,, +alembic/autogenerate/api.py,sha256=L4qkapSJO1Ypymx8HsjLl0vFFt202agwMYsQbIe6ZtI,22219 +alembic/autogenerate/compare.py,sha256=LRTxNijEBvcTauuUXuJjC6Sg_gUn33FCYBTF0neZFwE,45979 +alembic/autogenerate/render.py,sha256=_VeY05jcszpr7SW1YZxNbnVk40DZD_pIEJkbEj69K_M,36826 +alembic/autogenerate/rewriter.py,sha256=NIASSS-KaNKPmbm1k4pE45aawwjSh1Acf6eZrOwnUGM,7814 +alembic/command.py,sha256=pZPQUGSxCjFu7qy0HMe02HJmByM0LOqoiK2AXKfRO3A,24855 +alembic/config.py,sha256=jcWyXXCM-Uh6uOGmjgnBLhDSD0OkG5P0ZUWYCf5_ek8,33543 +alembic/context.py,sha256=hK1AJOQXJ29Bhn276GYcosxeG7pC5aZRT5E8c4bMJ4Q,195 +alembic/context.pyi,sha256=fdeFNTRc0bUgi7n2eZWVFh6NG-TzIv_0gAcapbfHnKY,31773 +alembic/ddl/__init__.py,sha256=Df8fy4Vn_abP8B7q3x8gyFwEwnLw6hs2Ljt_bV3EZWE,152 +alembic/ddl/__pycache__/__init__.cpython-310.pyc,, +alembic/ddl/__pycache__/_autogen.cpython-310.pyc,, +alembic/ddl/__pycache__/base.cpython-310.pyc,, +alembic/ddl/__pycache__/impl.cpython-310.pyc,, +alembic/ddl/__pycache__/mssql.cpython-310.pyc,, +alembic/ddl/__pycache__/mysql.cpython-310.pyc,, +alembic/ddl/__pycache__/oracle.cpython-310.pyc,, +alembic/ddl/__pycache__/postgresql.cpython-310.pyc,, +alembic/ddl/__pycache__/sqlite.cpython-310.pyc,, +alembic/ddl/_autogen.py,sha256=Blv2RrHNyF4cE6znCQXNXG5T9aO-YmiwD4Fz-qfoaWA,9275 +alembic/ddl/base.py,sha256=A1f89-rCZvqw-hgWmBbIszRqx94lL6gKLFXE9kHettA,10478 +alembic/ddl/impl.py,sha256=UL8-iza7CJk_T73lr5fjDLdhxEL56uD-AEjtmESAbLk,30439 +alembic/ddl/mssql.py,sha256=NzORSIDHUll_g6iH4IyMTXZU1qjKzXrpespKrjWnfLY,14216 +alembic/ddl/mysql.py,sha256=ujY4xDh13KgiFNRe3vUcLquie0ih80MYBUcogCBPdSc,17358 
+alembic/ddl/oracle.py,sha256=669YlkcZihlXFbnXhH2krdrvDry8q5pcUGfoqkg_R6Y,6243 +alembic/ddl/postgresql.py,sha256=S7uye2NDSHLwV3w8SJ2Q9DLbcvQIxQfJ3EEK6JqyNag,29950 +alembic/ddl/sqlite.py,sha256=u5tJgRUiY6bzVltl_NWlI6cy23v8XNagk_9gPI6Lnns,8006 +alembic/environment.py,sha256=MM5lPayGT04H3aeng1H7GQ8HEAs3VGX5yy6mDLCPLT4,43 +alembic/migration.py,sha256=MV6Fju6rZtn2fTREKzXrCZM6aIBGII4OMZFix0X-GLs,41 +alembic/op.py,sha256=flHtcsVqOD-ZgZKK2pv-CJ5Cwh-KJ7puMUNXzishxLw,167 +alembic/op.pyi,sha256=PQ4mKNp7EXrjVdIWQRoGiBSVke4PPxTc9I6qF8ZGGZE,50711 +alembic/operations/__init__.py,sha256=e0KQSZAgLpTWvyvreB7DWg7RJV_MWSOPVDgCqsd2FzY,318 +alembic/operations/__pycache__/__init__.cpython-310.pyc,, +alembic/operations/__pycache__/base.cpython-310.pyc,, +alembic/operations/__pycache__/batch.cpython-310.pyc,, +alembic/operations/__pycache__/ops.cpython-310.pyc,, +alembic/operations/__pycache__/schemaobj.cpython-310.pyc,, +alembic/operations/__pycache__/toimpl.cpython-310.pyc,, +alembic/operations/base.py,sha256=npw1iFboTlEsaQS0b7mb2SEHsRDV4GLQqnjhcfma6Nk,75157 +alembic/operations/batch.py,sha256=1UmCFcsFWObinQWFRWoGZkjynl54HKpldbPs67aR4wg,26923 +alembic/operations/ops.py,sha256=ftsFgcZIctxRDiuGgkQsaFHsMlRP7cLq7Dj_seKVBnQ,96276 +alembic/operations/schemaobj.py,sha256=Wp-bBe4a8lXPTvIHJttBY0ejtpVR5Jvtb2kI-U2PztQ,9468 +alembic/operations/toimpl.py,sha256=rgufuSUNwpgrOYzzY3Q3ELW1rQv2fQbQVokXgnIYIrs,7503 +alembic/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +alembic/runtime/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +alembic/runtime/__pycache__/__init__.cpython-310.pyc,, +alembic/runtime/__pycache__/environment.cpython-310.pyc,, +alembic/runtime/__pycache__/migration.cpython-310.pyc,, +alembic/runtime/environment.py,sha256=L6bDW1dvw8L4zwxlTG8KnT0xcCgLXxUfdRpzqlJoFjo,41479 +alembic/runtime/migration.py,sha256=lu9_z_qyWmNzSM52_FgdXP_G52PTmTTeOeMBQAkQTFg,49997 +alembic/script/__init__.py,sha256=lSj06O391Iy5avWAiq8SPs6N8RBgxkSPjP8wpXcNDGg,100 +alembic/script/__pycache__/__init__.cpython-310.pyc,, +alembic/script/__pycache__/base.cpython-310.pyc,, +alembic/script/__pycache__/revision.cpython-310.pyc,, +alembic/script/__pycache__/write_hooks.cpython-310.pyc,, +alembic/script/base.py,sha256=O9xYzMCesjlSoUGD613Co0PhoCcpPBxxJzD3yt1lKK8,36924 +alembic/script/revision.py,sha256=BQcJoMCIXtSJRLCvdasgLOtCx9O7A8wsSym1FsqLW4s,62307 +alembic/script/write_hooks.py,sha256=CAjh9U8m7eXz3W7pfQKVxG4UkHZrRIYKF6AkReakG2c,5015 +alembic/templates/async/README,sha256=ISVtAOvqvKk_5ThM5ioJE-lMkvf9IbknFUFVU_vPma4,58 +alembic/templates/async/__pycache__/env.cpython-310.pyc,, +alembic/templates/async/alembic.ini.mako,sha256=ubJvGp-ai_C3RGJ_k5sedviIcIb2HIeExT1Y1K4X44o,4684 +alembic/templates/async/env.py,sha256=zbOCf3Y7w2lg92hxSwmG1MM_7y56i_oRH4AKp0pQBYo,2389 +alembic/templates/async/script.py.mako,sha256=04kgeBtNMa4cCnG8CfQcKt6P6rnloIfj8wy0u_DBydM,704 +alembic/templates/generic/README,sha256=MVlc9TYmr57RbhXET6QxgyCcwWP7w-vLkEsirENqiIQ,38 +alembic/templates/generic/__pycache__/env.cpython-310.pyc,, +alembic/templates/generic/alembic.ini.mako,sha256=CqVaOZbWULb1NYjC6XTnLELPQ_TA9NPMOHKxJeG0vNY,4684 +alembic/templates/generic/env.py,sha256=TLRWOVW3Xpt_Tpf8JFzlnoPn_qoUu8UV77Y4o9XD6yI,2103 +alembic/templates/generic/script.py.mako,sha256=04kgeBtNMa4cCnG8CfQcKt6P6rnloIfj8wy0u_DBydM,704 +alembic/templates/multidb/README,sha256=dWLDhnBgphA4Nzb7sNlMfCS3_06YqVbHhz-9O5JNqyI,606 +alembic/templates/multidb/__pycache__/env.cpython-310.pyc,, 
+alembic/templates/multidb/alembic.ini.mako,sha256=V-1FL7zyxHX1K_tBe_6Ax7DmGB_TevQvB77eIUfWvwk,5010 +alembic/templates/multidb/env.py,sha256=6zNjnW8mXGUk7erTsAvrfhvqoczJ-gagjVq1Ypg2YIQ,4230 +alembic/templates/multidb/script.py.mako,sha256=ZbCXMkI5Wj2dwNKcxuVGkKZ7Iav93BNx_bM4zbGi3c8,1235 +alembic/templates/pyproject/README,sha256=dMhIiFoeM7EdeaOXBs3mVQ6zXACMyGXDb_UBB6sGRA0,60 +alembic/templates/pyproject/__pycache__/env.cpython-310.pyc,, +alembic/templates/pyproject/alembic.ini.mako,sha256=bQnEoydnLOUgg9vNbTOys4r5MaW8lmwYFXSrlfdEEkw,782 +alembic/templates/pyproject/env.py,sha256=TLRWOVW3Xpt_Tpf8JFzlnoPn_qoUu8UV77Y4o9XD6yI,2103 +alembic/templates/pyproject/pyproject.toml.mako,sha256=9CzBcdamN6ylIiJ-oGCaw__E84XbonVkNkuET3J7LKI,2645 +alembic/templates/pyproject/script.py.mako,sha256=04kgeBtNMa4cCnG8CfQcKt6P6rnloIfj8wy0u_DBydM,704 +alembic/testing/__init__.py,sha256=JXvXAqIwFZB6-ep-BmeIqIH8xJ92XPt7DWCNkMDuJ-g,1249 +alembic/testing/__pycache__/__init__.cpython-310.pyc,, +alembic/testing/__pycache__/assertions.cpython-310.pyc,, +alembic/testing/__pycache__/env.cpython-310.pyc,, +alembic/testing/__pycache__/fixtures.cpython-310.pyc,, +alembic/testing/__pycache__/requirements.cpython-310.pyc,, +alembic/testing/__pycache__/schemacompare.cpython-310.pyc,, +alembic/testing/__pycache__/util.cpython-310.pyc,, +alembic/testing/__pycache__/warnings.cpython-310.pyc,, +alembic/testing/assertions.py,sha256=qcqf3tRAUe-A12NzuK_yxlksuX9OZKRC5E8pKIdBnPg,5302 +alembic/testing/env.py,sha256=pka7fjwOC8hYL6X0XE4oPkJpy_1WX01bL7iP7gpO_4I,11551 +alembic/testing/fixtures.py,sha256=fOzsRF8SW6CWpAH0sZpUHcgsJjun9EHnp4k2S3Lq5eU,9920 +alembic/testing/plugin/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +alembic/testing/plugin/__pycache__/__init__.cpython-310.pyc,, +alembic/testing/plugin/__pycache__/bootstrap.cpython-310.pyc,, +alembic/testing/plugin/bootstrap.py,sha256=9C6wtjGrIVztZ928w27hsQE0KcjDLIUtUN3dvZKsMVk,50 +alembic/testing/requirements.py,sha256=gNnnvgPCuiqKeHmiNymdQuYIjQ0BrxiPxu_in4eHEsc,4180 +alembic/testing/schemacompare.py,sha256=N5UqSNCOJetIKC4vKhpYzQEpj08XkdgIoqBmEPQ3tlc,4838 +alembic/testing/suite/__init__.py,sha256=MvE7-hwbaVN1q3NM-ztGxORU9dnIelUCINKqNxewn7Y,288 +alembic/testing/suite/__pycache__/__init__.cpython-310.pyc,, +alembic/testing/suite/__pycache__/_autogen_fixtures.cpython-310.pyc,, +alembic/testing/suite/__pycache__/test_autogen_comments.cpython-310.pyc,, +alembic/testing/suite/__pycache__/test_autogen_computed.cpython-310.pyc,, +alembic/testing/suite/__pycache__/test_autogen_diffs.cpython-310.pyc,, +alembic/testing/suite/__pycache__/test_autogen_fks.cpython-310.pyc,, +alembic/testing/suite/__pycache__/test_autogen_identity.cpython-310.pyc,, +alembic/testing/suite/__pycache__/test_environment.cpython-310.pyc,, +alembic/testing/suite/__pycache__/test_op.cpython-310.pyc,, +alembic/testing/suite/_autogen_fixtures.py,sha256=Drrz_FKb9KDjq8hkwxtPkJVY1sCY7Biw-Muzb8kANp8,13480 +alembic/testing/suite/test_autogen_comments.py,sha256=aEGqKUDw4kHjnDk298aoGcQvXJWmZXcIX_2FxH4cJK8,6283 +alembic/testing/suite/test_autogen_computed.py,sha256=-5wran56qXo3afAbSk8cuSDDpbQweyJ61RF-GaVuZbA,4126 +alembic/testing/suite/test_autogen_diffs.py,sha256=T4SR1n_kmcOKYhR4W1-dA0e5sddJ69DSVL2HW96kAkE,8394 +alembic/testing/suite/test_autogen_fks.py,sha256=AqFmb26Buex167HYa9dZWOk8x-JlB1OK3bwcvvjDFaU,32927 +alembic/testing/suite/test_autogen_identity.py,sha256=kcuqngG7qXAKPJDX4U8sRzPKHEJECHuZ0DtuaS6tVkk,5824 +alembic/testing/suite/test_environment.py,sha256=OwD-kpESdLoc4byBrGrXbZHvqtPbzhFCG4W9hJOJXPQ,11877 
+alembic/testing/suite/test_op.py,sha256=2XQCdm_NmnPxHGuGj7hmxMzIhKxXNotUsKdACXzE1mM,1343 +alembic/testing/util.py,sha256=CQrcQDA8fs_7ME85z5ydb-Bt70soIIID-qNY1vbR2dg,3350 +alembic/testing/warnings.py,sha256=cDDWzvxNZE6x9dME2ACTXSv01G81JcIbE1GIE_s1kvg,831 +alembic/util/__init__.py,sha256=_Zj_xp6ssKLyoLHUFzmKhnc8mhwXW8D8h7qyX-wO56M,1519 +alembic/util/__pycache__/__init__.cpython-310.pyc,, +alembic/util/__pycache__/compat.cpython-310.pyc,, +alembic/util/__pycache__/editor.cpython-310.pyc,, +alembic/util/__pycache__/exc.cpython-310.pyc,, +alembic/util/__pycache__/langhelpers.cpython-310.pyc,, +alembic/util/__pycache__/messaging.cpython-310.pyc,, +alembic/util/__pycache__/pyfiles.cpython-310.pyc,, +alembic/util/__pycache__/sqla_compat.cpython-310.pyc,, +alembic/util/compat.py,sha256=Vt5xCn5Y675jI4seKNBV4IVnCl9V4wyH3OBI2w7U0EY,4248 +alembic/util/editor.py,sha256=JIz6_BdgV8_oKtnheR6DZoB7qnrHrlRgWjx09AsTsUw,2546 +alembic/util/exc.py,sha256=ZBlTQ8g-Jkb1iYFhFHs9djilRz0SSQ0Foc5SSoENs5o,564 +alembic/util/langhelpers.py,sha256=LpOcovnhMnP45kTt8zNJ4BHpyQrlF40OL6yDXjqKtsE,10026 +alembic/util/messaging.py,sha256=3bEBoDy4EAXETXAvArlYjeMITXDTgPTu6ZoE3ytnzSw,3294 +alembic/util/pyfiles.py,sha256=kOBjZEytRkBKsQl0LAj2sbKJMQazjwQ_5UeMKSIvVFo,4730 +alembic/util/sqla_compat.py,sha256=SWAL4hJck4XLnUpe-oI3AGiH8w9kgDBe1_oakCfdT_Q,14785 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/WHEEL new file mode 100644 index 000000000..e7fa31b6f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: setuptools (80.9.0) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/entry_points.txt b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/entry_points.txt new file mode 100644 index 000000000..594526817 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/entry_points.txt @@ -0,0 +1,2 @@ +[console_scripts] +alembic = alembic.config:main diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/licenses/LICENSE b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/licenses/LICENSE new file mode 100644 index 000000000..ab4bb1616 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/licenses/LICENSE @@ -0,0 +1,19 @@ +Copyright 2009-2025 Michael Bayer. + +Permission is hereby granted, free of charge, to any person obtaining a copy of +this software and associated documentation files (the "Software"), to deal in +the Software without restriction, including without limitation the rights to +use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies +of the Software, and to permit persons to whom the Software is furnished to do +so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/top_level.txt b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/top_level.txt new file mode 100644 index 000000000..b5bd98d32 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic-1.16.2.dist-info/top_level.txt @@ -0,0 +1 @@ +alembic diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/__init__.py new file mode 100644 index 000000000..395dc0461 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/__init__.py @@ -0,0 +1,4 @@ +from . import context +from . import op + +__version__ = "1.16.2" diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/__main__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/__main__.py new file mode 100644 index 000000000..af1b8e870 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/__main__.py @@ -0,0 +1,4 @@ +from .config import main + +if __name__ == "__main__": + main(prog="alembic") diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/__init__.py new file mode 100644 index 000000000..445ddb251 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/__init__.py @@ -0,0 +1,10 @@ +from .api import _render_migration_diffs as _render_migration_diffs +from .api import compare_metadata as compare_metadata +from .api import produce_migrations as produce_migrations +from .api import render_python_code as render_python_code +from .api import RevisionContext as RevisionContext +from .compare import _produce_net_changes as _produce_net_changes +from .compare import comparators as comparators +from .render import render_op_text as render_op_text +from .render import renderers as renderers +from .rewriter import Rewriter as Rewriter diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/api.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/api.py new file mode 100644 index 000000000..811462e82 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/api.py @@ -0,0 +1,650 @@ +from __future__ import annotations + +import contextlib +from typing import Any +from typing import Dict +from typing import Iterator +from typing import List +from typing import Optional +from typing import Sequence +from typing import Set +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy import inspect + +from . import compare +from . import render +from .. 
import util +from ..operations import ops +from ..util import sqla_compat + +"""Provide the 'autogenerate' feature which can produce migration operations +automatically.""" + +if TYPE_CHECKING: + from sqlalchemy.engine import Connection + from sqlalchemy.engine import Dialect + from sqlalchemy.engine import Inspector + from sqlalchemy.sql.schema import MetaData + from sqlalchemy.sql.schema import SchemaItem + from sqlalchemy.sql.schema import Table + + from ..config import Config + from ..operations.ops import DowngradeOps + from ..operations.ops import MigrationScript + from ..operations.ops import UpgradeOps + from ..runtime.environment import NameFilterParentNames + from ..runtime.environment import NameFilterType + from ..runtime.environment import ProcessRevisionDirectiveFn + from ..runtime.environment import RenderItemFn + from ..runtime.migration import MigrationContext + from ..script.base import Script + from ..script.base import ScriptDirectory + from ..script.revision import _GetRevArg + + +def compare_metadata(context: MigrationContext, metadata: MetaData) -> Any: + """Compare a database schema to that given in a + :class:`~sqlalchemy.schema.MetaData` instance. + + The database connection is presented in the context + of a :class:`.MigrationContext` object, which + provides database connectivity as well as optional + comparison functions to use for datatypes and + server defaults - see the "autogenerate" arguments + at :meth:`.EnvironmentContext.configure` + for details on these. + + The return format is a list of "diff" directives, + each representing individual differences:: + + from alembic.migration import MigrationContext + from alembic.autogenerate import compare_metadata + from sqlalchemy import ( + create_engine, + MetaData, + Column, + Integer, + String, + Table, + text, + ) + import pprint + + engine = create_engine("sqlite://") + + with engine.begin() as conn: + conn.execute( + text( + ''' + create table foo ( + id integer not null primary key, + old_data varchar, + x integer + ) + ''' + ) + ) + conn.execute(text("create table bar (data varchar)")) + + metadata = MetaData() + Table( + "foo", + metadata, + Column("id", Integer, primary_key=True), + Column("data", Integer), + Column("x", Integer, nullable=False), + ) + Table("bat", metadata, Column("info", String)) + + mc = MigrationContext.configure(engine.connect()) + + diff = compare_metadata(mc, metadata) + pprint.pprint(diff, indent=2, width=20) + + Output:: + + [ + ( + "add_table", + Table( + "bat", + MetaData(), + Column("info", String(), table=), + schema=None, + ), + ), + ( + "remove_table", + Table( + "bar", + MetaData(), + Column("data", VARCHAR(), table=), + schema=None, + ), + ), + ( + "add_column", + None, + "foo", + Column("data", Integer(), table=), + ), + [ + ( + "modify_nullable", + None, + "foo", + "x", + { + "existing_comment": None, + "existing_server_default": False, + "existing_type": INTEGER(), + }, + True, + False, + ) + ], + ( + "remove_column", + None, + "foo", + Column("old_data", VARCHAR(), table=), + ), + ] + + :param context: a :class:`.MigrationContext` + instance. + :param metadata: a :class:`~sqlalchemy.schema.MetaData` + instance. + + .. seealso:: + + :func:`.produce_migrations` - produces a :class:`.MigrationScript` + structure based on metadata comparison. 
+ + """ + + migration_script = produce_migrations(context, metadata) + assert migration_script.upgrade_ops is not None + return migration_script.upgrade_ops.as_diffs() + + +def produce_migrations( + context: MigrationContext, metadata: MetaData +) -> MigrationScript: + """Produce a :class:`.MigrationScript` structure based on schema + comparison. + + This function does essentially what :func:`.compare_metadata` does, + but then runs the resulting list of diffs to produce the full + :class:`.MigrationScript` object. For an example of what this looks like, + see the example in :ref:`customizing_revision`. + + .. seealso:: + + :func:`.compare_metadata` - returns more fundamental "diff" + data from comparing a schema. + + """ + + autogen_context = AutogenContext(context, metadata=metadata) + + migration_script = ops.MigrationScript( + rev_id=None, + upgrade_ops=ops.UpgradeOps([]), + downgrade_ops=ops.DowngradeOps([]), + ) + + compare._populate_migration_script(autogen_context, migration_script) + + return migration_script + + +def render_python_code( + up_or_down_op: Union[UpgradeOps, DowngradeOps], + sqlalchemy_module_prefix: str = "sa.", + alembic_module_prefix: str = "op.", + render_as_batch: bool = False, + imports: Sequence[str] = (), + render_item: Optional[RenderItemFn] = None, + migration_context: Optional[MigrationContext] = None, + user_module_prefix: Optional[str] = None, +) -> str: + """Render Python code given an :class:`.UpgradeOps` or + :class:`.DowngradeOps` object. + + This is a convenience function that can be used to test the + autogenerate output of a user-defined :class:`.MigrationScript` structure. + + :param up_or_down_op: :class:`.UpgradeOps` or :class:`.DowngradeOps` object + :param sqlalchemy_module_prefix: module prefix for SQLAlchemy objects + :param alembic_module_prefix: module prefix for Alembic constructs + :param render_as_batch: use "batch operations" style for rendering + :param imports: sequence of import symbols to add + :param render_item: callable to render items + :param migration_context: optional :class:`.MigrationContext` + :param user_module_prefix: optional string prefix for user-defined types + + .. 
versionadded:: 1.11.0 + + """ + opts = { + "sqlalchemy_module_prefix": sqlalchemy_module_prefix, + "alembic_module_prefix": alembic_module_prefix, + "render_item": render_item, + "render_as_batch": render_as_batch, + "user_module_prefix": user_module_prefix, + } + + if migration_context is None: + from ..runtime.migration import MigrationContext + from sqlalchemy.engine.default import DefaultDialect + + migration_context = MigrationContext.configure( + dialect=DefaultDialect() + ) + + autogen_context = AutogenContext(migration_context, opts=opts) + autogen_context.imports = set(imports) + return render._indent( + render._render_cmd_body(up_or_down_op, autogen_context) + ) + + +def _render_migration_diffs( + context: MigrationContext, template_args: Dict[Any, Any] +) -> None: + """legacy, used by test_autogen_composition at the moment""" + + autogen_context = AutogenContext(context) + + upgrade_ops = ops.UpgradeOps([]) + compare._produce_net_changes(autogen_context, upgrade_ops) + + migration_script = ops.MigrationScript( + rev_id=None, + upgrade_ops=upgrade_ops, + downgrade_ops=upgrade_ops.reverse(), + ) + + render._render_python_into_templatevars( + autogen_context, migration_script, template_args + ) + + +class AutogenContext: + """Maintains configuration and state that's specific to an + autogenerate operation.""" + + metadata: Union[MetaData, Sequence[MetaData], None] = None + """The :class:`~sqlalchemy.schema.MetaData` object + representing the destination. + + This object is the one that is passed within ``env.py`` + to the :paramref:`.EnvironmentContext.configure.target_metadata` + parameter. It represents the structure of :class:`.Table` and other + objects as stated in the current database model, and represents the + destination structure for the database being examined. + + While the :class:`~sqlalchemy.schema.MetaData` object is primarily + known as a collection of :class:`~sqlalchemy.schema.Table` objects, + it also has an :attr:`~sqlalchemy.schema.MetaData.info` dictionary + that may be used by end-user schemes to store additional schema-level + objects that are to be compared in custom autogeneration schemes. + + """ + + connection: Optional[Connection] = None + """The :class:`~sqlalchemy.engine.base.Connection` object currently + connected to the database backend being compared. + + This is obtained from the :attr:`.MigrationContext.bind` and is + ultimately set up in the ``env.py`` script. + + """ + + dialect: Optional[Dialect] = None + """The :class:`~sqlalchemy.engine.Dialect` object currently in use. + + This is normally obtained from the + :attr:`~sqlalchemy.engine.base.Connection.dialect` attribute. + + """ + + imports: Set[str] = None # type: ignore[assignment] + """A ``set()`` which contains string Python import directives. + + The directives are to be rendered into the ``${imports}`` section + of a script template. The set is normally empty and can be modified + within hooks such as the + :paramref:`.EnvironmentContext.configure.render_item` hook. + + .. 
seealso:: + + :ref:`autogen_render_types` + + """ + + migration_context: MigrationContext = None # type: ignore[assignment] + """The :class:`.MigrationContext` established by the ``env.py`` script.""" + + def __init__( + self, + migration_context: MigrationContext, + metadata: Union[MetaData, Sequence[MetaData], None] = None, + opts: Optional[Dict[str, Any]] = None, + autogenerate: bool = True, + ) -> None: + if ( + autogenerate + and migration_context is not None + and migration_context.as_sql + ): + raise util.CommandError( + "autogenerate can't use as_sql=True as it prevents querying " + "the database for schema information" + ) + + if opts is None: + opts = migration_context.opts + + self.metadata = metadata = ( + opts.get("target_metadata", None) if metadata is None else metadata + ) + + if ( + autogenerate + and metadata is None + and migration_context is not None + and migration_context.script is not None + ): + raise util.CommandError( + "Can't proceed with --autogenerate option; environment " + "script %s does not provide " + "a MetaData object or sequence of objects to the context." + % (migration_context.script.env_py_location) + ) + + include_object = opts.get("include_object", None) + include_name = opts.get("include_name", None) + + object_filters = [] + name_filters = [] + if include_object: + object_filters.append(include_object) + if include_name: + name_filters.append(include_name) + + self._object_filters = object_filters + self._name_filters = name_filters + + self.migration_context = migration_context + if self.migration_context is not None: + self.connection = self.migration_context.bind + self.dialect = self.migration_context.dialect + + self.imports = set() + self.opts: Dict[str, Any] = opts + self._has_batch: bool = False + + @util.memoized_property + def inspector(self) -> Inspector: + if self.connection is None: + raise TypeError( + "can't return inspector as this " + "AutogenContext has no database connection" + ) + return inspect(self.connection) + + @contextlib.contextmanager + def _within_batch(self) -> Iterator[None]: + self._has_batch = True + yield + self._has_batch = False + + def run_name_filters( + self, + name: Optional[str], + type_: NameFilterType, + parent_names: NameFilterParentNames, + ) -> bool: + """Run the context's name filters and return True if the targets + should be part of the autogenerate operation. + + This method should be run for every kind of name encountered within the + reflection side of an autogenerate operation, giving the environment + the chance to filter what names should be reflected as database + objects. The filters here are produced directly via the + :paramref:`.EnvironmentContext.configure.include_name` parameter. + + """ + if "schema_name" in parent_names: + if type_ == "table": + table_name = name + else: + table_name = parent_names.get("table_name", None) + if table_name: + schema_name = parent_names["schema_name"] + if schema_name: + parent_names["schema_qualified_table_name"] = "%s.%s" % ( + schema_name, + table_name, + ) + else: + parent_names["schema_qualified_table_name"] = table_name + + for fn in self._name_filters: + if not fn(name, type_, parent_names): + return False + else: + return True + + def run_object_filters( + self, + object_: SchemaItem, + name: sqla_compat._ConstraintName, + type_: NameFilterType, + reflected: bool, + compare_to: Optional[SchemaItem], + ) -> bool: + """Run the context's object filters and return True if the targets + should be part of the autogenerate operation. 
+ + This method should be run for every kind of object encountered within + an autogenerate operation, giving the environment the chance + to filter what objects should be included in the comparison. + The filters here are produced directly via the + :paramref:`.EnvironmentContext.configure.include_object` parameter. + + """ + for fn in self._object_filters: + if not fn(object_, name, type_, reflected, compare_to): + return False + else: + return True + + run_filters = run_object_filters + + @util.memoized_property + def sorted_tables(self) -> List[Table]: + """Return an aggregate of the :attr:`.MetaData.sorted_tables` + collection(s). + + For a sequence of :class:`.MetaData` objects, this + concatenates the :attr:`.MetaData.sorted_tables` collection + for each individual :class:`.MetaData` in the order of the + sequence. It does **not** collate the sorted tables collections. + + """ + result = [] + for m in util.to_list(self.metadata): + result.extend(m.sorted_tables) + return result + + @util.memoized_property + def table_key_to_table(self) -> Dict[str, Table]: + """Return an aggregate of the :attr:`.MetaData.tables` dictionaries. + + The :attr:`.MetaData.tables` collection is a dictionary of table key + to :class:`.Table`; this method aggregates the dictionary across + multiple :class:`.MetaData` objects into one dictionary. + + Duplicate table keys are **not** supported; if two :class:`.MetaData` + objects contain the same table key, an exception is raised. + + """ + result: Dict[str, Table] = {} + for m in util.to_list(self.metadata): + intersect = set(result).intersection(set(m.tables)) + if intersect: + raise ValueError( + "Duplicate table keys across multiple " + "MetaData objects: %s" + % (", ".join('"%s"' % key for key in sorted(intersect))) + ) + + result.update(m.tables) + return result + + +class RevisionContext: + """Maintains configuration and state that's specific to a revision + file generation operation.""" + + generated_revisions: List[MigrationScript] + process_revision_directives: Optional[ProcessRevisionDirectiveFn] + + def __init__( + self, + config: Config, + script_directory: ScriptDirectory, + command_args: Dict[str, Any], + process_revision_directives: Optional[ + ProcessRevisionDirectiveFn + ] = None, + ) -> None: + self.config = config + self.script_directory = script_directory + self.command_args = command_args + self.process_revision_directives = process_revision_directives + self.template_args = { + "config": config # Let templates use config for + # e.g. 
multiple databases + } + self.generated_revisions = [self._default_revision()] + + def _to_script( + self, migration_script: MigrationScript + ) -> Optional[Script]: + template_args: Dict[str, Any] = self.template_args.copy() + + if getattr(migration_script, "_needs_render", False): + autogen_context = self._last_autogen_context + + # clear out existing imports if we are doing multiple + # renders + autogen_context.imports = set() + if migration_script.imports: + autogen_context.imports.update(migration_script.imports) + render._render_python_into_templatevars( + autogen_context, migration_script, template_args + ) + + assert migration_script.rev_id is not None + return self.script_directory.generate_revision( + migration_script.rev_id, + migration_script.message, + refresh=True, + head=migration_script.head, + splice=migration_script.splice, + branch_labels=migration_script.branch_label, + version_path=migration_script.version_path, + depends_on=migration_script.depends_on, + **template_args, + ) + + def run_autogenerate( + self, rev: _GetRevArg, migration_context: MigrationContext + ) -> None: + self._run_environment(rev, migration_context, True) + + def run_no_autogenerate( + self, rev: _GetRevArg, migration_context: MigrationContext + ) -> None: + self._run_environment(rev, migration_context, False) + + def _run_environment( + self, + rev: _GetRevArg, + migration_context: MigrationContext, + autogenerate: bool, + ) -> None: + if autogenerate: + if self.command_args["sql"]: + raise util.CommandError( + "Using --sql with --autogenerate does not make any sense" + ) + if set(self.script_directory.get_revisions(rev)) != set( + self.script_directory.get_revisions("heads") + ): + raise util.CommandError("Target database is not up to date.") + + upgrade_token = migration_context.opts["upgrade_token"] + downgrade_token = migration_context.opts["downgrade_token"] + + migration_script = self.generated_revisions[-1] + if not getattr(migration_script, "_needs_render", False): + migration_script.upgrade_ops_list[-1].upgrade_token = upgrade_token + migration_script.downgrade_ops_list[-1].downgrade_token = ( + downgrade_token + ) + migration_script._needs_render = True + else: + migration_script._upgrade_ops.append( + ops.UpgradeOps([], upgrade_token=upgrade_token) + ) + migration_script._downgrade_ops.append( + ops.DowngradeOps([], downgrade_token=downgrade_token) + ) + + autogen_context = AutogenContext( + migration_context, autogenerate=autogenerate + ) + self._last_autogen_context: AutogenContext = autogen_context + + if autogenerate: + compare._populate_migration_script( + autogen_context, migration_script + ) + + if self.process_revision_directives: + self.process_revision_directives( + migration_context, rev, self.generated_revisions + ) + + hook = migration_context.opts["process_revision_directives"] + if hook: + hook(migration_context, rev, self.generated_revisions) + + for migration_script in self.generated_revisions: + migration_script._needs_render = True + + def _default_revision(self) -> MigrationScript: + command_args: Dict[str, Any] = self.command_args + op = ops.MigrationScript( + rev_id=command_args["rev_id"] or util.rev_id(), + message=command_args["message"], + upgrade_ops=ops.UpgradeOps([]), + downgrade_ops=ops.DowngradeOps([]), + head=command_args["head"], + splice=command_args["splice"], + branch_label=command_args["branch_label"], + version_path=command_args["version_path"], + depends_on=command_args["depends_on"], + ) + return op + + def generate_scripts(self) -> 
Iterator[Optional[Script]]: + for generated_revision in self.generated_revisions: + yield self._to_script(generated_revision) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/compare.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/compare.py new file mode 100644 index 000000000..a9adda1cd --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/compare.py @@ -0,0 +1,1370 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +import contextlib +import logging +import re +from typing import Any +from typing import cast +from typing import Dict +from typing import Iterator +from typing import Mapping +from typing import Optional +from typing import Set +from typing import Tuple +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +from sqlalchemy import event +from sqlalchemy import inspect +from sqlalchemy import schema as sa_schema +from sqlalchemy import text +from sqlalchemy import types as sqltypes +from sqlalchemy.sql import expression +from sqlalchemy.sql.elements import conv +from sqlalchemy.sql.schema import ForeignKeyConstraint +from sqlalchemy.sql.schema import Index +from sqlalchemy.sql.schema import UniqueConstraint +from sqlalchemy.util import OrderedSet + +from .. import util +from ..ddl._autogen import is_index_sig +from ..ddl._autogen import is_uq_sig +from ..operations import ops +from ..util import sqla_compat + +if TYPE_CHECKING: + from typing import Literal + + from sqlalchemy.engine.reflection import Inspector + from sqlalchemy.sql.elements import quoted_name + from sqlalchemy.sql.elements import TextClause + from sqlalchemy.sql.schema import Column + from sqlalchemy.sql.schema import Table + + from alembic.autogenerate.api import AutogenContext + from alembic.ddl.impl import DefaultImpl + from alembic.operations.ops import AlterColumnOp + from alembic.operations.ops import MigrationScript + from alembic.operations.ops import ModifyTableOps + from alembic.operations.ops import UpgradeOps + from ..ddl._autogen import _constraint_sig + + +log = logging.getLogger(__name__) + + +def _populate_migration_script( + autogen_context: AutogenContext, migration_script: MigrationScript +) -> None: + upgrade_ops = migration_script.upgrade_ops_list[-1] + downgrade_ops = migration_script.downgrade_ops_list[-1] + + _produce_net_changes(autogen_context, upgrade_ops) + upgrade_ops.reverse_into(downgrade_ops) + + +comparators = util.Dispatcher(uselist=True) + + +def _produce_net_changes( + autogen_context: AutogenContext, upgrade_ops: UpgradeOps +) -> None: + connection = autogen_context.connection + assert connection is not None + include_schemas = autogen_context.opts.get("include_schemas", False) + + inspector: Inspector = inspect(connection) + + default_schema = connection.dialect.default_schema_name + schemas: Set[Optional[str]] + if include_schemas: + schemas = set(inspector.get_schema_names()) + # replace default schema name with None + schemas.discard("information_schema") + # replace the "default" schema with None + schemas.discard(default_schema) + schemas.add(None) + else: + schemas = {None} + + schemas = { + s for s in schemas if autogen_context.run_name_filters(s, "schema", {}) + } + + assert autogen_context.dialect is not None + comparators.dispatch("schema", autogen_context.dialect.name)( + autogen_context, upgrade_ops, schemas + ) + 
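+# --- editor's note, not part of upstream Alembic ---------------------------
+# _produce_net_changes above fans comparison out through util.Dispatcher:
+# each @comparators.dispatch_for(...) function below is a registered handler,
+# and dispatch(<key>, <dialect name>) returns a callable that runs the
+# handlers registered for that key (uselist=True allows several per key).
+# A minimal, hypothetical sketch of the same pattern — only util.Dispatcher
+# itself is upstream API here, the handler name is illustrative:
+#
+#     handlers = util.Dispatcher(uselist=True)
+#
+#     @handlers.dispatch_for("schema")
+#     def handle_schema(autogen_context, upgrade_ops, schemas):
+#         ...  # append ops.* directives to upgrade_ops.ops
+#
+#     handlers.dispatch("schema", autogen_context.dialect.name)(
+#         autogen_context, upgrade_ops, schemas
+#     )
+# ----------------------------------------------------------------------------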
+ +@comparators.dispatch_for("schema") +def _autogen_for_tables( + autogen_context: AutogenContext, + upgrade_ops: UpgradeOps, + schemas: Union[Set[None], Set[Optional[str]]], +) -> None: + inspector = autogen_context.inspector + + conn_table_names: Set[Tuple[Optional[str], str]] = set() + + version_table_schema = ( + autogen_context.migration_context.version_table_schema + ) + version_table = autogen_context.migration_context.version_table + + for schema_name in schemas: + tables = set(inspector.get_table_names(schema=schema_name)) + if schema_name == version_table_schema: + tables = tables.difference( + [autogen_context.migration_context.version_table] + ) + + conn_table_names.update( + (schema_name, tname) + for tname in tables + if autogen_context.run_name_filters( + tname, "table", {"schema_name": schema_name} + ) + ) + + metadata_table_names = OrderedSet( + [(table.schema, table.name) for table in autogen_context.sorted_tables] + ).difference([(version_table_schema, version_table)]) + + _compare_tables( + conn_table_names, + metadata_table_names, + inspector, + upgrade_ops, + autogen_context, + ) + + +def _compare_tables( + conn_table_names: set, + metadata_table_names: set, + inspector: Inspector, + upgrade_ops: UpgradeOps, + autogen_context: AutogenContext, +) -> None: + default_schema = inspector.bind.dialect.default_schema_name + + # tables coming from the connection will not have "schema" + # set if it matches default_schema_name; so we need a list + # of table names from local metadata that also have "None" if schema + # == default_schema_name. Most setups will be like this anyway but + # some are not (see #170) + metadata_table_names_no_dflt_schema = OrderedSet( + [ + (schema if schema != default_schema else None, tname) + for schema, tname in metadata_table_names + ] + ) + + # to adjust for the MetaData collection storing the tables either + # as "schemaname.tablename" or just "tablename", create a new lookup + # which will match the "non-default-schema" keys to the Table object. + tname_to_table = { + no_dflt_schema: autogen_context.table_key_to_table[ + sa_schema._get_table_key(tname, schema) + ] + for no_dflt_schema, (schema, tname) in zip( + metadata_table_names_no_dflt_schema, metadata_table_names + ) + } + metadata_table_names = metadata_table_names_no_dflt_schema + + for s, tname in metadata_table_names.difference(conn_table_names): + name = "%s.%s" % (s, tname) if s else tname + metadata_table = tname_to_table[(s, tname)] + if autogen_context.run_object_filters( + metadata_table, tname, "table", False, None + ): + upgrade_ops.ops.append( + ops.CreateTableOp.from_table(metadata_table) + ) + log.info("Detected added table %r", name) + modify_table_ops = ops.ModifyTableOps(tname, [], schema=s) + + comparators.dispatch("table")( + autogen_context, + modify_table_ops, + s, + tname, + None, + metadata_table, + ) + if not modify_table_ops.is_empty(): + upgrade_ops.ops.append(modify_table_ops) + + removal_metadata = sa_schema.MetaData() + for s, tname in conn_table_names.difference(metadata_table_names): + name = sa_schema._get_table_key(tname, s) + exists = name in removal_metadata.tables + t = sa_schema.Table(tname, removal_metadata, schema=s) + + if not exists: + event.listen( + t, + "column_reflect", + # fmt: off + autogen_context.migration_context.impl. 
+ _compat_autogen_column_reflect + (inspector), + # fmt: on + ) + _InspectorConv(inspector).reflect_table(t, include_columns=None) + if autogen_context.run_object_filters(t, tname, "table", True, None): + modify_table_ops = ops.ModifyTableOps(tname, [], schema=s) + + comparators.dispatch("table")( + autogen_context, modify_table_ops, s, tname, t, None + ) + if not modify_table_ops.is_empty(): + upgrade_ops.ops.append(modify_table_ops) + + upgrade_ops.ops.append(ops.DropTableOp.from_table(t)) + log.info("Detected removed table %r", name) + + existing_tables = conn_table_names.intersection(metadata_table_names) + + existing_metadata = sa_schema.MetaData() + conn_column_info = {} + for s, tname in existing_tables: + name = sa_schema._get_table_key(tname, s) + exists = name in existing_metadata.tables + t = sa_schema.Table(tname, existing_metadata, schema=s) + if not exists: + event.listen( + t, + "column_reflect", + # fmt: off + autogen_context.migration_context.impl. + _compat_autogen_column_reflect(inspector), + # fmt: on + ) + _InspectorConv(inspector).reflect_table(t, include_columns=None) + + conn_column_info[(s, tname)] = t + + for s, tname in sorted(existing_tables, key=lambda x: (x[0] or "", x[1])): + s = s or None + name = "%s.%s" % (s, tname) if s else tname + metadata_table = tname_to_table[(s, tname)] + conn_table = existing_metadata.tables[name] + + if autogen_context.run_object_filters( + metadata_table, tname, "table", False, conn_table + ): + modify_table_ops = ops.ModifyTableOps(tname, [], schema=s) + with _compare_columns( + s, + tname, + conn_table, + metadata_table, + modify_table_ops, + autogen_context, + inspector, + ): + comparators.dispatch("table")( + autogen_context, + modify_table_ops, + s, + tname, + conn_table, + metadata_table, + ) + + if not modify_table_ops.is_empty(): + upgrade_ops.ops.append(modify_table_ops) + + +_IndexColumnSortingOps: Mapping[str, Any] = util.immutabledict( + { + "asc": expression.asc, + "desc": expression.desc, + "nulls_first": expression.nullsfirst, + "nulls_last": expression.nullslast, + "nullsfirst": expression.nullsfirst, # 1_3 name + "nullslast": expression.nullslast, # 1_3 name + } +) + + +def _make_index( + impl: DefaultImpl, params: Dict[str, Any], conn_table: Table +) -> Optional[Index]: + exprs: list[Union[Column[Any], TextClause]] = [] + sorting = params.get("column_sorting") + + for num, col_name in enumerate(params["column_names"]): + item: Union[Column[Any], TextClause] + if col_name is None: + assert "expressions" in params + name = params["expressions"][num] + item = text(name) + else: + name = col_name + item = conn_table.c[col_name] + if sorting and name in sorting: + for operator in sorting[name]: + if operator in _IndexColumnSortingOps: + item = _IndexColumnSortingOps[operator](item) + exprs.append(item) + ix = sa_schema.Index( + params["name"], + *exprs, + unique=params["unique"], + _table=conn_table, + **impl.adjust_reflected_dialect_options(params, "index"), + ) + if "duplicates_constraint" in params: + ix.info["duplicates_constraint"] = params["duplicates_constraint"] + return ix + + +def _make_unique_constraint( + impl: DefaultImpl, params: Dict[str, Any], conn_table: Table +) -> UniqueConstraint: + uq = sa_schema.UniqueConstraint( + *[conn_table.c[cname] for cname in params["column_names"]], + name=params["name"], + **impl.adjust_reflected_dialect_options(params, "unique_constraint"), + ) + if "duplicates_index" in params: + uq.info["duplicates_index"] = params["duplicates_index"] + + return uq + + +def 
_make_foreign_key( + params: Dict[str, Any], conn_table: Table +) -> ForeignKeyConstraint: + tname = params["referred_table"] + if params["referred_schema"]: + tname = "%s.%s" % (params["referred_schema"], tname) + + options = params.get("options", {}) + + const = sa_schema.ForeignKeyConstraint( + [conn_table.c[cname] for cname in params["constrained_columns"]], + ["%s.%s" % (tname, n) for n in params["referred_columns"]], + onupdate=options.get("onupdate"), + ondelete=options.get("ondelete"), + deferrable=options.get("deferrable"), + initially=options.get("initially"), + name=params["name"], + ) + # needed by 0.7 + conn_table.append_constraint(const) + return const + + +@contextlib.contextmanager +def _compare_columns( + schema: Optional[str], + tname: Union[quoted_name, str], + conn_table: Table, + metadata_table: Table, + modify_table_ops: ModifyTableOps, + autogen_context: AutogenContext, + inspector: Inspector, +) -> Iterator[None]: + name = "%s.%s" % (schema, tname) if schema else tname + metadata_col_names = OrderedSet( + c.name for c in metadata_table.c if not c.system + ) + metadata_cols_by_name = { + c.name: c for c in metadata_table.c if not c.system + } + + conn_col_names = { + c.name: c + for c in conn_table.c + if autogen_context.run_name_filters( + c.name, "column", {"table_name": tname, "schema_name": schema} + ) + } + + for cname in metadata_col_names.difference(conn_col_names): + if autogen_context.run_object_filters( + metadata_cols_by_name[cname], cname, "column", False, None + ): + modify_table_ops.ops.append( + ops.AddColumnOp.from_column_and_tablename( + schema, tname, metadata_cols_by_name[cname] + ) + ) + log.info("Detected added column '%s.%s'", name, cname) + + for colname in metadata_col_names.intersection(conn_col_names): + metadata_col = metadata_cols_by_name[colname] + conn_col = conn_table.c[colname] + if not autogen_context.run_object_filters( + metadata_col, colname, "column", False, conn_col + ): + continue + alter_column_op = ops.AlterColumnOp(tname, colname, schema=schema) + + comparators.dispatch("column")( + autogen_context, + alter_column_op, + schema, + tname, + colname, + conn_col, + metadata_col, + ) + + if alter_column_op.has_changes(): + modify_table_ops.ops.append(alter_column_op) + + yield + + for cname in set(conn_col_names).difference(metadata_col_names): + if autogen_context.run_object_filters( + conn_table.c[cname], cname, "column", True, None + ): + modify_table_ops.ops.append( + ops.DropColumnOp.from_column_and_tablename( + schema, tname, conn_table.c[cname] + ) + ) + log.info("Detected removed column '%s.%s'", name, cname) + + +_C = TypeVar("_C", bound=Union[UniqueConstraint, ForeignKeyConstraint, Index]) + + +class _InspectorConv: + __slots__ = ("inspector",) + + def __init__(self, inspector): + self.inspector = inspector + + def _apply_reflectinfo_conv(self, consts): + if not consts: + return consts + for const in consts: + if const["name"] is not None and not isinstance( + const["name"], conv + ): + const["name"] = conv(const["name"]) + return consts + + def _apply_constraint_conv(self, consts): + if not consts: + return consts + for const in consts: + if const.name is not None and not isinstance(const.name, conv): + const.name = conv(const.name) + return consts + + def get_indexes(self, *args, **kw): + return self._apply_reflectinfo_conv( + self.inspector.get_indexes(*args, **kw) + ) + + def get_unique_constraints(self, *args, **kw): + return self._apply_reflectinfo_conv( + self.inspector.get_unique_constraints(*args, **kw) + ) + + 
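+    # Editor's note, not upstream: these get_* wrappers re-tag reflected
+    # names with sqlalchemy.sql.elements.conv(), which marks a string as an
+    # already-converted name so MetaData naming conventions are not applied
+    # to it a second time. Illustrative only:
+    #
+    #     from sqlalchemy.sql.elements import conv
+    #
+    #     name = conv("uq_user_email")  # treated as final; never re-templated
+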
def get_foreign_keys(self, *args, **kw): + return self._apply_reflectinfo_conv( + self.inspector.get_foreign_keys(*args, **kw) + ) + + def reflect_table(self, table, *, include_columns): + self.inspector.reflect_table(table, include_columns=include_columns) + + # I had a cool version of this using _ReflectInfo, however that doesn't + # work in 1.4 and it's not public API in 2.x. Then this is just a two + # liner. So there's no competition... + self._apply_constraint_conv(table.constraints) + self._apply_constraint_conv(table.indexes) + + +@comparators.dispatch_for("table") +def _compare_indexes_and_uniques( + autogen_context: AutogenContext, + modify_ops: ModifyTableOps, + schema: Optional[str], + tname: Union[quoted_name, str], + conn_table: Optional[Table], + metadata_table: Optional[Table], +) -> None: + inspector = autogen_context.inspector + is_create_table = conn_table is None + is_drop_table = metadata_table is None + impl = autogen_context.migration_context.impl + + # 1a. get raw indexes and unique constraints from metadata ... + if metadata_table is not None: + metadata_unique_constraints = { + uq + for uq in metadata_table.constraints + if isinstance(uq, sa_schema.UniqueConstraint) + } + metadata_indexes = set(metadata_table.indexes) + else: + metadata_unique_constraints = set() + metadata_indexes = set() + + conn_uniques = conn_indexes = frozenset() # type:ignore[var-annotated] + + supports_unique_constraints = False + + unique_constraints_duplicate_unique_indexes = False + + if conn_table is not None: + # 1b. ... and from connection, if the table exists + try: + conn_uniques = _InspectorConv(inspector).get_unique_constraints( + tname, schema=schema + ) + + supports_unique_constraints = True + except NotImplementedError: + pass + except TypeError: + # number of arguments is off for the base + # method in SQLAlchemy due to the cache decorator + # not being present + pass + else: + conn_uniques = [ # type:ignore[assignment] + uq + for uq in conn_uniques + if autogen_context.run_name_filters( + uq["name"], + "unique_constraint", + {"table_name": tname, "schema_name": schema}, + ) + ] + for uq in conn_uniques: + if uq.get("duplicates_index"): + unique_constraints_duplicate_unique_indexes = True + try: + conn_indexes = _InspectorConv(inspector).get_indexes( + tname, schema=schema + ) + except NotImplementedError: + pass + else: + conn_indexes = [ # type:ignore[assignment] + ix + for ix in conn_indexes + if autogen_context.run_name_filters( + ix["name"], + "index", + {"table_name": tname, "schema_name": schema}, + ) + ] + + # 2. convert conn-level objects from raw inspector records + # into schema objects + if is_drop_table: + # for DROP TABLE uniques are inline, don't need them + conn_uniques = set() # type:ignore[assignment] + else: + conn_uniques = { # type:ignore[assignment] + _make_unique_constraint(impl, uq_def, conn_table) + for uq_def in conn_uniques + } + + conn_indexes = { # type:ignore[assignment] + index + for index in ( + _make_index(impl, ix, conn_table) for ix in conn_indexes + ) + if index is not None + } + + # 2a. if the dialect dupes unique indexes as unique constraints + # (mysql and oracle), correct for that + + if unique_constraints_duplicate_unique_indexes: + _correct_for_uq_duplicates_uix( + conn_uniques, + conn_indexes, + metadata_unique_constraints, + metadata_indexes, + autogen_context.dialect, + impl, + ) + + # 3. 
give the dialect a chance to omit indexes and constraints that + # we know are either added implicitly by the DB or that the DB + # can't accurately report on + impl.correct_for_autogen_constraints( + conn_uniques, # type: ignore[arg-type] + conn_indexes, # type: ignore[arg-type] + metadata_unique_constraints, + metadata_indexes, + ) + + # 4. organize the constraints into "signature" collections, the + # _constraint_sig() objects provide a consistent facade over both + # Index and UniqueConstraint so we can easily work with them + # interchangeably + metadata_unique_constraints_sig = { + impl._create_metadata_constraint_sig(uq) + for uq in metadata_unique_constraints + } + + metadata_indexes_sig = { + impl._create_metadata_constraint_sig(ix) for ix in metadata_indexes + } + + conn_unique_constraints = { + impl._create_reflected_constraint_sig(uq) for uq in conn_uniques + } + + conn_indexes_sig = { + impl._create_reflected_constraint_sig(ix) for ix in conn_indexes + } + + # 5. index things by name, for those objects that have names + metadata_names = { + cast(str, c.md_name_to_sql_name(autogen_context)): c + for c in metadata_unique_constraints_sig.union(metadata_indexes_sig) + if c.is_named + } + + conn_uniques_by_name: Dict[sqla_compat._ConstraintName, _constraint_sig] + conn_indexes_by_name: Dict[sqla_compat._ConstraintName, _constraint_sig] + + conn_uniques_by_name = {c.name: c for c in conn_unique_constraints} + conn_indexes_by_name = {c.name: c for c in conn_indexes_sig} + conn_names = { + c.name: c + for c in conn_unique_constraints.union(conn_indexes_sig) + if sqla_compat.constraint_name_string(c.name) + } + + doubled_constraints = { + name: (conn_uniques_by_name[name], conn_indexes_by_name[name]) + for name in set(conn_uniques_by_name).intersection( + conn_indexes_by_name + ) + } + + # 6. index things by "column signature", to help with unnamed unique + # constraints. + conn_uniques_by_sig = {uq.unnamed: uq for uq in conn_unique_constraints} + metadata_uniques_by_sig = { + uq.unnamed: uq for uq in metadata_unique_constraints_sig + } + unnamed_metadata_uniques = { + uq.unnamed: uq + for uq in metadata_unique_constraints_sig + if not sqla_compat._constraint_is_named( + uq.const, autogen_context.dialect + ) + } + + # assumptions: + # 1. a unique constraint or an index from the connection *always* + # has a name. + # 2. an index on the metadata side *always* has a name. + # 3. a unique constraint on the metadata side *might* have a name. + # 4. The backend may double up indexes as unique constraints and + # vice versa (e.g. 
MySQL, Postgresql) + + def obj_added(obj: _constraint_sig): + if is_index_sig(obj): + if autogen_context.run_object_filters( + obj.const, obj.name, "index", False, None + ): + modify_ops.ops.append(ops.CreateIndexOp.from_index(obj.const)) + log.info( + "Detected added index %r on '%s'", + obj.name, + obj.column_names, + ) + elif is_uq_sig(obj): + if not supports_unique_constraints: + # can't report unique indexes as added if we don't + # detect them + return + if is_create_table or is_drop_table: + # unique constraints are created inline with table defs + return + if autogen_context.run_object_filters( + obj.const, obj.name, "unique_constraint", False, None + ): + modify_ops.ops.append( + ops.AddConstraintOp.from_constraint(obj.const) + ) + log.info( + "Detected added unique constraint %r on '%s'", + obj.name, + obj.column_names, + ) + else: + assert False + + def obj_removed(obj: _constraint_sig): + if is_index_sig(obj): + if obj.is_unique and not supports_unique_constraints: + # many databases double up unique constraints + # as unique indexes. without that list we can't + # be sure what we're doing here + return + + if autogen_context.run_object_filters( + obj.const, obj.name, "index", True, None + ): + modify_ops.ops.append(ops.DropIndexOp.from_index(obj.const)) + log.info("Detected removed index %r on %r", obj.name, tname) + elif is_uq_sig(obj): + if is_create_table or is_drop_table: + # if the whole table is being dropped, we don't need to + # consider unique constraint separately + return + if autogen_context.run_object_filters( + obj.const, obj.name, "unique_constraint", True, None + ): + modify_ops.ops.append( + ops.DropConstraintOp.from_constraint(obj.const) + ) + log.info( + "Detected removed unique constraint %r on %r", + obj.name, + tname, + ) + else: + assert False + + def obj_changed( + old: _constraint_sig, + new: _constraint_sig, + msg: str, + ): + if is_index_sig(old): + assert is_index_sig(new) + + if autogen_context.run_object_filters( + new.const, new.name, "index", False, old.const + ): + log.info( + "Detected changed index %r on %r: %s", old.name, tname, msg + ) + modify_ops.ops.append(ops.DropIndexOp.from_index(old.const)) + modify_ops.ops.append(ops.CreateIndexOp.from_index(new.const)) + elif is_uq_sig(old): + assert is_uq_sig(new) + + if autogen_context.run_object_filters( + new.const, new.name, "unique_constraint", False, old.const + ): + log.info( + "Detected changed unique constraint %r on %r: %s", + old.name, + tname, + msg, + ) + modify_ops.ops.append( + ops.DropConstraintOp.from_constraint(old.const) + ) + modify_ops.ops.append( + ops.AddConstraintOp.from_constraint(new.const) + ) + else: + assert False + + for removed_name in sorted(set(conn_names).difference(metadata_names)): + conn_obj = conn_names[removed_name] + if ( + is_uq_sig(conn_obj) + and conn_obj.unnamed in unnamed_metadata_uniques + ): + continue + elif removed_name in doubled_constraints: + conn_uq, conn_idx = doubled_constraints[removed_name] + if ( + all( + conn_idx.unnamed != meta_idx.unnamed + for meta_idx in metadata_indexes_sig + ) + and conn_uq.unnamed not in metadata_uniques_by_sig + ): + obj_removed(conn_uq) + obj_removed(conn_idx) + else: + obj_removed(conn_obj) + + for existing_name in sorted(set(metadata_names).intersection(conn_names)): + metadata_obj = metadata_names[existing_name] + + if existing_name in doubled_constraints: + conn_uq, conn_idx = doubled_constraints[existing_name] + if is_index_sig(metadata_obj): + conn_obj = conn_idx + else: + conn_obj = conn_uq + else: + 
conn_obj = conn_names[existing_name] + + if type(conn_obj) != type(metadata_obj): + obj_removed(conn_obj) + obj_added(metadata_obj) + else: + comparison = metadata_obj.compare_to_reflected(conn_obj) + + if comparison.is_different: + # constraint are different + obj_changed(conn_obj, metadata_obj, comparison.message) + elif comparison.is_skip: + # constraint cannot be compared, skip them + thing = ( + "index" if is_index_sig(conn_obj) else "unique constraint" + ) + log.info( + "Cannot compare %s %r, assuming equal and skipping. %s", + thing, + conn_obj.name, + comparison.message, + ) + else: + # constraint are equal + assert comparison.is_equal + + for added_name in sorted(set(metadata_names).difference(conn_names)): + obj = metadata_names[added_name] + obj_added(obj) + + for uq_sig in unnamed_metadata_uniques: + if uq_sig not in conn_uniques_by_sig: + obj_added(unnamed_metadata_uniques[uq_sig]) + + +def _correct_for_uq_duplicates_uix( + conn_unique_constraints, + conn_indexes, + metadata_unique_constraints, + metadata_indexes, + dialect, + impl, +): + # dedupe unique indexes vs. constraints, since MySQL / Oracle + # doesn't really have unique constraints as a separate construct. + # but look in the metadata and try to maintain constructs + # that already seem to be defined one way or the other + # on that side. This logic was formerly local to MySQL dialect, + # generalized to Oracle and others. See #276 + + # resolve final rendered name for unique constraints defined in the + # metadata. this includes truncation of long names. naming convention + # names currently should already be set as cons.name, however leave this + # to the sqla_compat to decide. + metadata_cons_names = [ + (sqla_compat._get_constraint_final_name(cons, dialect), cons) + for cons in metadata_unique_constraints + ] + + metadata_uq_names = { + name for name, cons in metadata_cons_names if name is not None + } + + unnamed_metadata_uqs = { + impl._create_metadata_constraint_sig(cons).unnamed + for name, cons in metadata_cons_names + if name is None + } + + metadata_ix_names = { + sqla_compat._get_constraint_final_name(cons, dialect) + for cons in metadata_indexes + if cons.unique + } + + # for reflection side, names are in their final database form + # already since they're from the database + conn_ix_names = {cons.name: cons for cons in conn_indexes if cons.unique} + + uqs_dupe_indexes = { + cons.name: cons + for cons in conn_unique_constraints + if cons.info["duplicates_index"] + } + + for overlap in uqs_dupe_indexes: + if overlap not in metadata_uq_names: + if ( + impl._create_reflected_constraint_sig( + uqs_dupe_indexes[overlap] + ).unnamed + not in unnamed_metadata_uqs + ): + conn_unique_constraints.discard(uqs_dupe_indexes[overlap]) + elif overlap not in metadata_ix_names: + conn_indexes.discard(conn_ix_names[overlap]) + + +@comparators.dispatch_for("column") +def _compare_nullable( + autogen_context: AutogenContext, + alter_column_op: AlterColumnOp, + schema: Optional[str], + tname: Union[quoted_name, str], + cname: Union[quoted_name, str], + conn_col: Column[Any], + metadata_col: Column[Any], +) -> None: + metadata_col_nullable = metadata_col.nullable + conn_col_nullable = conn_col.nullable + alter_column_op.existing_nullable = conn_col_nullable + + if conn_col_nullable is not metadata_col_nullable: + if ( + sqla_compat._server_default_is_computed( + metadata_col.server_default, conn_col.server_default + ) + and sqla_compat._nullability_might_be_unset(metadata_col) + or ( + sqla_compat._server_default_is_identity( 
+ metadata_col.server_default, conn_col.server_default + ) + ) + ): + log.info( + "Ignoring nullable change on identity column '%s.%s'", + tname, + cname, + ) + else: + alter_column_op.modify_nullable = metadata_col_nullable + log.info( + "Detected %s on column '%s.%s'", + "NULL" if metadata_col_nullable else "NOT NULL", + tname, + cname, + ) + + +@comparators.dispatch_for("column") +def _setup_autoincrement( + autogen_context: AutogenContext, + alter_column_op: AlterColumnOp, + schema: Optional[str], + tname: Union[quoted_name, str], + cname: quoted_name, + conn_col: Column[Any], + metadata_col: Column[Any], +) -> None: + if metadata_col.table._autoincrement_column is metadata_col: + alter_column_op.kw["autoincrement"] = True + elif metadata_col.autoincrement is True: + alter_column_op.kw["autoincrement"] = True + elif metadata_col.autoincrement is False: + alter_column_op.kw["autoincrement"] = False + + +@comparators.dispatch_for("column") +def _compare_type( + autogen_context: AutogenContext, + alter_column_op: AlterColumnOp, + schema: Optional[str], + tname: Union[quoted_name, str], + cname: Union[quoted_name, str], + conn_col: Column[Any], + metadata_col: Column[Any], +) -> None: + conn_type = conn_col.type + alter_column_op.existing_type = conn_type + metadata_type = metadata_col.type + if conn_type._type_affinity is sqltypes.NullType: + log.info( + "Couldn't determine database type " "for column '%s.%s'", + tname, + cname, + ) + return + if metadata_type._type_affinity is sqltypes.NullType: + log.info( + "Column '%s.%s' has no type within " "the model; can't compare", + tname, + cname, + ) + return + + isdiff = autogen_context.migration_context._compare_type( + conn_col, metadata_col + ) + + if isdiff: + alter_column_op.modify_type = metadata_type + log.info( + "Detected type change from %r to %r on '%s.%s'", + conn_type, + metadata_type, + tname, + cname, + ) + + +def _render_server_default_for_compare( + metadata_default: Optional[Any], autogen_context: AutogenContext +) -> Optional[str]: + if isinstance(metadata_default, sa_schema.DefaultClause): + if isinstance(metadata_default.arg, str): + metadata_default = metadata_default.arg + else: + metadata_default = str( + metadata_default.arg.compile( + dialect=autogen_context.dialect, + compile_kwargs={"literal_binds": True}, + ) + ) + if isinstance(metadata_default, str): + return metadata_default + else: + return None + + +def _normalize_computed_default(sqltext: str) -> str: + """we want to warn if a computed sql expression has changed. however + we don't want false positives and the warning is not that critical. + so filter out most forms of variability from the SQL text. + + """ + + return re.sub(r"[ \(\)'\"`\[\]\t\r\n]", "", sqltext).lower() + + +def _compare_computed_default( + autogen_context: AutogenContext, + alter_column_op: AlterColumnOp, + schema: Optional[str], + tname: str, + cname: str, + conn_col: Column[Any], + metadata_col: Column[Any], +) -> None: + rendered_metadata_default = str( + cast(sa_schema.Computed, metadata_col.server_default).sqltext.compile( + dialect=autogen_context.dialect, + compile_kwargs={"literal_binds": True}, + ) + ) + + # since we cannot change computed columns, we do only a crude comparison + # here where we try to eliminate syntactical differences in order to + # get a minimal comparison just to emit a warning. 
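+    # Editor's illustration (hypothetical values): with the regex in
+    # _normalize_computed_default, "(price * 2)" and "PRICE*2" both reduce
+    # to "price*2", so only substantive expression changes reach the
+    # warning emitted below.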
+ + rendered_metadata_default = _normalize_computed_default( + rendered_metadata_default + ) + + if isinstance(conn_col.server_default, sa_schema.Computed): + rendered_conn_default = str( + conn_col.server_default.sqltext.compile( + dialect=autogen_context.dialect, + compile_kwargs={"literal_binds": True}, + ) + ) + if rendered_conn_default is None: + rendered_conn_default = "" + else: + rendered_conn_default = _normalize_computed_default( + rendered_conn_default + ) + else: + rendered_conn_default = "" + + if rendered_metadata_default != rendered_conn_default: + _warn_computed_not_supported(tname, cname) + + +def _warn_computed_not_supported(tname: str, cname: str) -> None: + util.warn("Computed default on %s.%s cannot be modified" % (tname, cname)) + + +def _compare_identity_default( + autogen_context, + alter_column_op, + schema, + tname, + cname, + conn_col, + metadata_col, +): + impl = autogen_context.migration_context.impl + diff, ignored_attr, is_alter = impl._compare_identity_default( + metadata_col.server_default, conn_col.server_default + ) + + return diff, is_alter + + +@comparators.dispatch_for("column") +def _compare_server_default( + autogen_context: AutogenContext, + alter_column_op: AlterColumnOp, + schema: Optional[str], + tname: Union[quoted_name, str], + cname: Union[quoted_name, str], + conn_col: Column[Any], + metadata_col: Column[Any], +) -> Optional[bool]: + metadata_default = metadata_col.server_default + conn_col_default = conn_col.server_default + if conn_col_default is None and metadata_default is None: + return False + + if sqla_compat._server_default_is_computed(metadata_default): + return _compare_computed_default( # type:ignore[func-returns-value] + autogen_context, + alter_column_op, + schema, + tname, + cname, + conn_col, + metadata_col, + ) + if sqla_compat._server_default_is_computed(conn_col_default): + _warn_computed_not_supported(tname, cname) + return False + + if sqla_compat._server_default_is_identity( + metadata_default, conn_col_default + ): + alter_column_op.existing_server_default = conn_col_default + diff, is_alter = _compare_identity_default( + autogen_context, + alter_column_op, + schema, + tname, + cname, + conn_col, + metadata_col, + ) + if is_alter: + alter_column_op.modify_server_default = metadata_default + if diff: + log.info( + "Detected server default on column '%s.%s': " + "identity options attributes %s", + tname, + cname, + sorted(diff), + ) + else: + rendered_metadata_default = _render_server_default_for_compare( + metadata_default, autogen_context + ) + + rendered_conn_default = ( + cast(Any, conn_col_default).arg.text if conn_col_default else None + ) + + alter_column_op.existing_server_default = conn_col_default + + is_diff = autogen_context.migration_context._compare_server_default( + conn_col, + metadata_col, + rendered_metadata_default, + rendered_conn_default, + ) + if is_diff: + alter_column_op.modify_server_default = metadata_default + log.info("Detected server default on column '%s.%s'", tname, cname) + + return None + + +@comparators.dispatch_for("column") +def _compare_column_comment( + autogen_context: AutogenContext, + alter_column_op: AlterColumnOp, + schema: Optional[str], + tname: Union[quoted_name, str], + cname: quoted_name, + conn_col: Column[Any], + metadata_col: Column[Any], +) -> Optional[Literal[False]]: + assert autogen_context.dialect is not None + if not autogen_context.dialect.supports_comments: + return None + + metadata_comment = metadata_col.comment + conn_col_comment = conn_col.comment + if 
conn_col_comment is None and metadata_comment is None: + return False + + alter_column_op.existing_comment = conn_col_comment + + if conn_col_comment != metadata_comment: + alter_column_op.modify_comment = metadata_comment + log.info("Detected column comment '%s.%s'", tname, cname) + + return None + + +@comparators.dispatch_for("table") +def _compare_foreign_keys( + autogen_context: AutogenContext, + modify_table_ops: ModifyTableOps, + schema: Optional[str], + tname: Union[quoted_name, str], + conn_table: Table, + metadata_table: Table, +) -> None: + # if we're doing CREATE TABLE, all FKs are created + # inline within the table def + if conn_table is None or metadata_table is None: + return + + inspector = autogen_context.inspector + metadata_fks = { + fk + for fk in metadata_table.constraints + if isinstance(fk, sa_schema.ForeignKeyConstraint) + } + + conn_fks_list = [ + fk + for fk in _InspectorConv(inspector).get_foreign_keys( + tname, schema=schema + ) + if autogen_context.run_name_filters( + fk["name"], + "foreign_key_constraint", + {"table_name": tname, "schema_name": schema}, + ) + ] + + conn_fks = { + _make_foreign_key(const, conn_table) for const in conn_fks_list + } + + impl = autogen_context.migration_context.impl + + # give the dialect a chance to correct the FKs to match more + # closely + autogen_context.migration_context.impl.correct_for_autogen_foreignkeys( + conn_fks, metadata_fks + ) + + metadata_fks_sig = { + impl._create_metadata_constraint_sig(fk) for fk in metadata_fks + } + + conn_fks_sig = { + impl._create_reflected_constraint_sig(fk) for fk in conn_fks + } + + # check if reflected FKs include options, indicating the backend + # can reflect FK options + if conn_fks_list and "options" in conn_fks_list[0]: + conn_fks_by_sig = {c.unnamed: c for c in conn_fks_sig} + metadata_fks_by_sig = {c.unnamed: c for c in metadata_fks_sig} + else: + # otherwise compare by sig without options added + conn_fks_by_sig = {c.unnamed_no_options: c for c in conn_fks_sig} + metadata_fks_by_sig = { + c.unnamed_no_options: c for c in metadata_fks_sig + } + + metadata_fks_by_name = { + c.name: c for c in metadata_fks_sig if c.name is not None + } + conn_fks_by_name = {c.name: c for c in conn_fks_sig if c.name is not None} + + def _add_fk(obj, compare_to): + if autogen_context.run_object_filters( + obj.const, obj.name, "foreign_key_constraint", False, compare_to + ): + modify_table_ops.ops.append( + ops.CreateForeignKeyOp.from_constraint(const.const) + ) + + log.info( + "Detected added foreign key (%s)(%s) on table %s%s", + ", ".join(obj.source_columns), + ", ".join(obj.target_columns), + "%s." % obj.source_schema if obj.source_schema else "", + obj.source_table, + ) + + def _remove_fk(obj, compare_to): + if autogen_context.run_object_filters( + obj.const, obj.name, "foreign_key_constraint", True, compare_to + ): + modify_table_ops.ops.append( + ops.DropConstraintOp.from_constraint(obj.const) + ) + log.info( + "Detected removed foreign key (%s)(%s) on table %s%s", + ", ".join(obj.source_columns), + ", ".join(obj.target_columns), + "%s." % obj.source_schema if obj.source_schema else "", + obj.source_table, + ) + + # so far it appears we don't need to do this by name at all. 
+ # SQLite doesn't preserve constraint names anyway + + for removed_sig in set(conn_fks_by_sig).difference(metadata_fks_by_sig): + const = conn_fks_by_sig[removed_sig] + if removed_sig not in metadata_fks_by_sig: + compare_to = ( + metadata_fks_by_name[const.name].const + if const.name in metadata_fks_by_name + else None + ) + _remove_fk(const, compare_to) + + for added_sig in set(metadata_fks_by_sig).difference(conn_fks_by_sig): + const = metadata_fks_by_sig[added_sig] + if added_sig not in conn_fks_by_sig: + compare_to = ( + conn_fks_by_name[const.name].const + if const.name in conn_fks_by_name + else None + ) + _add_fk(const, compare_to) + + +@comparators.dispatch_for("table") +def _compare_table_comment( + autogen_context: AutogenContext, + modify_table_ops: ModifyTableOps, + schema: Optional[str], + tname: Union[quoted_name, str], + conn_table: Optional[Table], + metadata_table: Optional[Table], +) -> None: + assert autogen_context.dialect is not None + if not autogen_context.dialect.supports_comments: + return + + # if we're doing CREATE TABLE, comments will be created inline + # with the create_table op. + if conn_table is None or metadata_table is None: + return + + if conn_table.comment is None and metadata_table.comment is None: + return + + if metadata_table.comment is None and conn_table.comment is not None: + modify_table_ops.ops.append( + ops.DropTableCommentOp( + tname, existing_comment=conn_table.comment, schema=schema + ) + ) + elif metadata_table.comment != conn_table.comment: + modify_table_ops.ops.append( + ops.CreateTableCommentOp( + tname, + metadata_table.comment, + existing_comment=conn_table.comment, + schema=schema, + ) + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/render.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/render.py new file mode 100644 index 000000000..bd20ced8d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/render.py @@ -0,0 +1,1159 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +from io import StringIO +import re +from typing import Any +from typing import cast +from typing import Dict +from typing import List +from typing import Optional +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from mako.pygen import PythonPrinter +from sqlalchemy import schema as sa_schema +from sqlalchemy import sql +from sqlalchemy import types as sqltypes +from sqlalchemy.sql.base import _DialectArgView +from sqlalchemy.sql.elements import conv +from sqlalchemy.sql.elements import Label +from sqlalchemy.sql.elements import quoted_name + +from .. 
import util +from ..operations import ops +from ..util import sqla_compat + +if TYPE_CHECKING: + from typing import Literal + + from sqlalchemy import Computed + from sqlalchemy import Identity + from sqlalchemy.sql.elements import ColumnElement + from sqlalchemy.sql.elements import TextClause + from sqlalchemy.sql.schema import CheckConstraint + from sqlalchemy.sql.schema import Column + from sqlalchemy.sql.schema import Constraint + from sqlalchemy.sql.schema import FetchedValue + from sqlalchemy.sql.schema import ForeignKey + from sqlalchemy.sql.schema import ForeignKeyConstraint + from sqlalchemy.sql.schema import Index + from sqlalchemy.sql.schema import MetaData + from sqlalchemy.sql.schema import PrimaryKeyConstraint + from sqlalchemy.sql.schema import UniqueConstraint + from sqlalchemy.sql.sqltypes import ARRAY + from sqlalchemy.sql.type_api import TypeEngine + + from alembic.autogenerate.api import AutogenContext + from alembic.config import Config + from alembic.operations.ops import MigrationScript + from alembic.operations.ops import ModifyTableOps + + +MAX_PYTHON_ARGS = 255 + + +def _render_gen_name( + autogen_context: AutogenContext, + name: sqla_compat._ConstraintName, +) -> Optional[Union[quoted_name, str, _f_name]]: + if isinstance(name, conv): + return _f_name(_alembic_autogenerate_prefix(autogen_context), name) + else: + return sqla_compat.constraint_name_or_none(name) + + +def _indent(text: str) -> str: + text = re.compile(r"^", re.M).sub(" ", text).strip() + text = re.compile(r" +$", re.M).sub("", text) + return text + + +def _render_python_into_templatevars( + autogen_context: AutogenContext, + migration_script: MigrationScript, + template_args: Dict[str, Union[str, Config]], +) -> None: + imports = autogen_context.imports + + for upgrade_ops, downgrade_ops in zip( + migration_script.upgrade_ops_list, migration_script.downgrade_ops_list + ): + template_args[upgrade_ops.upgrade_token] = _indent( + _render_cmd_body(upgrade_ops, autogen_context) + ) + template_args[downgrade_ops.downgrade_token] = _indent( + _render_cmd_body(downgrade_ops, autogen_context) + ) + template_args["imports"] = "\n".join(sorted(imports)) + + +default_renderers = renderers = util.Dispatcher() + + +def _render_cmd_body( + op_container: ops.OpContainer, + autogen_context: AutogenContext, +) -> str: + buf = StringIO() + printer = PythonPrinter(buf) + + printer.writeline( + "# ### commands auto generated by Alembic - please adjust! 
###" + ) + + has_lines = False + for op in op_container.ops: + lines = render_op(autogen_context, op) + has_lines = has_lines or bool(lines) + + for line in lines: + printer.writeline(line) + + if not has_lines: + printer.writeline("pass") + + printer.writeline("# ### end Alembic commands ###") + + return buf.getvalue() + + +def render_op( + autogen_context: AutogenContext, op: ops.MigrateOperation +) -> List[str]: + renderer = renderers.dispatch(op) + lines = util.to_list(renderer(autogen_context, op)) + return lines + + +def render_op_text( + autogen_context: AutogenContext, op: ops.MigrateOperation +) -> str: + return "\n".join(render_op(autogen_context, op)) + + +@renderers.dispatch_for(ops.ModifyTableOps) +def _render_modify_table( + autogen_context: AutogenContext, op: ModifyTableOps +) -> List[str]: + opts = autogen_context.opts + render_as_batch = opts.get("render_as_batch", False) + + if op.ops: + lines = [] + if render_as_batch: + with autogen_context._within_batch(): + lines.append( + "with op.batch_alter_table(%r, schema=%r) as batch_op:" + % (op.table_name, op.schema) + ) + for t_op in op.ops: + t_lines = render_op(autogen_context, t_op) + lines.extend(t_lines) + lines.append("") + else: + for t_op in op.ops: + t_lines = render_op(autogen_context, t_op) + lines.extend(t_lines) + + return lines + else: + return [] + + +@renderers.dispatch_for(ops.CreateTableCommentOp) +def _render_create_table_comment( + autogen_context: AutogenContext, op: ops.CreateTableCommentOp +) -> str: + if autogen_context._has_batch: + templ = ( + "{prefix}create_table_comment(\n" + "{indent}{comment},\n" + "{indent}existing_comment={existing}\n" + ")" + ) + else: + templ = ( + "{prefix}create_table_comment(\n" + "{indent}'{tname}',\n" + "{indent}{comment},\n" + "{indent}existing_comment={existing},\n" + "{indent}schema={schema}\n" + ")" + ) + return templ.format( + prefix=_alembic_autogenerate_prefix(autogen_context), + tname=op.table_name, + comment="%r" % op.comment if op.comment is not None else None, + existing=( + "%r" % op.existing_comment + if op.existing_comment is not None + else None + ), + schema="'%s'" % op.schema if op.schema is not None else None, + indent=" ", + ) + + +@renderers.dispatch_for(ops.DropTableCommentOp) +def _render_drop_table_comment( + autogen_context: AutogenContext, op: ops.DropTableCommentOp +) -> str: + if autogen_context._has_batch: + templ = ( + "{prefix}drop_table_comment(\n" + "{indent}existing_comment={existing}\n" + ")" + ) + else: + templ = ( + "{prefix}drop_table_comment(\n" + "{indent}'{tname}',\n" + "{indent}existing_comment={existing},\n" + "{indent}schema={schema}\n" + ")" + ) + return templ.format( + prefix=_alembic_autogenerate_prefix(autogen_context), + tname=op.table_name, + existing=( + "%r" % op.existing_comment + if op.existing_comment is not None + else None + ), + schema="'%s'" % op.schema if op.schema is not None else None, + indent=" ", + ) + + +@renderers.dispatch_for(ops.CreateTableOp) +def _add_table(autogen_context: AutogenContext, op: ops.CreateTableOp) -> str: + table = op.to_table() + + args = [ + col + for col in [ + _render_column(col, autogen_context) for col in table.columns + ] + if col + ] + sorted( + [ + rcons + for rcons in [ + _render_constraint( + cons, autogen_context, op._namespace_metadata + ) + for cons in table.constraints + ] + if rcons is not None + ] + ) + + if len(args) > MAX_PYTHON_ARGS: + args_str = "*[" + ",\n".join(args) + "]" + else: + args_str = ",\n".join(args) + + text = 
"%(prefix)screate_table(%(tablename)r,\n%(args)s" % { + "tablename": _ident(op.table_name), + "prefix": _alembic_autogenerate_prefix(autogen_context), + "args": args_str, + } + if op.schema: + text += ",\nschema=%r" % _ident(op.schema) + + comment = table.comment + if comment: + text += ",\ncomment=%r" % _ident(comment) + + info = table.info + if info: + text += f",\ninfo={info!r}" + + for k in sorted(op.kw): + text += ",\n%s=%r" % (k.replace(" ", "_"), op.kw[k]) + + if table._prefixes: + prefixes = ", ".join("'%s'" % p for p in table._prefixes) + text += ",\nprefixes=[%s]" % prefixes + + if op.if_not_exists is not None: + text += ",\nif_not_exists=%r" % bool(op.if_not_exists) + + text += "\n)" + return text + + +@renderers.dispatch_for(ops.DropTableOp) +def _drop_table(autogen_context: AutogenContext, op: ops.DropTableOp) -> str: + text = "%(prefix)sdrop_table(%(tname)r" % { + "prefix": _alembic_autogenerate_prefix(autogen_context), + "tname": _ident(op.table_name), + } + if op.schema: + text += ", schema=%r" % _ident(op.schema) + + if op.if_exists is not None: + text += ", if_exists=%r" % bool(op.if_exists) + + text += ")" + return text + + +def _render_dialect_kwargs_items( + autogen_context: AutogenContext, dialect_kwargs: _DialectArgView +) -> list[str]: + return [ + f"{key}={_render_potential_expr(val, autogen_context)}" + for key, val in dialect_kwargs.items() + ] + + +@renderers.dispatch_for(ops.CreateIndexOp) +def _add_index(autogen_context: AutogenContext, op: ops.CreateIndexOp) -> str: + index = op.to_index() + + has_batch = autogen_context._has_batch + + if has_batch: + tmpl = ( + "%(prefix)screate_index(%(name)r, [%(columns)s], " + "unique=%(unique)r%(kwargs)s)" + ) + else: + tmpl = ( + "%(prefix)screate_index(%(name)r, %(table)r, [%(columns)s], " + "unique=%(unique)r%(schema)s%(kwargs)s)" + ) + + assert index.table is not None + + opts = _render_dialect_kwargs_items(autogen_context, index.dialect_kwargs) + if op.if_not_exists is not None: + opts.append("if_not_exists=%r" % bool(op.if_not_exists)) + text = tmpl % { + "prefix": _alembic_autogenerate_prefix(autogen_context), + "name": _render_gen_name(autogen_context, index.name), + "table": _ident(index.table.name), + "columns": ", ".join( + _get_index_rendered_expressions(index, autogen_context) + ), + "unique": index.unique or False, + "schema": ( + (", schema=%r" % _ident(index.table.schema)) + if index.table.schema + else "" + ), + "kwargs": ", " + ", ".join(opts) if opts else "", + } + return text + + +@renderers.dispatch_for(ops.DropIndexOp) +def _drop_index(autogen_context: AutogenContext, op: ops.DropIndexOp) -> str: + index = op.to_index() + + has_batch = autogen_context._has_batch + + if has_batch: + tmpl = "%(prefix)sdrop_index(%(name)r%(kwargs)s)" + else: + tmpl = ( + "%(prefix)sdrop_index(%(name)r, " + "table_name=%(table_name)r%(schema)s%(kwargs)s)" + ) + opts = _render_dialect_kwargs_items(autogen_context, index.dialect_kwargs) + if op.if_exists is not None: + opts.append("if_exists=%r" % bool(op.if_exists)) + text = tmpl % { + "prefix": _alembic_autogenerate_prefix(autogen_context), + "name": _render_gen_name(autogen_context, op.index_name), + "table_name": _ident(op.table_name), + "schema": ((", schema=%r" % _ident(op.schema)) if op.schema else ""), + "kwargs": ", " + ", ".join(opts) if opts else "", + } + return text + + +@renderers.dispatch_for(ops.CreateUniqueConstraintOp) +def _add_unique_constraint( + autogen_context: AutogenContext, op: ops.CreateUniqueConstraintOp +) -> List[str]: + return 
[_uq_constraint(op.to_constraint(), autogen_context, True)] + + +@renderers.dispatch_for(ops.CreateForeignKeyOp) +def _add_fk_constraint( + autogen_context: AutogenContext, op: ops.CreateForeignKeyOp +) -> str: + constraint = op.to_constraint() + args = [repr(_render_gen_name(autogen_context, op.constraint_name))] + if not autogen_context._has_batch: + args.append(repr(_ident(op.source_table))) + + args.extend( + [ + repr(_ident(op.referent_table)), + repr([_ident(col) for col in op.local_cols]), + repr([_ident(col) for col in op.remote_cols]), + ] + ) + kwargs = [ + "referent_schema", + "onupdate", + "ondelete", + "initially", + "deferrable", + "use_alter", + "match", + ] + if not autogen_context._has_batch: + kwargs.insert(0, "source_schema") + + for k in kwargs: + if k in op.kw: + value = op.kw[k] + if value is not None: + args.append("%s=%r" % (k, value)) + + dialect_kwargs = _render_dialect_kwargs_items( + autogen_context, constraint.dialect_kwargs + ) + + return "%(prefix)screate_foreign_key(%(args)s%(dialect_kwargs)s)" % { + "prefix": _alembic_autogenerate_prefix(autogen_context), + "args": ", ".join(args), + "dialect_kwargs": ( + ", " + ", ".join(dialect_kwargs) if dialect_kwargs else "" + ), + } + + +@renderers.dispatch_for(ops.CreatePrimaryKeyOp) +def _add_pk_constraint(constraint, autogen_context): + raise NotImplementedError() + + +@renderers.dispatch_for(ops.CreateCheckConstraintOp) +def _add_check_constraint(constraint, autogen_context): + raise NotImplementedError() + + +@renderers.dispatch_for(ops.DropConstraintOp) +def _drop_constraint( + autogen_context: AutogenContext, op: ops.DropConstraintOp +) -> str: + prefix = _alembic_autogenerate_prefix(autogen_context) + name = _render_gen_name(autogen_context, op.constraint_name) + schema = _ident(op.schema) if op.schema else None + type_ = _ident(op.constraint_type) if op.constraint_type else None + if_exists = op.if_exists + params_strs = [] + params_strs.append(repr(name)) + if not autogen_context._has_batch: + params_strs.append(repr(_ident(op.table_name))) + if schema is not None: + params_strs.append(f"schema={schema!r}") + if type_ is not None: + params_strs.append(f"type_={type_!r}") + if if_exists is not None: + params_strs.append(f"if_exists={if_exists}") + + return f"{prefix}drop_constraint({', '.join(params_strs)})" + + +@renderers.dispatch_for(ops.AddColumnOp) +def _add_column(autogen_context: AutogenContext, op: ops.AddColumnOp) -> str: + schema, tname, column, if_not_exists = ( + op.schema, + op.table_name, + op.column, + op.if_not_exists, + ) + if autogen_context._has_batch: + template = "%(prefix)sadd_column(%(column)s)" + else: + template = "%(prefix)sadd_column(%(tname)r, %(column)s" + if schema: + template += ", schema=%(schema)r" + if if_not_exists is not None: + template += ", if_not_exists=%(if_not_exists)r" + template += ")" + text = template % { + "prefix": _alembic_autogenerate_prefix(autogen_context), + "tname": tname, + "column": _render_column(column, autogen_context), + "schema": schema, + "if_not_exists": if_not_exists, + } + return text + + +@renderers.dispatch_for(ops.DropColumnOp) +def _drop_column(autogen_context: AutogenContext, op: ops.DropColumnOp) -> str: + schema, tname, column_name, if_exists = ( + op.schema, + op.table_name, + op.column_name, + op.if_exists, + ) + + if autogen_context._has_batch: + template = "%(prefix)sdrop_column(%(cname)r)" + else: + template = "%(prefix)sdrop_column(%(tname)r, %(cname)r" + if schema: + template += ", schema=%(schema)r" + if if_exists is not None: + 
template += ", if_exists=%(if_exists)r" + template += ")" + + text = template % { + "prefix": _alembic_autogenerate_prefix(autogen_context), + "tname": _ident(tname), + "cname": _ident(column_name), + "schema": _ident(schema), + "if_exists": if_exists, + } + return text + + +@renderers.dispatch_for(ops.AlterColumnOp) +def _alter_column( + autogen_context: AutogenContext, op: ops.AlterColumnOp +) -> str: + tname = op.table_name + cname = op.column_name + server_default = op.modify_server_default + type_ = op.modify_type + nullable = op.modify_nullable + comment = op.modify_comment + newname = op.modify_name + autoincrement = op.kw.get("autoincrement", None) + existing_type = op.existing_type + existing_nullable = op.existing_nullable + existing_comment = op.existing_comment + existing_server_default = op.existing_server_default + schema = op.schema + + indent = " " * 11 + + if autogen_context._has_batch: + template = "%(prefix)salter_column(%(cname)r" + else: + template = "%(prefix)salter_column(%(tname)r, %(cname)r" + + text = template % { + "prefix": _alembic_autogenerate_prefix(autogen_context), + "tname": tname, + "cname": cname, + } + if existing_type is not None: + text += ",\n%sexisting_type=%s" % ( + indent, + _repr_type(existing_type, autogen_context), + ) + if server_default is not False: + rendered = _render_server_default(server_default, autogen_context) + text += ",\n%sserver_default=%s" % (indent, rendered) + + if newname is not None: + text += ",\n%snew_column_name=%r" % (indent, newname) + if type_ is not None: + text += ",\n%stype_=%s" % (indent, _repr_type(type_, autogen_context)) + if nullable is not None: + text += ",\n%snullable=%r" % (indent, nullable) + if comment is not False: + text += ",\n%scomment=%r" % (indent, comment) + if existing_comment is not None: + text += ",\n%sexisting_comment=%r" % (indent, existing_comment) + if nullable is None and existing_nullable is not None: + text += ",\n%sexisting_nullable=%r" % (indent, existing_nullable) + if autoincrement is not None: + text += ",\n%sautoincrement=%r" % (indent, autoincrement) + if server_default is False and existing_server_default: + rendered = _render_server_default( + existing_server_default, autogen_context + ) + text += ",\n%sexisting_server_default=%s" % (indent, rendered) + if schema and not autogen_context._has_batch: + text += ",\n%sschema=%r" % (indent, schema) + text += ")" + return text + + +class _f_name: + def __init__(self, prefix: str, name: conv) -> None: + self.prefix = prefix + self.name = name + + def __repr__(self) -> str: + return "%sf(%r)" % (self.prefix, _ident(self.name)) + + +def _ident(name: Optional[Union[quoted_name, str]]) -> Optional[str]: + """produce a __repr__() object for a string identifier that may + use quoted_name() in SQLAlchemy 0.9 and greater. + + The issue worked around here is that quoted_name() doesn't have + very good repr() behavior by itself when unicode is involved. 
+ + """ + if name is None: + return name + elif isinstance(name, quoted_name): + return str(name) + elif isinstance(name, str): + return name + + +def _render_potential_expr( + value: Any, + autogen_context: AutogenContext, + *, + wrap_in_element: bool = True, + is_server_default: bool = False, + is_index: bool = False, +) -> str: + if isinstance(value, sql.ClauseElement): + sql_text = autogen_context.migration_context.impl.render_ddl_sql_expr( + value, is_server_default=is_server_default, is_index=is_index + ) + if wrap_in_element: + prefix = _sqlalchemy_autogenerate_prefix(autogen_context) + element = "literal_column" if is_index else "text" + value_str = f"{prefix}{element}({sql_text!r})" + if ( + is_index + and isinstance(value, Label) + and type(value.name) is str + ): + return value_str + f".label({value.name!r})" + else: + return value_str + else: + return repr(sql_text) + else: + return repr(value) + + +def _get_index_rendered_expressions( + idx: Index, autogen_context: AutogenContext +) -> List[str]: + return [ + ( + repr(_ident(getattr(exp, "name", None))) + if isinstance(exp, sa_schema.Column) + else _render_potential_expr(exp, autogen_context, is_index=True) + ) + for exp in idx.expressions + ] + + +def _uq_constraint( + constraint: UniqueConstraint, + autogen_context: AutogenContext, + alter: bool, +) -> str: + opts: List[Tuple[str, Any]] = [] + + has_batch = autogen_context._has_batch + + if constraint.deferrable: + opts.append(("deferrable", constraint.deferrable)) + if constraint.initially: + opts.append(("initially", constraint.initially)) + if not has_batch and alter and constraint.table.schema: + opts.append(("schema", _ident(constraint.table.schema))) + if not alter and constraint.name: + opts.append( + ("name", _render_gen_name(autogen_context, constraint.name)) + ) + dialect_options = _render_dialect_kwargs_items( + autogen_context, constraint.dialect_kwargs + ) + + if alter: + args = [repr(_render_gen_name(autogen_context, constraint.name))] + if not has_batch: + args += [repr(_ident(constraint.table.name))] + args.append(repr([_ident(col.name) for col in constraint.columns])) + args.extend(["%s=%r" % (k, v) for k, v in opts]) + args.extend(dialect_options) + return "%(prefix)screate_unique_constraint(%(args)s)" % { + "prefix": _alembic_autogenerate_prefix(autogen_context), + "args": ", ".join(args), + } + else: + args = [repr(_ident(col.name)) for col in constraint.columns] + args.extend(["%s=%r" % (k, v) for k, v in opts]) + args.extend(dialect_options) + return "%(prefix)sUniqueConstraint(%(args)s)" % { + "prefix": _sqlalchemy_autogenerate_prefix(autogen_context), + "args": ", ".join(args), + } + + +def _user_autogenerate_prefix(autogen_context, target): + prefix = autogen_context.opts["user_module_prefix"] + if prefix is None: + return "%s." % target.__module__ + else: + return prefix + + +def _sqlalchemy_autogenerate_prefix(autogen_context: AutogenContext) -> str: + return autogen_context.opts["sqlalchemy_module_prefix"] or "" + + +def _alembic_autogenerate_prefix(autogen_context: AutogenContext) -> str: + if autogen_context._has_batch: + return "batch_op." 
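+        # (annotation added for clarity: within a batch_alter_table
+        # context, rendered operations target the "batch_op." namespace
+        # rather than the module-level "op." prefix.)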
+ else: + return autogen_context.opts["alembic_module_prefix"] or "" + + +def _user_defined_render( + type_: str, object_: Any, autogen_context: AutogenContext +) -> Union[str, Literal[False]]: + if "render_item" in autogen_context.opts: + render = autogen_context.opts["render_item"] + if render: + rendered = render(type_, object_, autogen_context) + if rendered is not False: + return rendered + return False + + +def _render_column( + column: Column[Any], autogen_context: AutogenContext +) -> str: + rendered = _user_defined_render("column", column, autogen_context) + if rendered is not False: + return rendered + + args: List[str] = [] + opts: List[Tuple[str, Any]] = [] + + if column.server_default: + rendered = _render_server_default( # type:ignore[assignment] + column.server_default, autogen_context + ) + if rendered: + if _should_render_server_default_positionally( + column.server_default + ): + args.append(rendered) + else: + opts.append(("server_default", rendered)) + + if ( + column.autoincrement is not None + and column.autoincrement != sqla_compat.AUTOINCREMENT_DEFAULT + ): + opts.append(("autoincrement", column.autoincrement)) + + if column.nullable is not None: + opts.append(("nullable", column.nullable)) + + if column.system: + opts.append(("system", column.system)) + + comment = column.comment + if comment: + opts.append(("comment", "%r" % comment)) + + # TODO: for non-ascii colname, assign a "key" + return "%(prefix)sColumn(%(name)r, %(type)s, %(args)s%(kwargs)s)" % { + "prefix": _sqlalchemy_autogenerate_prefix(autogen_context), + "name": _ident(column.name), + "type": _repr_type(column.type, autogen_context), + "args": ", ".join([str(arg) for arg in args]) + ", " if args else "", + "kwargs": ( + ", ".join( + ["%s=%s" % (kwname, val) for kwname, val in opts] + + [ + "%s=%s" + % (key, _render_potential_expr(val, autogen_context)) + for key, val in column.kwargs.items() + ] + ) + ), + } + + +def _should_render_server_default_positionally(server_default: Any) -> bool: + return sqla_compat._server_default_is_computed( + server_default + ) or sqla_compat._server_default_is_identity(server_default) + + +def _render_server_default( + default: Optional[ + Union[FetchedValue, str, TextClause, ColumnElement[Any]] + ], + autogen_context: AutogenContext, + repr_: bool = True, +) -> Optional[str]: + rendered = _user_defined_render("server_default", default, autogen_context) + if rendered is not False: + return rendered + + if sqla_compat._server_default_is_computed(default): + return _render_computed(cast("Computed", default), autogen_context) + elif sqla_compat._server_default_is_identity(default): + return _render_identity(cast("Identity", default), autogen_context) + elif isinstance(default, sa_schema.DefaultClause): + if isinstance(default.arg, str): + default = default.arg + else: + return _render_potential_expr( + default.arg, autogen_context, is_server_default=True + ) + + if isinstance(default, str) and repr_: + default = repr(re.sub(r"^'|'$", "", default)) + + return cast(str, default) + + +def _render_computed( + computed: Computed, autogen_context: AutogenContext +) -> str: + text = _render_potential_expr( + computed.sqltext, autogen_context, wrap_in_element=False + ) + + kwargs = {} + if computed.persisted is not None: + kwargs["persisted"] = computed.persisted + return "%(prefix)sComputed(%(text)s, %(kwargs)s)" % { + "prefix": _sqlalchemy_autogenerate_prefix(autogen_context), + "text": text, + "kwargs": (", ".join("%s=%s" % pair for pair in kwargs.items())), + } + + +def 
_render_identity( + identity: Identity, autogen_context: AutogenContext +) -> str: + kwargs = sqla_compat._get_identity_options_dict( + identity, dialect_kwargs=True + ) + + return "%(prefix)sIdentity(%(kwargs)s)" % { + "prefix": _sqlalchemy_autogenerate_prefix(autogen_context), + "kwargs": (", ".join("%s=%s" % pair for pair in kwargs.items())), + } + + +def _repr_type( + type_: TypeEngine, + autogen_context: AutogenContext, + _skip_variants: bool = False, +) -> str: + rendered = _user_defined_render("type", type_, autogen_context) + if rendered is not False: + return rendered + + if hasattr(autogen_context.migration_context, "impl"): + impl_rt = autogen_context.migration_context.impl.render_type( + type_, autogen_context + ) + else: + impl_rt = None + + mod = type(type_).__module__ + imports = autogen_context.imports + + if not _skip_variants and sqla_compat._type_has_variants(type_): + return _render_Variant_type(type_, autogen_context) + elif mod.startswith("sqlalchemy.dialects"): + match = re.match(r"sqlalchemy\.dialects\.(\w+)", mod) + assert match is not None + dname = match.group(1) + if imports is not None: + imports.add("from sqlalchemy.dialects import %s" % dname) + if impl_rt: + return impl_rt + else: + return "%s.%r" % (dname, type_) + elif impl_rt: + return impl_rt + elif mod.startswith("sqlalchemy."): + if "_render_%s_type" % type_.__visit_name__ in globals(): + fn = globals()["_render_%s_type" % type_.__visit_name__] + return fn(type_, autogen_context) + else: + prefix = _sqlalchemy_autogenerate_prefix(autogen_context) + return "%s%r" % (prefix, type_) + else: + prefix = _user_autogenerate_prefix(autogen_context, type_) + return "%s%r" % (prefix, type_) + + +def _render_ARRAY_type(type_: ARRAY, autogen_context: AutogenContext) -> str: + return cast( + str, + _render_type_w_subtype( + type_, autogen_context, "item_type", r"(.+?\()" + ), + ) + + +def _render_Variant_type( + type_: TypeEngine, autogen_context: AutogenContext +) -> str: + base_type, variant_mapping = sqla_compat._get_variant_mapping(type_) + base = _repr_type(base_type, autogen_context, _skip_variants=True) + assert base is not None and base is not False # type: ignore[comparison-overlap] # noqa:E501 + for dialect in sorted(variant_mapping): + typ = variant_mapping[dialect] + base += ".with_variant(%s, %r)" % ( + _repr_type(typ, autogen_context, _skip_variants=True), + dialect, + ) + return base + + +def _render_type_w_subtype( + type_: TypeEngine, + autogen_context: AutogenContext, + attrname: str, + regexp: str, + prefix: Optional[str] = None, +) -> Union[Optional[str], Literal[False]]: + outer_repr = repr(type_) + inner_type = getattr(type_, attrname, None) + if inner_type is None: + return False + + inner_repr = repr(inner_type) + + inner_repr = re.sub(r"([\(\)])", r"\\\1", inner_repr) + sub_type = _repr_type(getattr(type_, attrname), autogen_context) + outer_type = re.sub(regexp + inner_repr, r"\1%s" % sub_type, outer_repr) + + if prefix: + return "%s%s" % (prefix, outer_type) + + mod = type(type_).__module__ + if mod.startswith("sqlalchemy.dialects"): + match = re.match(r"sqlalchemy\.dialects\.(\w+)", mod) + assert match is not None + dname = match.group(1) + return "%s.%s" % (dname, outer_type) + elif mod.startswith("sqlalchemy"): + prefix = _sqlalchemy_autogenerate_prefix(autogen_context) + return "%s%s" % (prefix, outer_type) + else: + return None + + +_constraint_renderers = util.Dispatcher() + + +def _render_constraint( + constraint: Constraint, + autogen_context: AutogenContext, + namespace_metadata: 
Optional[MetaData], +) -> Optional[str]: + try: + renderer = _constraint_renderers.dispatch(constraint) + except ValueError: + util.warn("No renderer is established for object %r" % constraint) + return "[Unknown Python object %r]" % constraint + else: + return renderer(constraint, autogen_context, namespace_metadata) + + +@_constraint_renderers.dispatch_for(sa_schema.PrimaryKeyConstraint) +def _render_primary_key( + constraint: PrimaryKeyConstraint, + autogen_context: AutogenContext, + namespace_metadata: Optional[MetaData], +) -> Optional[str]: + rendered = _user_defined_render("primary_key", constraint, autogen_context) + if rendered is not False: + return rendered + + if not constraint.columns: + return None + + opts = [] + if constraint.name: + opts.append( + ("name", repr(_render_gen_name(autogen_context, constraint.name))) + ) + return "%(prefix)sPrimaryKeyConstraint(%(args)s)" % { + "prefix": _sqlalchemy_autogenerate_prefix(autogen_context), + "args": ", ".join( + [repr(c.name) for c in constraint.columns] + + ["%s=%s" % (kwname, val) for kwname, val in opts] + ), + } + + +def _fk_colspec( + fk: ForeignKey, + metadata_schema: Optional[str], + namespace_metadata: MetaData, +) -> str: + """Implement a 'safe' version of ForeignKey._get_colspec() that + won't fail if the remote table can't be resolved. + + """ + colspec = fk._get_colspec() + tokens = colspec.split(".") + tname, colname = tokens[-2:] + + if metadata_schema is not None and len(tokens) == 2: + table_fullname = "%s.%s" % (metadata_schema, tname) + else: + table_fullname = ".".join(tokens[0:-1]) + + if ( + not fk.link_to_name + and fk.parent is not None + and fk.parent.table is not None + ): + # try to resolve the remote table in order to adjust for column.key. + # the FK constraint needs to be rendered in terms of the column + # name. 
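+        # (annotation added for clarity: Column.key, which appears in
+        # the colspec, can differ from the database-side Column.name
+        # when a model maps the column under a different attribute key.)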
+ + if table_fullname in namespace_metadata.tables: + col = namespace_metadata.tables[table_fullname].c.get(colname) + if col is not None: + colname = _ident(col.name) # type: ignore[assignment] + + colspec = "%s.%s" % (table_fullname, colname) + + return colspec + + +def _populate_render_fk_opts( + constraint: ForeignKeyConstraint, opts: List[Tuple[str, str]] +) -> None: + if constraint.onupdate: + opts.append(("onupdate", repr(constraint.onupdate))) + if constraint.ondelete: + opts.append(("ondelete", repr(constraint.ondelete))) + if constraint.initially: + opts.append(("initially", repr(constraint.initially))) + if constraint.deferrable: + opts.append(("deferrable", repr(constraint.deferrable))) + if constraint.use_alter: + opts.append(("use_alter", repr(constraint.use_alter))) + if constraint.match: + opts.append(("match", repr(constraint.match))) + + +@_constraint_renderers.dispatch_for(sa_schema.ForeignKeyConstraint) +def _render_foreign_key( + constraint: ForeignKeyConstraint, + autogen_context: AutogenContext, + namespace_metadata: MetaData, +) -> Optional[str]: + rendered = _user_defined_render("foreign_key", constraint, autogen_context) + if rendered is not False: + return rendered + + opts = [] + if constraint.name: + opts.append( + ("name", repr(_render_gen_name(autogen_context, constraint.name))) + ) + + _populate_render_fk_opts(constraint, opts) + + apply_metadata_schema = namespace_metadata.schema + return ( + "%(prefix)sForeignKeyConstraint([%(cols)s], " + "[%(refcols)s], %(args)s)" + % { + "prefix": _sqlalchemy_autogenerate_prefix(autogen_context), + "cols": ", ".join( + repr(_ident(f.parent.name)) for f in constraint.elements + ), + "refcols": ", ".join( + repr(_fk_colspec(f, apply_metadata_schema, namespace_metadata)) + for f in constraint.elements + ), + "args": ", ".join( + ["%s=%s" % (kwname, val) for kwname, val in opts] + ), + } + ) + + +@_constraint_renderers.dispatch_for(sa_schema.UniqueConstraint) +def _render_unique_constraint( + constraint: UniqueConstraint, + autogen_context: AutogenContext, + namespace_metadata: Optional[MetaData], +) -> str: + rendered = _user_defined_render("unique", constraint, autogen_context) + if rendered is not False: + return rendered + + return _uq_constraint(constraint, autogen_context, False) + + +@_constraint_renderers.dispatch_for(sa_schema.CheckConstraint) +def _render_check_constraint( + constraint: CheckConstraint, + autogen_context: AutogenContext, + namespace_metadata: Optional[MetaData], +) -> Optional[str]: + rendered = _user_defined_render("check", constraint, autogen_context) + if rendered is not False: + return rendered + + # detect the constraint being part of + # a parent type which is probably in the Table already. + # ideally SQLAlchemy would give us more of a first class + # way to detect this. 
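+    # (annotation added for clarity: a common case is the CHECK
+    # constraint that Boolean or Enum emits on backends without a
+    # native type; it belongs to the type, so it is not rendered here.)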
+ if ( + constraint._create_rule + and hasattr(constraint._create_rule, "target") + and isinstance( + constraint._create_rule.target, + sqltypes.TypeEngine, + ) + ): + return None + opts = [] + if constraint.name: + opts.append( + ("name", repr(_render_gen_name(autogen_context, constraint.name))) + ) + return "%(prefix)sCheckConstraint(%(sqltext)s%(opts)s)" % { + "prefix": _sqlalchemy_autogenerate_prefix(autogen_context), + "opts": ( + ", " + (", ".join("%s=%s" % (k, v) for k, v in opts)) + if opts + else "" + ), + "sqltext": _render_potential_expr( + constraint.sqltext, autogen_context, wrap_in_element=False + ), + } + + +@renderers.dispatch_for(ops.ExecuteSQLOp) +def _execute_sql(autogen_context: AutogenContext, op: ops.ExecuteSQLOp) -> str: + if not isinstance(op.sqltext, str): + raise NotImplementedError( + "Autogenerate rendering of SQL Expression language constructs " + "not supported here; please use a plain SQL string" + ) + return "{prefix}execute({sqltext!r})".format( + prefix=_alembic_autogenerate_prefix(autogen_context), + sqltext=op.sqltext, + ) + + +renderers = default_renderers.branch() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/rewriter.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/rewriter.py new file mode 100644 index 000000000..1d44b5c34 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/autogenerate/rewriter.py @@ -0,0 +1,240 @@ +from __future__ import annotations + +from typing import Any +from typing import Callable +from typing import Iterator +from typing import List +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import Union + +from .. import util +from ..operations import ops + +if TYPE_CHECKING: + from ..operations.ops import AddColumnOp + from ..operations.ops import AlterColumnOp + from ..operations.ops import CreateTableOp + from ..operations.ops import DowngradeOps + from ..operations.ops import MigrateOperation + from ..operations.ops import MigrationScript + from ..operations.ops import ModifyTableOps + from ..operations.ops import OpContainer + from ..operations.ops import UpgradeOps + from ..runtime.migration import MigrationContext + from ..script.revision import _GetRevArg + +ProcessRevisionDirectiveFn = Callable[ + ["MigrationContext", "_GetRevArg", List["MigrationScript"]], None +] + + +class Rewriter: + """A helper object that allows easy 'rewriting' of ops streams. + + The :class:`.Rewriter` object is intended to be passed along + to the + :paramref:`.EnvironmentContext.configure.process_revision_directives` + parameter in an ``env.py`` script. Once constructed, any number + of "rewrites" functions can be associated with it, which will be given + the opportunity to modify the structure without having to have explicit + knowledge of the overall structure. + + The function is passed the :class:`.MigrationContext` object and + ``revision`` tuple that are passed to the :paramref:`.Environment + Context.configure.process_revision_directives` function normally, + and the third argument is an individual directive of the type + noted in the decorator. The function has the choice of returning + a single op directive, which normally can be the directive that + was actually passed, or a new directive to replace it, or a list + of zero or more directives to replace it. + + .. 
seealso:: + + :ref:`autogen_rewriter` - usage example + + """ + + _traverse = util.Dispatcher() + + _chained: Tuple[Union[ProcessRevisionDirectiveFn, Rewriter], ...] = () + + def __init__(self) -> None: + self.dispatch = util.Dispatcher() + + def chain( + self, + other: Union[ + ProcessRevisionDirectiveFn, + Rewriter, + ], + ) -> Rewriter: + """Produce a "chain" of this :class:`.Rewriter` to another. + + This allows two or more rewriters to operate serially on a stream, + e.g.:: + + writer1 = autogenerate.Rewriter() + writer2 = autogenerate.Rewriter() + + + @writer1.rewrites(ops.AddColumnOp) + def add_column_nullable(context, revision, op): + op.column.nullable = True + return op + + + @writer2.rewrites(ops.AddColumnOp) + def add_column_idx(context, revision, op): + idx_op = ops.CreateIndexOp( + "ixc", op.table_name, [op.column.name] + ) + return [op, idx_op] + + writer = writer1.chain(writer2) + + :param other: a :class:`.Rewriter` instance + :return: a new :class:`.Rewriter` that will run the operations + of this writer, then the "other" writer, in succession. + + """ + wr = self.__class__.__new__(self.__class__) + wr.__dict__.update(self.__dict__) + wr._chained += (other,) + return wr + + def rewrites( + self, + operator: Union[ + Type[AddColumnOp], + Type[MigrateOperation], + Type[AlterColumnOp], + Type[CreateTableOp], + Type[ModifyTableOps], + ], + ) -> Callable[..., Any]: + """Register a function as rewriter for a given type. + + The function should receive three arguments, which are + the :class:`.MigrationContext`, a ``revision`` tuple, and + an op directive of the type indicated. E.g.:: + + @writer1.rewrites(ops.AddColumnOp) + def add_column_nullable(context, revision, op): + op.column.nullable = True + return op + + """ + return self.dispatch.dispatch_for(operator) + + def _rewrite( + self, + context: MigrationContext, + revision: _GetRevArg, + directive: MigrateOperation, + ) -> Iterator[MigrateOperation]: + try: + _rewriter = self.dispatch.dispatch(directive) + except ValueError: + _rewriter = None + yield directive + else: + if self in directive._mutations: + yield directive + else: + for r_directive in util.to_list( + _rewriter(context, revision, directive), [] + ): + r_directive._mutations = r_directive._mutations.union( + [self] + ) + yield r_directive + + def __call__( + self, + context: MigrationContext, + revision: _GetRevArg, + directives: List[MigrationScript], + ) -> None: + self.process_revision_directives(context, revision, directives) + for process_revision_directives in self._chained: + process_revision_directives(context, revision, directives) + + @_traverse.dispatch_for(ops.MigrationScript) + def _traverse_script( + self, + context: MigrationContext, + revision: _GetRevArg, + directive: MigrationScript, + ) -> None: + upgrade_ops_list: List[UpgradeOps] = [] + for upgrade_ops in directive.upgrade_ops_list: + ret = self._traverse_for(context, revision, upgrade_ops) + if len(ret) != 1: + raise ValueError( + "Can only return single object for UpgradeOps traverse" + ) + upgrade_ops_list.append(ret[0]) + + directive.upgrade_ops = upgrade_ops_list + + downgrade_ops_list: List[DowngradeOps] = [] + for downgrade_ops in directive.downgrade_ops_list: + ret = self._traverse_for(context, revision, downgrade_ops) + if len(ret) != 1: + raise ValueError( + "Can only return single object for DowngradeOps traverse" + ) + downgrade_ops_list.append(ret[0]) + directive.downgrade_ops = downgrade_ops_list + + @_traverse.dispatch_for(ops.OpContainer) + def _traverse_op_container( + 
self, + context: MigrationContext, + revision: _GetRevArg, + directive: OpContainer, + ) -> None: + self._traverse_list(context, revision, directive.ops) + + @_traverse.dispatch_for(ops.MigrateOperation) + def _traverse_any_directive( + self, + context: MigrationContext, + revision: _GetRevArg, + directive: MigrateOperation, + ) -> None: + pass + + def _traverse_for( + self, + context: MigrationContext, + revision: _GetRevArg, + directive: MigrateOperation, + ) -> Any: + directives = list(self._rewrite(context, revision, directive)) + for directive in directives: + traverser = self._traverse.dispatch(directive) + traverser(self, context, revision, directive) + return directives + + def _traverse_list( + self, + context: MigrationContext, + revision: _GetRevArg, + directives: Any, + ) -> None: + dest = [] + for directive in directives: + dest.extend(self._traverse_for(context, revision, directive)) + + directives[:] = dest + + def process_revision_directives( + self, + context: MigrationContext, + revision: _GetRevArg, + directives: List[MigrationScript], + ) -> None: + self._traverse_list(context, revision, directives) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/command.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/command.py new file mode 100644 index 000000000..8e4854744 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/command.py @@ -0,0 +1,835 @@ +# mypy: allow-untyped-defs, allow-untyped-calls + +from __future__ import annotations + +import os +import pathlib +from typing import List +from typing import Optional +from typing import TYPE_CHECKING +from typing import Union + +from . import autogenerate as autogen +from . import util +from .runtime.environment import EnvironmentContext +from .script import ScriptDirectory +from .util import compat + +if TYPE_CHECKING: + from alembic.config import Config + from alembic.script.base import Script + from alembic.script.revision import _RevIdType + from .runtime.environment import ProcessRevisionDirectiveFn + + +def list_templates(config: Config) -> None: + """List available templates. + + :param config: a :class:`.Config` object. + + """ + + config.print_stdout("Available templates:\n") + for tempname in config._get_template_path().iterdir(): + with (tempname / "README").open() as readme: + synopsis = next(readme).rstrip() + config.print_stdout("%s - %s", tempname.name, synopsis) + + config.print_stdout("\nTemplates are used via the 'init' command, e.g.:") + config.print_stdout("\n alembic init --template generic ./scripts") + + +def init( + config: Config, + directory: str, + template: str = "generic", + package: bool = False, +) -> None: + """Initialize a new scripts directory. + + :param config: a :class:`.Config` object. + + :param directory: string path of the target directory. + + :param template: string name of the migration environment template to + use. + + :param package: when True, write ``__init__.py`` files into the + environment location as well as the versions/ location. 
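+
+    A minimal programmatic call might look like the following (the file
+    and directory names here are illustrative only)::
+
+        from alembic import command
+        from alembic.config import Config
+
+        command.init(Config("alembic.ini"), "migrations")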
+ + """ + + directory_path = pathlib.Path(directory) + if directory_path.exists() and list(directory_path.iterdir()): + raise util.CommandError( + "Directory %s already exists and is not empty" % directory_path + ) + + template_path = config._get_template_path() / template + + if not template_path.exists(): + raise util.CommandError(f"No such template {template_path}") + + # left as os.access() to suit unit test mocking + if not os.access(directory_path, os.F_OK): + with util.status( + f"Creating directory {directory_path.absolute()}", + **config.messaging_opts, + ): + os.makedirs(directory_path) + + versions = directory_path / "versions" + with util.status( + f"Creating directory {versions.absolute()}", + **config.messaging_opts, + ): + os.makedirs(versions) + + if not directory_path.is_absolute(): + # for non-absolute path, state config file in .ini / pyproject + # as relative to the %(here)s token, which is where the config + # file itself would be + + if config._config_file_path is not None: + rel_dir = compat.path_relative_to( + directory_path.absolute(), + config._config_file_path.absolute().parent, + walk_up=True, + ) + ini_script_location_directory = ("%(here)s" / rel_dir).as_posix() + if config._toml_file_path is not None: + rel_dir = compat.path_relative_to( + directory_path.absolute(), + config._toml_file_path.absolute().parent, + walk_up=True, + ) + toml_script_location_directory = ("%(here)s" / rel_dir).as_posix() + + else: + ini_script_location_directory = directory_path.as_posix() + toml_script_location_directory = directory_path.as_posix() + + script = ScriptDirectory(directory_path) + + has_toml = False + + config_file: pathlib.Path | None = None + + for file_path in template_path.iterdir(): + file_ = file_path.name + if file_ == "alembic.ini.mako": + assert config.config_file_name is not None + config_file = pathlib.Path(config.config_file_name).absolute() + if config_file.exists(): + util.msg( + f"File {config_file} already exists, skipping", + **config.messaging_opts, + ) + else: + script._generate_template( + file_path, + config_file, + script_location=ini_script_location_directory, + ) + elif file_ == "pyproject.toml.mako": + has_toml = True + assert config._toml_file_path is not None + toml_path = config._toml_file_path.absolute() + + if toml_path.exists(): + # left as open() to suit unit test mocking + with open(toml_path, "rb") as f: + toml_data = compat.tomllib.load(f) + if "tool" in toml_data and "alembic" in toml_data["tool"]: + + util.msg( + f"File {toml_path} already exists " + "and already has a [tool.alembic] section, " + "skipping", + ) + continue + script._append_template( + file_path, + toml_path, + script_location=toml_script_location_directory, + ) + else: + script._generate_template( + file_path, + toml_path, + script_location=toml_script_location_directory, + ) + + elif file_path.is_file(): + output_file = directory_path / file_ + script._copy_file(file_path, output_file) + + if package: + for path in [ + directory_path.absolute() / "__init__.py", + versions.absolute() / "__init__.py", + ]: + with util.status(f"Adding {path!s}", **config.messaging_opts): + # left as open() to suit unit test mocking + with open(path, "w"): + pass + + assert config_file is not None + + if has_toml: + util.msg( + f"Please edit configuration settings in {toml_path} and " + "configuration/connection/logging " + f"settings in {config_file} before proceeding.", + **config.messaging_opts, + ) + else: + util.msg( + "Please edit configuration/connection/logging " + f"settings 
in {config_file} before proceeding.", + **config.messaging_opts, + ) + + +def revision( + config: Config, + message: Optional[str] = None, + autogenerate: bool = False, + sql: bool = False, + head: str = "head", + splice: bool = False, + branch_label: Optional[_RevIdType] = None, + version_path: Union[str, os.PathLike[str], None] = None, + rev_id: Optional[str] = None, + depends_on: Optional[str] = None, + process_revision_directives: Optional[ProcessRevisionDirectiveFn] = None, +) -> Union[Optional[Script], List[Optional[Script]]]: + """Create a new revision file. + + :param config: a :class:`.Config` object. + + :param message: string message to apply to the revision; this is the + ``-m`` option to ``alembic revision``. + + :param autogenerate: whether or not to autogenerate the script from + the database; this is the ``--autogenerate`` option to + ``alembic revision``. + + :param sql: whether to dump the script out as a SQL string; when specified, + the script is dumped to stdout. This is the ``--sql`` option to + ``alembic revision``. + + :param head: head revision to build the new revision upon as a parent; + this is the ``--head`` option to ``alembic revision``. + + :param splice: whether or not the new revision should be made into a + new head of its own; is required when the given ``head`` is not itself + a head. This is the ``--splice`` option to ``alembic revision``. + + :param branch_label: string label to apply to the branch; this is the + ``--branch-label`` option to ``alembic revision``. + + :param version_path: string symbol identifying a specific version path + from the configuration; this is the ``--version-path`` option to + ``alembic revision``. + + :param rev_id: optional revision identifier to use instead of having + one generated; this is the ``--rev-id`` option to ``alembic revision``. + + :param depends_on: optional list of "depends on" identifiers; this is the + ``--depends-on`` option to ``alembic revision``. + + :param process_revision_directives: this is a callable that takes the + same form as the callable described at + :paramref:`.EnvironmentContext.configure.process_revision_directives`; + will be applied to the structure generated by the revision process + where it can be altered programmatically. Note that unlike all + the other parameters, this option is only available via programmatic + use of :func:`.command.revision`. 
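+
+    A sketch of programmatic use, assuming ``cfg`` is an
+    already-constructed :class:`.Config`::
+
+        from alembic import command
+
+        script = command.revision(
+            cfg, message="add users table", autogenerate=True
+        )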
+ + """ + + script_directory = ScriptDirectory.from_config(config) + + command_args = dict( + message=message, + autogenerate=autogenerate, + sql=sql, + head=head, + splice=splice, + branch_label=branch_label, + version_path=version_path, + rev_id=rev_id, + depends_on=depends_on, + ) + revision_context = autogen.RevisionContext( + config, + script_directory, + command_args, + process_revision_directives=process_revision_directives, + ) + + environment = util.asbool( + config.get_alembic_option("revision_environment") + ) + + if autogenerate: + environment = True + + if sql: + raise util.CommandError( + "Using --sql with --autogenerate does not make any sense" + ) + + def retrieve_migrations(rev, context): + revision_context.run_autogenerate(rev, context) + return [] + + elif environment: + + def retrieve_migrations(rev, context): + revision_context.run_no_autogenerate(rev, context) + return [] + + elif sql: + raise util.CommandError( + "Using --sql with the revision command when " + "revision_environment is not configured does not make any sense" + ) + + if environment: + with EnvironmentContext( + config, + script_directory, + fn=retrieve_migrations, + as_sql=sql, + template_args=revision_context.template_args, + revision_context=revision_context, + ): + script_directory.run_env() + + # the revision_context now has MigrationScript structure(s) present. + # these could theoretically be further processed / rewritten *here*, + # in addition to the hooks present within each run_migrations() call, + # or at the end of env.py run_migrations_online(). + + scripts = [script for script in revision_context.generate_scripts()] + if len(scripts) == 1: + return scripts[0] + else: + return scripts + + +def check(config: "Config") -> None: + """Check if revision command with autogenerate has pending upgrade ops. + + :param config: a :class:`.Config` object. + + .. versionadded:: 1.9.0 + + """ + + script_directory = ScriptDirectory.from_config(config) + + command_args = dict( + message=None, + autogenerate=True, + sql=False, + head="head", + splice=False, + branch_label=None, + version_path=None, + rev_id=None, + depends_on=None, + ) + revision_context = autogen.RevisionContext( + config, + script_directory, + command_args, + ) + + def retrieve_migrations(rev, context): + revision_context.run_autogenerate(rev, context) + return [] + + with EnvironmentContext( + config, + script_directory, + fn=retrieve_migrations, + as_sql=False, + template_args=revision_context.template_args, + revision_context=revision_context, + ): + script_directory.run_env() + + # the revision_context now has MigrationScript structure(s) present. + + migration_script = revision_context.generated_revisions[-1] + diffs = [] + for upgrade_ops in migration_script.upgrade_ops_list: + diffs.extend(upgrade_ops.as_diffs()) + + if diffs: + raise util.AutogenerateDiffsDetected( + f"New upgrade operations detected: {diffs}", + revision_context=revision_context, + diffs=diffs, + ) + else: + config.print_stdout("No new upgrade operations detected.") + + +def merge( + config: Config, + revisions: _RevIdType, + message: Optional[str] = None, + branch_label: Optional[_RevIdType] = None, + rev_id: Optional[str] = None, +) -> Optional[Script]: + """Merge two revisions together. Creates a new migration file. + + :param config: a :class:`.Config` instance + + :param revisions: The revisions to merge. + + :param message: string message to apply to the revision. + + :param branch_label: string label name to apply to the new revision. 
+ + :param rev_id: hardcoded revision identifier instead of generating a new + one. + + .. seealso:: + + :ref:`branches` + + """ + + script = ScriptDirectory.from_config(config) + template_args = { + "config": config # Let templates use config for + # e.g. multiple databases + } + + environment = util.asbool( + config.get_alembic_option("revision_environment") + ) + + if environment: + + def nothing(rev, context): + return [] + + with EnvironmentContext( + config, + script, + fn=nothing, + as_sql=False, + template_args=template_args, + ): + script.run_env() + + return script.generate_revision( + rev_id or util.rev_id(), + message, + refresh=True, + head=revisions, + branch_labels=branch_label, + **template_args, # type:ignore[arg-type] + ) + + +def upgrade( + config: Config, + revision: str, + sql: bool = False, + tag: Optional[str] = None, +) -> None: + """Upgrade to a later version. + + :param config: a :class:`.Config` instance. + + :param revision: string revision target or range for --sql mode. May be + ``"heads"`` to target the most recent revision(s). + + :param sql: if True, use ``--sql`` mode. + + :param tag: an arbitrary "tag" that can be intercepted by custom + ``env.py`` scripts via the :meth:`.EnvironmentContext.get_tag_argument` + method. + + """ + + script = ScriptDirectory.from_config(config) + + starting_rev = None + if ":" in revision: + if not sql: + raise util.CommandError("Range revision not allowed") + starting_rev, revision = revision.split(":", 2) + + def upgrade(rev, context): + return script._upgrade_revs(revision, rev) + + with EnvironmentContext( + config, + script, + fn=upgrade, + as_sql=sql, + starting_rev=starting_rev, + destination_rev=revision, + tag=tag, + ): + script.run_env() + + +def downgrade( + config: Config, + revision: str, + sql: bool = False, + tag: Optional[str] = None, +) -> None: + """Revert to a previous version. + + :param config: a :class:`.Config` instance. + + :param revision: string revision target or range for --sql mode. May + be ``"base"`` to target the first revision. + + :param sql: if True, use ``--sql`` mode. + + :param tag: an arbitrary "tag" that can be intercepted by custom + ``env.py`` scripts via the :meth:`.EnvironmentContext.get_tag_argument` + method. + + """ + + script = ScriptDirectory.from_config(config) + starting_rev = None + if ":" in revision: + if not sql: + raise util.CommandError("Range revision not allowed") + starting_rev, revision = revision.split(":", 2) + elif sql: + raise util.CommandError( + "downgrade with --sql requires :" + ) + + def downgrade(rev, context): + return script._downgrade_revs(revision, rev) + + with EnvironmentContext( + config, + script, + fn=downgrade, + as_sql=sql, + starting_rev=starting_rev, + destination_rev=revision, + tag=tag, + ): + script.run_env() + + +def show(config: Config, rev: str) -> None: + """Show the revision(s) denoted by the given symbol. + + :param config: a :class:`.Config` instance. + + :param rev: string revision target. May be ``"current"`` to show the + revision(s) currently applied in the database. 
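+
+    For example, ``show(config, "heads")`` prints each head revision,
+    while ``show(config, "current")`` runs the migration environment in
+    order to query the database (both calls assume a configured
+    :class:`.Config`).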
+ + """ + + script = ScriptDirectory.from_config(config) + + if rev == "current": + + def show_current(rev, context): + for sc in script.get_revisions(rev): + config.print_stdout(sc.log_entry) + return [] + + with EnvironmentContext(config, script, fn=show_current): + script.run_env() + else: + for sc in script.get_revisions(rev): + config.print_stdout(sc.log_entry) + + +def history( + config: Config, + rev_range: Optional[str] = None, + verbose: bool = False, + indicate_current: bool = False, +) -> None: + """List changeset scripts in chronological order. + + :param config: a :class:`.Config` instance. + + :param rev_range: string revision range. + + :param verbose: output in verbose mode. + + :param indicate_current: indicate current revision. + + """ + base: Optional[str] + head: Optional[str] + script = ScriptDirectory.from_config(config) + if rev_range is not None: + if ":" not in rev_range: + raise util.CommandError( + "History range requires [start]:[end], " "[start]:, or :[end]" + ) + base, head = rev_range.strip().split(":") + else: + base = head = None + + environment = ( + util.asbool(config.get_alembic_option("revision_environment")) + or indicate_current + ) + + def _display_history(config, script, base, head, currents=()): + for sc in script.walk_revisions( + base=base or "base", head=head or "heads" + ): + if indicate_current: + sc._db_current_indicator = sc.revision in currents + + config.print_stdout( + sc.cmd_format( + verbose=verbose, + include_branches=True, + include_doc=True, + include_parents=True, + ) + ) + + def _display_history_w_current(config, script, base, head): + def _display_current_history(rev, context): + if head == "current": + _display_history(config, script, base, rev, rev) + elif base == "current": + _display_history(config, script, rev, head, rev) + else: + _display_history(config, script, base, head, rev) + return [] + + with EnvironmentContext(config, script, fn=_display_current_history): + script.run_env() + + if base == "current" or head == "current" or environment: + _display_history_w_current(config, script, base, head) + else: + _display_history(config, script, base, head) + + +def heads( + config: Config, verbose: bool = False, resolve_dependencies: bool = False +) -> None: + """Show current available heads in the script directory. + + :param config: a :class:`.Config` instance. + + :param verbose: output in verbose mode. + + :param resolve_dependencies: treat dependency version as down revisions. + + """ + + script = ScriptDirectory.from_config(config) + if resolve_dependencies: + heads = script.get_revisions("heads") + else: + heads = script.get_revisions(script.get_heads()) + + for rev in heads: + config.print_stdout( + rev.cmd_format( + verbose, include_branches=True, tree_indicators=False + ) + ) + + +def branches(config: Config, verbose: bool = False) -> None: + """Show current branch points. + + :param config: a :class:`.Config` instance. + + :param verbose: output in verbose mode. + + """ + script = ScriptDirectory.from_config(config) + for sc in script.walk_revisions(): + if sc.is_branch_point: + config.print_stdout( + "%s\n%s\n", + sc.cmd_format(verbose, include_branches=True), + "\n".join( + "%s -> %s" + % ( + " " * len(str(sc.revision)), + rev_obj.cmd_format( + False, include_branches=True, include_doc=verbose + ), + ) + for rev_obj in ( + script.get_revision(rev) for rev in sc.nextrev + ) + ), + ) + + +def current(config: Config, verbose: bool = False) -> None: + """Display the current revision for a database. 
+ + :param config: a :class:`.Config` instance. + + :param verbose: output in verbose mode. + + """ + + script = ScriptDirectory.from_config(config) + + def display_version(rev, context): + if verbose: + config.print_stdout( + "Current revision(s) for %s:", + util.obfuscate_url_pw(context.connection.engine.url), + ) + for rev in script.get_all_current(rev): + config.print_stdout(rev.cmd_format(verbose)) + + return [] + + with EnvironmentContext( + config, script, fn=display_version, dont_mutate=True + ): + script.run_env() + + +def stamp( + config: Config, + revision: _RevIdType, + sql: bool = False, + tag: Optional[str] = None, + purge: bool = False, +) -> None: + """'stamp' the revision table with the given revision; don't + run any migrations. + + :param config: a :class:`.Config` instance. + + :param revision: target revision or list of revisions. May be a list + to indicate stamping of multiple branch heads; may be ``"base"`` + to remove all revisions from the table or ``"heads"`` to stamp the + most recent revision(s). + + .. note:: this parameter is called "revisions" in the command line + interface. + + :param sql: use ``--sql`` mode + + :param tag: an arbitrary "tag" that can be intercepted by custom + ``env.py`` scripts via the :class:`.EnvironmentContext.get_tag_argument` + method. + + :param purge: delete all entries in the version table before stamping. + + """ + + script = ScriptDirectory.from_config(config) + + if sql: + destination_revs = [] + starting_rev = None + for _revision in util.to_list(revision): + if ":" in _revision: + srev, _revision = _revision.split(":", 2) + + if starting_rev != srev: + if starting_rev is None: + starting_rev = srev + else: + raise util.CommandError( + "Stamp operation with --sql only supports a " + "single starting revision at a time" + ) + destination_revs.append(_revision) + else: + destination_revs = util.to_list(revision) + + def do_stamp(rev, context): + return script._stamp_revs(util.to_tuple(destination_revs), rev) + + with EnvironmentContext( + config, + script, + fn=do_stamp, + as_sql=sql, + starting_rev=starting_rev if sql else None, + destination_rev=util.to_tuple(destination_revs), + tag=tag, + purge=purge, + ): + script.run_env() + + +def edit(config: Config, rev: str) -> None: + """Edit revision script(s) using $EDITOR. + + :param config: a :class:`.Config` instance. + + :param rev: target revision. + + """ + + script = ScriptDirectory.from_config(config) + + if rev == "current": + + def edit_current(rev, context): + if not rev: + raise util.CommandError("No current revisions") + for sc in script.get_revisions(rev): + util.open_in_editor(sc.path) + return [] + + with EnvironmentContext(config, script, fn=edit_current): + script.run_env() + else: + revs = script.get_revisions(rev) + if not revs: + raise util.CommandError( + "No revision files indicated by symbol '%s'" % rev + ) + for sc in revs: + assert sc + util.open_in_editor(sc.path) + + +def ensure_version(config: Config, sql: bool = False) -> None: + """Create the alembic version table if it doesn't exist already . + + :param config: a :class:`.Config` instance. + + :param sql: use ``--sql`` mode. + + .. 
versionadded:: 1.7.6 + + """ + + script = ScriptDirectory.from_config(config) + + def do_ensure_version(rev, context): + context._ensure_version_table() + return [] + + with EnvironmentContext( + config, + script, + fn=do_ensure_version, + as_sql=sql, + ): + script.run_env() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/config.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/config.py new file mode 100644 index 000000000..c1196657c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/config.py @@ -0,0 +1,1005 @@ +from __future__ import annotations + +from argparse import ArgumentParser +from argparse import Namespace +from configparser import ConfigParser +import inspect +import os +from pathlib import Path +import re +import sys +from typing import Any +from typing import cast +from typing import Dict +from typing import Mapping +from typing import Optional +from typing import overload +from typing import Protocol +from typing import Sequence +from typing import TextIO +from typing import Union + +from typing_extensions import TypedDict + +from . import __version__ +from . import command +from . import util +from .util import compat +from .util.pyfiles import _preserving_path_as_str + + +class Config: + r"""Represent an Alembic configuration. + + Within an ``env.py`` script, this is available + via the :attr:`.EnvironmentContext.config` attribute, + which in turn is available at ``alembic.context``:: + + from alembic import context + + some_param = context.config.get_main_option("my option") + + When invoking Alembic programmatically, a new + :class:`.Config` can be created by passing + the name of an .ini file to the constructor:: + + from alembic.config import Config + alembic_cfg = Config("/path/to/yourapp/alembic.ini") + + With a :class:`.Config` object, you can then + run Alembic commands programmatically using the directives + in :mod:`alembic.command`. + + The :class:`.Config` object can also be constructed without + a filename. Values can be set programmatically, and + new sections will be created as needed:: + + from alembic.config import Config + alembic_cfg = Config() + alembic_cfg.set_main_option("script_location", "myapp:migrations") + alembic_cfg.set_main_option("sqlalchemy.url", "postgresql://foo/bar") + alembic_cfg.set_section_option("mysection", "foo", "bar") + + .. warning:: + + When using programmatic configuration, make sure the + ``env.py`` file in use is compatible with the target configuration; + including that the call to Python ``logging.fileConfig()`` is + omitted if the programmatic configuration doesn't actually include + logging directives. + + For passing non-string values to environments, such as connections and + engines, use the :attr:`.Config.attributes` dictionary:: + + with engine.begin() as connection: + alembic_cfg.attributes['connection'] = connection + command.upgrade(alembic_cfg, "head") + + :param file\_: name of the .ini file to open if an ``alembic.ini`` is + to be used. This should refer to the ``alembic.ini`` file, either as + a filename or a full path to the file. This filename if passed must refer + to an **ini file in ConfigParser format** only. + + :param toml\_file: name of the pyproject.toml file to open if a + ``pyproject.toml`` file is to be used. This should refer to the + ``pyproject.toml`` file, either as a filename or a full path to the file. + This file must be in toml format. 
Both :paramref:`.Config.file\_` and + :paramref:`.Config.toml\_file` may be passed simultaneously, or + exclusively. + + .. versionadded:: 1.16.0 + + :param ini_section: name of the main Alembic section within the + .ini file + :param output_buffer: optional file-like input buffer which + will be passed to the :class:`.MigrationContext` - used to redirect + the output of "offline generation" when using Alembic programmatically. + :param stdout: buffer where the "print" output of commands will be sent. + Defaults to ``sys.stdout``. + + :param config_args: A dictionary of keys and values that will be used + for substitution in the alembic config file, as well as the pyproject.toml + file, depending on which / both are used. The dictionary as given is + **copied** to two new, independent dictionaries, stored locally under the + attributes ``.config_args`` and ``.toml_args``. Both of these + dictionaries will also be populated with the replacement variable + ``%(here)s``, which refers to the location of the .ini and/or .toml file + as appropriate. + + :param attributes: optional dictionary of arbitrary Python keys/values, + which will be populated into the :attr:`.Config.attributes` dictionary. + + .. seealso:: + + :ref:`connection_sharing` + + """ + + def __init__( + self, + file_: Union[str, os.PathLike[str], None] = None, + toml_file: Union[str, os.PathLike[str], None] = None, + ini_section: str = "alembic", + output_buffer: Optional[TextIO] = None, + stdout: TextIO = sys.stdout, + cmd_opts: Optional[Namespace] = None, + config_args: Mapping[str, Any] = util.immutabledict(), + attributes: Optional[Dict[str, Any]] = None, + ) -> None: + """Construct a new :class:`.Config`""" + self.config_file_name = ( + _preserving_path_as_str(file_) if file_ else None + ) + self.toml_file_name = ( + _preserving_path_as_str(toml_file) if toml_file else None + ) + self.config_ini_section = ini_section + self.output_buffer = output_buffer + self.stdout = stdout + self.cmd_opts = cmd_opts + self.config_args = dict(config_args) + self.toml_args = dict(config_args) + if attributes: + self.attributes.update(attributes) + + cmd_opts: Optional[Namespace] = None + """The command-line options passed to the ``alembic`` script. + + Within an ``env.py`` script this can be accessed via the + :attr:`.EnvironmentContext.config` attribute. + + .. seealso:: + + :meth:`.EnvironmentContext.get_x_argument` + + """ + + config_file_name: Optional[str] = None + """Filesystem path to the .ini file in use.""" + + toml_file_name: Optional[str] = None + """Filesystem path to the pyproject.toml file in use. + + .. versionadded:: 1.16.0 + + """ + + @property + def _config_file_path(self) -> Optional[Path]: + if self.config_file_name is None: + return None + return Path(self.config_file_name) + + @property + def _toml_file_path(self) -> Optional[Path]: + if self.toml_file_name is None: + return None + return Path(self.toml_file_name) + + config_ini_section: str = None # type:ignore[assignment] + """Name of the config file section to read basic configuration + from. Defaults to ``alembic``, that is the ``[alembic]`` section + of the .ini file. This value is modified using the ``-n/--name`` + option to the Alembic runner. + + """ + + @util.memoized_property + def attributes(self) -> Dict[str, Any]: + """A Python dictionary for storage of additional state. + + + This is a utility dictionary which can include not just strings but + engines, connections, schema objects, or anything else. 
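To make the constructor parameters above concrete, here is a brief sketch of building a `Config` against both file types; the file names and the URL are assumptions for illustration:

```python
from alembic.config import Config

# the ini and pyproject files may be passed together or individually
cfg = Config(file_="alembic.ini", toml_file="pyproject.toml")
cfg.set_main_option("sqlalchemy.url", "sqlite:///example.db")  # assumed URL
```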
+
+        Use this to pass objects into an env.py script, such as passing
+        a :class:`sqlalchemy.engine.base.Connection` when calling
+        commands from :mod:`alembic.command` programmatically.
+
+        .. seealso::
+
+            :ref:`connection_sharing`
+
+            :paramref:`.Config.attributes`
+
+        """
+        return {}
+
+    def print_stdout(self, text: str, *arg: Any) -> None:
+        """Render a message to standard out.
+
+        When :meth:`.Config.print_stdout` is called with additional args,
+        those arguments will be formatted against the provided text,
+        otherwise we simply output the provided text verbatim.
+
+        This is a no-op when the ``quiet`` messaging option is enabled.
+
+        e.g.::
+
+            >>> config.print_stdout('Some text %s', 'arg')
+            Some text arg
+
+        """
+
+        if arg:
+            output = str(text) % arg
+        else:
+            output = str(text)
+
+        util.write_outstream(self.stdout, output, "\n", **self.messaging_opts)
+
+    @util.memoized_property
+    def file_config(self) -> ConfigParser:
+        """Return the underlying ``ConfigParser`` object.
+
+        Direct access to the .ini file is available here,
+        though the :meth:`.Config.get_section` and
+        :meth:`.Config.get_main_option`
+        methods provide a possibly simpler interface.
+
+        """
+
+        if self._config_file_path:
+            here = self._config_file_path.absolute().parent
+        else:
+            here = Path()
+        self.config_args["here"] = here.as_posix()
+        file_config = ConfigParser(self.config_args)
+        if self._config_file_path:
+            compat.read_config_parser(file_config, [self._config_file_path])
+        else:
+            file_config.add_section(self.config_ini_section)
+        return file_config
+
+    @util.memoized_property
+    def toml_alembic_config(self) -> Mapping[str, Any]:
+        """Return a dictionary of the [tool.alembic] section from
+        pyproject.toml"""
+
+        if self._toml_file_path and self._toml_file_path.exists():
+
+            here = self._toml_file_path.absolute().parent
+            self.toml_args["here"] = here.as_posix()
+
+            with open(self._toml_file_path, "rb") as f:
+                toml_data = compat.tomllib.load(f)
+            data = toml_data.get("tool", {}).get("alembic", {})
+            if not isinstance(data, dict):
+                raise util.CommandError("Incorrect TOML format")
+            return data
+
+        else:
+            return {}
+
+    def get_template_directory(self) -> str:
+        """Return the directory where Alembic setup templates are found.
+
+        This method is used by the alembic ``init`` and ``list_templates``
+        commands.
+
+        """
+        import alembic
+
+        package_dir = Path(alembic.__file__).absolute().parent
+        return str(package_dir / "templates")
+
+    def _get_template_path(self) -> Path:
+        """Return the directory where Alembic setup templates are found.
+
+        This method is used by the alembic ``init`` and ``list_templates``
+        commands.
+
+        .. versionadded:: 1.16.0
+
+        """
+        return Path(self.get_template_directory())
+
+    @overload
+    def get_section(
+        self, name: str, default: None = ...
+    ) -> Optional[Dict[str, str]]: ...
+
+    # "default" here could also be a TypeVar
+    # _MT = TypeVar("_MT", bound=Mapping[str, str]),
+    # however mypy wasn't handling that correctly (pyright was)
+    @overload
+    def get_section(
+        self, name: str, default: Dict[str, str]
+    ) -> Dict[str, str]: ...
+
+    @overload
+    def get_section(
+        self, name: str, default: Mapping[str, str]
+    ) -> Union[Dict[str, str], Mapping[str, str]]: ...
+
+    def get_section(
+        self, name: str, default: Optional[Mapping[str, str]] = None
+    ) -> Optional[Mapping[str, str]]:
+        """Return all the configuration options from a given .ini file section
+        as a dictionary.
+ + If the given section does not exist, the value of ``default`` + is returned, which is expected to be a dictionary or other mapping. + + """ + if not self.file_config.has_section(name): + return default + + return dict(self.file_config.items(name)) + + def set_main_option(self, name: str, value: str) -> None: + """Set an option programmatically within the 'main' section. + + This overrides whatever was in the .ini file. + + :param name: name of the value + + :param value: the value. Note that this value is passed to + ``ConfigParser.set``, which supports variable interpolation using + pyformat (e.g. ``%(some_value)s``). A raw percent sign not part of + an interpolation symbol must therefore be escaped, e.g. ``%%``. + The given value may refer to another value already in the file + using the interpolation format. + + """ + self.set_section_option(self.config_ini_section, name, value) + + def remove_main_option(self, name: str) -> None: + self.file_config.remove_option(self.config_ini_section, name) + + def set_section_option(self, section: str, name: str, value: str) -> None: + """Set an option programmatically within the given section. + + The section is created if it doesn't exist already. + The value here will override whatever was in the .ini + file. + + Does **NOT** consume from the pyproject.toml file. + + .. seealso:: + + :meth:`.Config.get_alembic_option` - includes pyproject support + + :param section: name of the section + + :param name: name of the value + + :param value: the value. Note that this value is passed to + ``ConfigParser.set``, which supports variable interpolation using + pyformat (e.g. ``%(some_value)s``). A raw percent sign not part of + an interpolation symbol must therefore be escaped, e.g. ``%%``. + The given value may refer to another value already in the file + using the interpolation format. + + """ + + if not self.file_config.has_section(section): + self.file_config.add_section(section) + self.file_config.set(section, name, value) + + def get_section_option( + self, section: str, name: str, default: Optional[str] = None + ) -> Optional[str]: + """Return an option from the given section of the .ini file.""" + if not self.file_config.has_section(section): + raise util.CommandError( + "No config file %r found, or file has no " + "'[%s]' section" % (self.config_file_name, section) + ) + if self.file_config.has_option(section, name): + return self.file_config.get(section, name) + else: + return default + + @overload + def get_main_option(self, name: str, default: str) -> str: ... + + @overload + def get_main_option( + self, name: str, default: Optional[str] = None + ) -> Optional[str]: ... + + def get_main_option( + self, name: str, default: Optional[str] = None + ) -> Optional[str]: + """Return an option from the 'main' section of the .ini file. + + This defaults to being a key from the ``[alembic]`` + section, unless the ``-n/--name`` flag were used to + indicate a different section. + + Does **NOT** consume from the pyproject.toml file. + + .. seealso:: + + :meth:`.Config.get_alembic_option` - includes pyproject support + + """ + return self.get_section_option(self.config_ini_section, name, default) + + @overload + def get_alembic_option(self, name: str, default: str) -> str: ... + + @overload + def get_alembic_option( + self, name: str, default: Optional[str] = None + ) -> Optional[str]: ... 
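A short round-trip sketch of the accessors defined above, using a file-less `Config` so that sections are created on demand (the section and option names are hypothetical):

```python
from alembic.config import Config

cfg = Config()  # no ini file; the main [alembic] section is created lazily
cfg.set_main_option("script_location", "myapp:migrations")
cfg.set_section_option("mysection", "foo", "bar")

print(cfg.get_main_option("script_location"))      # myapp:migrations
print(cfg.get_section("mysection", {})["foo"])     # bar
print(cfg.get_main_option("missing", "fallback"))  # fallback default
```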
+ + def get_alembic_option( + self, name: str, default: Optional[str] = None + ) -> Union[None, str, list[str], dict[str, str], list[dict[str, str]]]: + """Return an option from the "[alembic]" or "[tool.alembic]" section + of the configparser-parsed .ini file (e.g. ``alembic.ini``) or + toml-parsed ``pyproject.toml`` file. + + The value returned is expected to be None, string, list of strings, + or dictionary of strings. Within each type of string value, the + ``%(here)s`` token is substituted out with the absolute path of the + ``pyproject.toml`` file, as are other tokens which are extracted from + the :paramref:`.Config.config_args` dictionary. + + Searches always prioritize the configparser namespace first, before + searching in the toml namespace. + + If Alembic was run using the ``-n/--name`` flag to indicate an + alternate main section name, this is taken into account **only** for + the configparser-parsed .ini file. The section name in toml is always + ``[tool.alembic]``. + + + .. versionadded:: 1.16.0 + + """ + + if self.file_config.has_option(self.config_ini_section, name): + return self.file_config.get(self.config_ini_section, name) + else: + return self._get_toml_config_value(name, default=default) + + def _get_toml_config_value( + self, name: str, default: Optional[Any] = None + ) -> Union[None, str, list[str], dict[str, str], list[dict[str, str]]]: + USE_DEFAULT = object() + value: Union[None, str, list[str], dict[str, str]] = ( + self.toml_alembic_config.get(name, USE_DEFAULT) + ) + if value is USE_DEFAULT: + return default + if value is not None: + if isinstance(value, str): + value = value % (self.toml_args) + elif isinstance(value, list): + if value and isinstance(value[0], dict): + value = [ + {k: v % (self.toml_args) for k, v in dv.items()} + for dv in value + ] + else: + value = cast( + "list[str]", [v % (self.toml_args) for v in value] + ) + elif isinstance(value, dict): + value = cast( + "dict[str, str]", + {k: v % (self.toml_args) for k, v in value.items()}, + ) + else: + raise util.CommandError("unsupported TOML value type") + return value + + @util.memoized_property + def messaging_opts(self) -> MessagingOptions: + """The messaging options.""" + return cast( + MessagingOptions, + util.immutabledict( + {"quiet": getattr(self.cmd_opts, "quiet", False)} + ), + ) + + def _get_file_separator_char(self, *names: str) -> Optional[str]: + for name in names: + separator = self.get_main_option(name) + if separator is not None: + break + else: + return None + + split_on_path = { + "space": " ", + "newline": "\n", + "os": os.pathsep, + ":": ":", + ";": ";", + } + + try: + sep = split_on_path[separator] + except KeyError as ke: + raise ValueError( + "'%s' is not a valid value for %s; " + "expected 'space', 'newline', 'os', ':', ';'" + % (separator, name) + ) from ke + else: + if name == "version_path_separator": + util.warn_deprecated( + "The version_path_separator configuration parameter " + "is deprecated; please use path_separator" + ) + return sep + + def get_version_locations_list(self) -> Optional[list[str]]: + + version_locations_str = self.file_config.get( + self.config_ini_section, "version_locations", fallback=None + ) + + if version_locations_str: + split_char = self._get_file_separator_char( + "path_separator", "version_path_separator" + ) + + if split_char is None: + + # legacy behaviour for backwards compatibility + util.warn_deprecated( + "No path_separator found in configuration; " + "falling back to legacy splitting on spaces/commas " + "for version_locations. 
Consider adding " + "path_separator=os to Alembic config." + ) + + _split_on_space_comma = re.compile(r", *|(?: +)") + return _split_on_space_comma.split(version_locations_str) + else: + return [ + x.strip() + for x in version_locations_str.split(split_char) + if x + ] + else: + return cast( + "list[str]", + self._get_toml_config_value("version_locations", None), + ) + + def get_prepend_sys_paths_list(self) -> Optional[list[str]]: + prepend_sys_path_str = self.file_config.get( + self.config_ini_section, "prepend_sys_path", fallback=None + ) + + if prepend_sys_path_str: + split_char = self._get_file_separator_char("path_separator") + + if split_char is None: + + # legacy behaviour for backwards compatibility + util.warn_deprecated( + "No path_separator found in configuration; " + "falling back to legacy splitting on spaces, commas, " + "and colons for prepend_sys_path. Consider adding " + "path_separator=os to Alembic config." + ) + + _split_on_space_comma_colon = re.compile(r", *|(?: +)|\:") + return _split_on_space_comma_colon.split(prepend_sys_path_str) + else: + return [ + x.strip() + for x in prepend_sys_path_str.split(split_char) + if x + ] + else: + return cast( + "list[str]", + self._get_toml_config_value("prepend_sys_path", None), + ) + + def get_hooks_list(self) -> list[PostWriteHookConfig]: + + hooks: list[PostWriteHookConfig] = [] + + if not self.file_config.has_section("post_write_hooks"): + toml_hook_config = cast( + "list[dict[str, str]]", + self._get_toml_config_value("post_write_hooks", []), + ) + for cfg in toml_hook_config: + opts = dict(cfg) + opts["_hook_name"] = opts.pop("name") + hooks.append(opts) + + else: + _split_on_space_comma = re.compile(r", *|(?: +)") + ini_hook_config = self.get_section("post_write_hooks", {}) + names = _split_on_space_comma.split( + ini_hook_config.get("hooks", "") + ) + + for name in names: + if not name: + continue + opts = { + key[len(name) + 1 :]: ini_hook_config[key] + for key in ini_hook_config + if key.startswith(name + ".") + } + + opts["_hook_name"] = name + hooks.append(opts) + + return hooks + + +PostWriteHookConfig = Mapping[str, str] + + +class MessagingOptions(TypedDict, total=False): + quiet: bool + + +class CommandFunction(Protocol): + """A function that may be registered in the CLI as an alembic command. + It must be a named function and it must accept a :class:`.Config` object + as the first argument. + + .. versionadded:: 1.15.3 + + """ + + __name__: str + + def __call__(self, config: Config, *args: Any, **kwargs: Any) -> Any: ... + + +class CommandLine: + """Provides the command line interface to Alembic.""" + + def __init__(self, prog: Optional[str] = None) -> None: + self._generate_args(prog) + + _KWARGS_OPTS = { + "template": ( + "-t", + "--template", + dict( + default="generic", + type=str, + help="Setup template for use with 'init'", + ), + ), + "message": ( + "-m", + "--message", + dict(type=str, help="Message string to use with 'revision'"), + ), + "sql": ( + "--sql", + dict( + action="store_true", + help="Don't emit SQL to database - dump to " + "standard output/file instead. 
See docs on " + "offline mode.", + ), + ), + "tag": ( + "--tag", + dict( + type=str, + help="Arbitrary 'tag' name - can be used by " + "custom env.py scripts.", + ), + ), + "head": ( + "--head", + dict( + type=str, + help="Specify head revision or @head " + "to base new revision on.", + ), + ), + "splice": ( + "--splice", + dict( + action="store_true", + help="Allow a non-head revision as the 'head' to splice onto", + ), + ), + "depends_on": ( + "--depends-on", + dict( + action="append", + help="Specify one or more revision identifiers " + "which this revision should depend on.", + ), + ), + "rev_id": ( + "--rev-id", + dict( + type=str, + help="Specify a hardcoded revision id instead of " + "generating one", + ), + ), + "version_path": ( + "--version-path", + dict( + type=str, + help="Specify specific path from config for version file", + ), + ), + "branch_label": ( + "--branch-label", + dict( + type=str, + help="Specify a branch label to apply to the new revision", + ), + ), + "verbose": ( + "-v", + "--verbose", + dict(action="store_true", help="Use more verbose output"), + ), + "resolve_dependencies": ( + "--resolve-dependencies", + dict( + action="store_true", + help="Treat dependency versions as down revisions", + ), + ), + "autogenerate": ( + "--autogenerate", + dict( + action="store_true", + help="Populate revision script with candidate " + "migration operations, based on comparison " + "of database to model.", + ), + ), + "rev_range": ( + "-r", + "--rev-range", + dict( + action="store", + help="Specify a revision range; format is [start]:[end]", + ), + ), + "indicate_current": ( + "-i", + "--indicate-current", + dict( + action="store_true", + help="Indicate the current revision", + ), + ), + "purge": ( + "--purge", + dict( + action="store_true", + help="Unconditionally erase the version table before stamping", + ), + ), + "package": ( + "--package", + dict( + action="store_true", + help="Write empty __init__.py files to the " + "environment and version locations", + ), + ), + } + _POSITIONAL_OPTS = { + "directory": dict(help="location of scripts directory"), + "revision": dict( + help="revision identifier", + ), + "revisions": dict( + nargs="+", + help="one or more revisions, or 'heads' for all heads", + ), + } + _POSITIONAL_TRANSLATIONS: dict[Any, dict[str, str]] = { + command.stamp: {"revision": "revisions"} + } + + def _generate_args(self, prog: Optional[str]) -> None: + parser = ArgumentParser(prog=prog) + + parser.add_argument( + "--version", action="version", version="%%(prog)s %s" % __version__ + ) + parser.add_argument( + "-c", + "--config", + action="append", + help="Alternate config file; defaults to value of " + 'ALEMBIC_CONFIG environment variable, or "alembic.ini". ' + "May also refer to pyproject.toml file. May be specified twice " + "to reference both files separately", + ) + parser.add_argument( + "-n", + "--name", + type=str, + default="alembic", + help="Name of section in .ini file to use for Alembic config " + "(only applies to configparser config, not toml)", + ) + parser.add_argument( + "-x", + action="append", + help="Additional arguments consumed by " + "custom env.py scripts, e.g. 
-x " + "setting1=somesetting -x setting2=somesetting", + ) + parser.add_argument( + "--raiseerr", + action="store_true", + help="Raise a full stack trace on error", + ) + parser.add_argument( + "-q", + "--quiet", + action="store_true", + help="Do not log to std output.", + ) + + self.subparsers = parser.add_subparsers() + alembic_commands = ( + cast(CommandFunction, fn) + for fn in (getattr(command, name) for name in dir(command)) + if ( + inspect.isfunction(fn) + and fn.__name__[0] != "_" + and fn.__module__ == "alembic.command" + ) + ) + + for fn in alembic_commands: + self.register_command(fn) + + self.parser = parser + + def register_command(self, fn: CommandFunction) -> None: + """Registers a function as a CLI subcommand. The subcommand name + matches the function name, the arguments are extracted from the + signature and the help text is read from the docstring. + + .. versionadded:: 1.15.3 + + .. seealso:: + + :ref:`custom_commandline` + """ + + positional, kwarg, help_text = self._inspect_function(fn) + + subparser = self.subparsers.add_parser(fn.__name__, help=help_text) + subparser.set_defaults(cmd=(fn, positional, kwarg)) + + for arg in kwarg: + if arg in self._KWARGS_OPTS: + kwarg_opt = self._KWARGS_OPTS[arg] + args, opts = kwarg_opt[0:-1], kwarg_opt[-1] + subparser.add_argument(*args, **opts) # type:ignore + + for arg in positional: + opts = self._POSITIONAL_OPTS.get(arg, {}) + subparser.add_argument(arg, **opts) # type:ignore + + def _inspect_function(self, fn: CommandFunction) -> tuple[Any, Any, str]: + spec = compat.inspect_getfullargspec(fn) + if spec[3] is not None: + positional = spec[0][1 : -len(spec[3])] + kwarg = spec[0][-len(spec[3]) :] + else: + positional = spec[0][1:] + kwarg = [] + + if fn in self._POSITIONAL_TRANSLATIONS: + positional = [ + self._POSITIONAL_TRANSLATIONS[fn].get(name, name) + for name in positional + ] + + # parse first line(s) of helptext without a line break + help_ = fn.__doc__ + if help_: + help_lines = [] + for line in help_.split("\n"): + if not line.strip(): + break + else: + help_lines.append(line.strip()) + else: + help_lines = [] + + help_text = " ".join(help_lines) + + return positional, kwarg, help_text + + def run_cmd(self, config: Config, options: Namespace) -> None: + fn, positional, kwarg = options.cmd + + try: + fn( + config, + *[getattr(options, k, None) for k in positional], + **{k: getattr(options, k, None) for k in kwarg}, + ) + except util.CommandError as e: + if options.raiseerr: + raise + else: + util.err(str(e), **config.messaging_opts) + + def _inis_from_config(self, options: Namespace) -> tuple[str, str]: + names = options.config + + alembic_config_env = os.environ.get("ALEMBIC_CONFIG") + if ( + alembic_config_env + and os.path.basename(alembic_config_env) == "pyproject.toml" + ): + default_pyproject_toml = alembic_config_env + default_alembic_config = "alembic.ini" + elif alembic_config_env: + default_pyproject_toml = "pyproject.toml" + default_alembic_config = alembic_config_env + else: + default_alembic_config = "alembic.ini" + default_pyproject_toml = "pyproject.toml" + + if not names: + return default_pyproject_toml, default_alembic_config + + toml = ini = None + + for name in names: + if os.path.basename(name) == "pyproject.toml": + if toml is not None: + raise util.CommandError( + "pyproject.toml indicated more than once" + ) + toml = name + else: + if ini is not None: + raise util.CommandError( + "only one ini file may be indicated" + ) + ini = name + + return toml if toml else default_pyproject_toml, ( + ini if 
ini else default_alembic_config + ) + + def main(self, argv: Optional[Sequence[str]] = None) -> None: + """Executes the command line with the provided arguments.""" + options = self.parser.parse_args(argv) + if not hasattr(options, "cmd"): + # see http://bugs.python.org/issue9253, argparse + # behavior changed incompatibly in py3.3 + self.parser.error("too few arguments") + else: + toml, ini = self._inis_from_config(options) + cfg = Config( + file_=ini, + toml_file=toml, + ini_section=options.name, + cmd_opts=options, + ) + self.run_cmd(cfg, options) + + +def main( + argv: Optional[Sequence[str]] = None, + prog: Optional[str] = None, + **kwargs: Any, +) -> None: + """The console runner function for Alembic.""" + + CommandLine(prog=prog).main(argv=argv) + + +if __name__ == "__main__": + main() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/context.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/context.py new file mode 100644 index 000000000..758fca875 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/context.py @@ -0,0 +1,5 @@ +from .runtime.environment import EnvironmentContext + +# create proxy functions for +# each method on the EnvironmentContext class. +EnvironmentContext.create_module_class_proxy(globals(), locals()) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/context.pyi b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/context.pyi new file mode 100644 index 000000000..9117c31e8 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/context.pyi @@ -0,0 +1,856 @@ +# ### this file stubs are generated by tools/write_pyi.py - do not edit ### +# ### imports are manually managed +from __future__ import annotations + +from typing import Any +from typing import Callable +from typing import Collection +from typing import Dict +from typing import Iterable +from typing import List +from typing import Literal +from typing import Mapping +from typing import MutableMapping +from typing import Optional +from typing import overload +from typing import Sequence +from typing import TextIO +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from typing_extensions import ContextManager + +if TYPE_CHECKING: + from sqlalchemy.engine.base import Connection + from sqlalchemy.engine.url import URL + from sqlalchemy.sql import Executable + from sqlalchemy.sql.schema import Column + from sqlalchemy.sql.schema import FetchedValue + from sqlalchemy.sql.schema import MetaData + from sqlalchemy.sql.schema import SchemaItem + from sqlalchemy.sql.type_api import TypeEngine + + from .autogenerate.api import AutogenContext + from .config import Config + from .operations.ops import MigrationScript + from .runtime.migration import _ProxyTransaction + from .runtime.migration import MigrationContext + from .runtime.migration import MigrationInfo + from .script import ScriptDirectory + +### end imports ### + +def begin_transaction() -> ( + Union[_ProxyTransaction, ContextManager[None, Optional[bool]]] +): + """Return a context manager that will + enclose an operation within a "transaction", + as defined by the environment's offline + and transactional DDL settings. 
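Since `CommandLine.register_command()` above derives the argument parser from a function's signature and docstring, adding a custom subcommand takes only a few lines; the `ping` command here is hypothetical:

```python
from alembic.config import CommandLine, Config

def ping(config: Config) -> None:
    """Print a trivial status message."""
    config.print_stdout("pong")

cli = CommandLine(prog="myalembic")
cli.register_command(ping)  # now available as: myalembic ping
cli.main(argv=["ping"])
```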
+ + e.g.:: + + with context.begin_transaction(): + context.run_migrations() + + :meth:`.begin_transaction` is intended to + "do the right thing" regardless of + calling context: + + * If :meth:`.is_transactional_ddl` is ``False``, + returns a "do nothing" context manager + which otherwise produces no transactional + state or directives. + * If :meth:`.is_offline_mode` is ``True``, + returns a context manager that will + invoke the :meth:`.DefaultImpl.emit_begin` + and :meth:`.DefaultImpl.emit_commit` + methods, which will produce the string + directives ``BEGIN`` and ``COMMIT`` on + the output stream, as rendered by the + target backend (e.g. SQL Server would + emit ``BEGIN TRANSACTION``). + * Otherwise, calls :meth:`sqlalchemy.engine.Connection.begin` + on the current online connection, which + returns a :class:`sqlalchemy.engine.Transaction` + object. This object demarcates a real + transaction and is itself a context manager, + which will roll back if an exception + is raised. + + Note that a custom ``env.py`` script which + has more specific transactional needs can of course + manipulate the :class:`~sqlalchemy.engine.Connection` + directly to produce transactional state in "online" + mode. + + """ + +config: Config + +def configure( + connection: Optional[Connection] = None, + url: Union[str, URL, None] = None, + dialect_name: Optional[str] = None, + dialect_opts: Optional[Dict[str, Any]] = None, + transactional_ddl: Optional[bool] = None, + transaction_per_migration: bool = False, + output_buffer: Optional[TextIO] = None, + starting_rev: Optional[str] = None, + tag: Optional[str] = None, + template_args: Optional[Dict[str, Any]] = None, + render_as_batch: bool = False, + target_metadata: Union[MetaData, Sequence[MetaData], None] = None, + include_name: Optional[ + Callable[ + [ + Optional[str], + Literal[ + "schema", + "table", + "column", + "index", + "unique_constraint", + "foreign_key_constraint", + ], + MutableMapping[ + Literal[ + "schema_name", + "table_name", + "schema_qualified_table_name", + ], + Optional[str], + ], + ], + bool, + ] + ] = None, + include_object: Optional[ + Callable[ + [ + SchemaItem, + Optional[str], + Literal[ + "schema", + "table", + "column", + "index", + "unique_constraint", + "foreign_key_constraint", + ], + bool, + Optional[SchemaItem], + ], + bool, + ] + ] = None, + include_schemas: bool = False, + process_revision_directives: Optional[ + Callable[ + [ + MigrationContext, + Union[str, Iterable[Optional[str]], Iterable[str]], + List[MigrationScript], + ], + None, + ] + ] = None, + compare_type: Union[ + bool, + Callable[ + [ + MigrationContext, + Column[Any], + Column[Any], + TypeEngine[Any], + TypeEngine[Any], + ], + Optional[bool], + ], + ] = True, + compare_server_default: Union[ + bool, + Callable[ + [ + MigrationContext, + Column[Any], + Column[Any], + Optional[str], + Optional[FetchedValue], + Optional[str], + ], + Optional[bool], + ], + ] = False, + render_item: Optional[ + Callable[[str, Any, AutogenContext], Union[str, Literal[False]]] + ] = None, + literal_binds: bool = False, + upgrade_token: str = "upgrades", + downgrade_token: str = "downgrades", + alembic_module_prefix: str = "op.", + sqlalchemy_module_prefix: str = "sa.", + user_module_prefix: Optional[str] = None, + on_version_apply: Optional[ + Callable[ + [ + MigrationContext, + MigrationInfo, + Collection[Any], + Mapping[str, Any], + ], + None, + ] + ] = None, + **kw: Any, +) -> None: + """Configure a :class:`.MigrationContext` within this + :class:`.EnvironmentContext` which will 
provide database + connectivity and other configuration to a series of + migration scripts. + + Many methods on :class:`.EnvironmentContext` require that + this method has been called in order to function, as they + ultimately need to have database access or at least access + to the dialect in use. Those which do are documented as such. + + The important thing needed by :meth:`.configure` is a + means to determine what kind of database dialect is in use. + An actual connection to that database is needed only if + the :class:`.MigrationContext` is to be used in + "online" mode. + + If the :meth:`.is_offline_mode` function returns ``True``, + then no connection is needed here. Otherwise, the + ``connection`` parameter should be present as an + instance of :class:`sqlalchemy.engine.Connection`. + + This function is typically called from the ``env.py`` + script within a migration environment. It can be called + multiple times for an invocation. The most recent + :class:`~sqlalchemy.engine.Connection` + for which it was called is the one that will be operated upon + by the next call to :meth:`.run_migrations`. + + General parameters: + + :param connection: a :class:`~sqlalchemy.engine.Connection` + to use + for SQL execution in "online" mode. When present, is also + used to determine the type of dialect in use. + :param url: a string database url, or a + :class:`sqlalchemy.engine.url.URL` object. + The type of dialect to be used will be derived from this if + ``connection`` is not passed. + :param dialect_name: string name of a dialect, such as + "postgresql", "mssql", etc. + The type of dialect to be used will be derived from this if + ``connection`` and ``url`` are not passed. + :param dialect_opts: dictionary of options to be passed to dialect + constructor. + :param transactional_ddl: Force the usage of "transactional" + DDL on or off; + this otherwise defaults to whether or not the dialect in + use supports it. + :param transaction_per_migration: if True, nest each migration script + in a transaction rather than the full series of migrations to + run. + :param output_buffer: a file-like object that will be used + for textual output + when the ``--sql`` option is used to generate SQL scripts. + Defaults to + ``sys.stdout`` if not passed here and also not present on + the :class:`.Config` + object. The value here overrides that of the :class:`.Config` + object. + :param output_encoding: when using ``--sql`` to generate SQL + scripts, apply this encoding to the string output. + :param literal_binds: when using ``--sql`` to generate SQL + scripts, pass through the ``literal_binds`` flag to the compiler + so that any literal values that would ordinarily be bound + parameters are converted to plain strings. + + .. warning:: Dialects can typically only handle simple datatypes + like strings and numbers for auto-literal generation. Datatypes + like dates, intervals, and others may still require manual + formatting, typically using :meth:`.Operations.inline_literal`. + + .. note:: the ``literal_binds`` flag is ignored on SQLAlchemy + versions prior to 0.8 where this feature is not supported. + + .. seealso:: + + :meth:`.Operations.inline_literal` + + :param starting_rev: Override the "starting revision" argument + when using ``--sql`` mode. + :param tag: a string tag for usage by custom ``env.py`` scripts. + Set via the ``--tag`` option, can be overridden here. 
+        :param template_args: dictionary of template arguments which
+         will be added to the template argument environment when
+         running the "revision" command. Note that the script environment
+         is only run within the "revision" command if the --autogenerate
+         option is used, or if the option "revision_environment=true"
+         is present in the alembic.ini file.
+
+        :param version_table: The name of the Alembic version table.
+         The default is ``'alembic_version'``.
+        :param version_table_schema: Optional schema to place version
+         table within.
+        :param version_table_pk: boolean, whether the Alembic version table
+         should use a primary key constraint for the "value" column; this
+         only takes effect when the table is first created.
+         Defaults to True; setting to False should not be necessary and is
+         here for backwards compatibility reasons.
+        :param on_version_apply: a callable or collection of callables to be
+         run for each migration step.
+         The callables will be run in the order they are given, once for
+         each migration step, after the respective operation has been
+         applied but before its transaction is finalized.
+         Each callable accepts no positional arguments and the following
+         keyword arguments:
+
+         * ``ctx``: the :class:`.MigrationContext` running the migration,
+         * ``step``: a :class:`.MigrationInfo` representing the
+           step currently being applied,
+         * ``heads``: a collection of version strings representing the
+           current heads,
+         * ``run_args``: the ``**kwargs`` passed to :meth:`.run_migrations`.
+
+        Parameters specific to the autogenerate feature, when
+        ``alembic revision`` is run with the ``--autogenerate`` feature:
+
+        :param target_metadata: a :class:`sqlalchemy.schema.MetaData`
+         object, or a sequence of :class:`~sqlalchemy.schema.MetaData`
+         objects, that will be consulted during autogeneration.
+         The tables present in each :class:`~sqlalchemy.schema.MetaData`
+         will be compared against what is locally available on the target
+         :class:`~sqlalchemy.engine.Connection`
+         to produce candidate upgrade/downgrade operations.
+        :param compare_type: Indicates type comparison behavior during
+         an autogenerate operation. Defaults to ``True`` turning on type
+         comparison, which has good accuracy on most backends. See
+         :ref:`compare_types` for an example as well as information on other
+         type comparison options. Set to ``False`` to disable type
+         comparison. A callable can also be passed to provide custom type
+         comparison, see :ref:`compare_types` for additional details.
+
+         .. versionchanged:: 1.12.0 The default value of
+            :paramref:`.EnvironmentContext.configure.compare_type` has been
+            changed to ``True``.
+
+         .. seealso::
+
+            :ref:`compare_types`
+
+            :paramref:`.EnvironmentContext.configure.compare_server_default`
+
+        :param compare_server_default: Indicates server default comparison
+         behavior during an autogenerate operation. Defaults to ``False``,
+         which disables server default comparison. Set to ``True`` to turn on
+         server default comparison, which has varied accuracy depending on
+         backend.
+
+         To customize server default comparison behavior, a callable may
+         be specified which can filter server default comparisons during an
+         autogenerate operation. The format of this
The format of this + callable is:: + + def my_compare_server_default(context, inspected_column, + metadata_column, inspected_default, metadata_default, + rendered_metadata_default): + # return True if the defaults are different, + # False if not, or None to allow the default implementation + # to compare these defaults + return None + + context.configure( + # ... + compare_server_default = my_compare_server_default + ) + + ``inspected_column`` is a dictionary structure as returned by + :meth:`sqlalchemy.engine.reflection.Inspector.get_columns`, whereas + ``metadata_column`` is a :class:`sqlalchemy.schema.Column` from + the local model environment. + + A return value of ``None`` indicates to allow default server default + comparison + to proceed. Note that some backends such as Postgresql actually + execute + the two defaults on the database side to compare for equivalence. + + .. seealso:: + + :paramref:`.EnvironmentContext.configure.compare_type` + + :param include_name: A callable function which is given + the chance to return ``True`` or ``False`` for any database reflected + object based on its name, including database schema names when + the :paramref:`.EnvironmentContext.configure.include_schemas` flag + is set to ``True``. + + The function accepts the following positional arguments: + + * ``name``: the name of the object, such as schema name or table name. + Will be ``None`` when indicating the default schema name of the + database connection. + * ``type``: a string describing the type of object; currently + ``"schema"``, ``"table"``, ``"column"``, ``"index"``, + ``"unique_constraint"``, or ``"foreign_key_constraint"`` + * ``parent_names``: a dictionary of "parent" object names, that are + relative to the name being given. Keys in this dictionary may + include: ``"schema_name"``, ``"table_name"`` or + ``"schema_qualified_table_name"``. + + E.g.:: + + def include_name(name, type_, parent_names): + if type_ == "schema": + return name in ["schema_one", "schema_two"] + else: + return True + + context.configure( + # ... + include_schemas = True, + include_name = include_name + ) + + .. seealso:: + + :ref:`autogenerate_include_hooks` + + :paramref:`.EnvironmentContext.configure.include_object` + + :paramref:`.EnvironmentContext.configure.include_schemas` + + + :param include_object: A callable function which is given + the chance to return ``True`` or ``False`` for any object, + indicating if the given object should be considered in the + autogenerate sweep. + + The function accepts the following positional arguments: + + * ``object``: a :class:`~sqlalchemy.schema.SchemaItem` object such + as a :class:`~sqlalchemy.schema.Table`, + :class:`~sqlalchemy.schema.Column`, + :class:`~sqlalchemy.schema.Index` + :class:`~sqlalchemy.schema.UniqueConstraint`, + or :class:`~sqlalchemy.schema.ForeignKeyConstraint` object + * ``name``: the name of the object. This is typically available + via ``object.name``. + * ``type``: a string describing the type of object; currently + ``"table"``, ``"column"``, ``"index"``, ``"unique_constraint"``, + or ``"foreign_key_constraint"`` + * ``reflected``: ``True`` if the given object was produced based on + table reflection, ``False`` if it's from a local :class:`.MetaData` + object. + * ``compare_to``: the object being compared against, if available, + else ``None``. 
+ + E.g.:: + + def include_object(object, name, type_, reflected, compare_to): + if (type_ == "column" and + not reflected and + object.info.get("skip_autogenerate", False)): + return False + else: + return True + + context.configure( + # ... + include_object = include_object + ) + + For the use case of omitting specific schemas from a target database + when :paramref:`.EnvironmentContext.configure.include_schemas` is + set to ``True``, the :attr:`~sqlalchemy.schema.Table.schema` + attribute can be checked for each :class:`~sqlalchemy.schema.Table` + object passed to the hook, however it is much more efficient + to filter on schemas before reflection of objects takes place + using the :paramref:`.EnvironmentContext.configure.include_name` + hook. + + .. seealso:: + + :ref:`autogenerate_include_hooks` + + :paramref:`.EnvironmentContext.configure.include_name` + + :paramref:`.EnvironmentContext.configure.include_schemas` + + :param render_as_batch: if True, commands which alter elements + within a table will be placed under a ``with batch_alter_table():`` + directive, so that batch migrations will take place. + + .. seealso:: + + :ref:`batch_migrations` + + :param include_schemas: If True, autogenerate will scan across + all schemas located by the SQLAlchemy + :meth:`~sqlalchemy.engine.reflection.Inspector.get_schema_names` + method, and include all differences in tables found across all + those schemas. When using this option, you may want to also + use the :paramref:`.EnvironmentContext.configure.include_name` + parameter to specify a callable which + can filter the tables/schemas that get included. + + .. seealso:: + + :ref:`autogenerate_include_hooks` + + :paramref:`.EnvironmentContext.configure.include_name` + + :paramref:`.EnvironmentContext.configure.include_object` + + :param render_item: Callable that can be used to override how + any schema item, i.e. column, constraint, type, + etc., is rendered for autogenerate. The callable receives a + string describing the type of object, the object, and + the autogen context. If it returns False, the + default rendering method will be used. If it returns None, + the item will not be rendered in the context of a Table + construct, that is, can be used to skip columns or constraints + within op.create_table():: + + def my_render_column(type_, col, autogen_context): + if type_ == "column" and isinstance(col, MySpecialCol): + return repr(col) + else: + return False + + context.configure( + # ... + render_item = my_render_column + ) + + Available values for the type string include: ``"column"``, + ``"primary_key"``, ``"foreign_key"``, ``"unique"``, ``"check"``, + ``"type"``, ``"server_default"``. + + .. seealso:: + + :ref:`autogen_render_types` + + :param upgrade_token: When autogenerate completes, the text of the + candidate upgrade operations will be present in this template + variable when ``script.py.mako`` is rendered. Defaults to + ``upgrades``. + :param downgrade_token: When autogenerate completes, the text of the + candidate downgrade operations will be present in this + template variable when ``script.py.mako`` is rendered. Defaults to + ``downgrades``. + + :param alembic_module_prefix: When autogenerate refers to Alembic + :mod:`alembic.operations` constructs, this prefix will be used + (i.e. ``op.create_table``) Defaults to "``op.``". + Can be ``None`` to indicate no prefix. 
+ + :param sqlalchemy_module_prefix: When autogenerate refers to + SQLAlchemy + :class:`~sqlalchemy.schema.Column` or type classes, this prefix + will be used + (i.e. ``sa.Column("somename", sa.Integer)``) Defaults to "``sa.``". + Can be ``None`` to indicate no prefix. + Note that when dialect-specific types are rendered, autogenerate + will render them using the dialect module name, i.e. ``mssql.BIT()``, + ``postgresql.UUID()``. + + :param user_module_prefix: When autogenerate refers to a SQLAlchemy + type (e.g. :class:`.TypeEngine`) where the module name is not + under the ``sqlalchemy`` namespace, this prefix will be used + within autogenerate. If left at its default of + ``None``, the ``__module__`` attribute of the type is used to + render the import module. It's a good practice to set this + and to have all custom types be available from a fixed module space, + in order to future-proof migration files against reorganizations + in modules. + + .. seealso:: + + :ref:`autogen_module_prefix` + + :param process_revision_directives: a callable function that will + be passed a structure representing the end result of an autogenerate + or plain "revision" operation, which can be manipulated to affect + how the ``alembic revision`` command ultimately outputs new + revision scripts. The structure of the callable is:: + + def process_revision_directives(context, revision, directives): + pass + + The ``directives`` parameter is a Python list containing + a single :class:`.MigrationScript` directive, which represents + the revision file to be generated. This list as well as its + contents may be freely modified to produce any set of commands. + The section :ref:`customizing_revision` shows an example of + doing this. The ``context`` parameter is the + :class:`.MigrationContext` in use, + and ``revision`` is a tuple of revision identifiers representing the + current revision of the database. + + The callable is invoked at all times when the ``--autogenerate`` + option is passed to ``alembic revision``. If ``--autogenerate`` + is not passed, the callable is invoked only if the + ``revision_environment`` variable is set to True in the Alembic + configuration, in which case the given ``directives`` collection + will contain empty :class:`.UpgradeOps` and :class:`.DowngradeOps` + collections for ``.upgrade_ops`` and ``.downgrade_ops``. The + ``--autogenerate`` option itself can be inferred by inspecting + ``context.config.cmd_opts.autogenerate``. + + The callable function may optionally be an instance of + a :class:`.Rewriter` object. This is a helper object that + assists in the production of autogenerate-stream rewriter functions. + + .. seealso:: + + :ref:`customizing_revision` + + :ref:`autogen_rewriter` + + :paramref:`.command.revision.process_revision_directives` + + Parameters specific to individual backends: + + :param mssql_batch_separator: The "batch separator" which will + be placed between each statement when generating offline SQL Server + migrations. Defaults to ``GO``. Note this is in addition to the + customary semicolon ``;`` at the end of each statement; SQL Server + considers the "batch separator" to denote the end of an + individual statement execution, and cannot group certain + dependent operations in one step. + :param oracle_batch_separator: The "batch separator" which will + be placed between each statement when generating offline + Oracle migrations. Defaults to ``/``. Oracle doesn't add a + semicolon between statements like most other backends. 
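Pulling the parameters above together, the body of a minimal "online" `env.py` might look like the following sketch; the database URL is an assumption, and the `context` proxy only functions while Alembic is actually invoking the script:

```python
from sqlalchemy import create_engine
from alembic import context

engine = create_engine("sqlite:///example.db")  # assumed URL

with engine.connect() as connection:
    # bind the migration context to the live connection
    context.configure(connection=connection, transaction_per_migration=True)
    with context.begin_transaction():
        context.run_migrations()
```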
+ + """ + +def execute( + sql: Union[Executable, str], + execution_options: Optional[Dict[str, Any]] = None, +) -> None: + """Execute the given SQL using the current change context. + + The behavior of :meth:`.execute` is the same + as that of :meth:`.Operations.execute`. Please see that + function's documentation for full detail including + caveats and limitations. + + This function requires that a :class:`.MigrationContext` has + first been made available via :meth:`.configure`. + + """ + +def get_bind() -> Connection: + """Return the current 'bind'. + + In "online" mode, this is the + :class:`sqlalchemy.engine.Connection` currently being used + to emit SQL to the database. + + This function requires that a :class:`.MigrationContext` + has first been made available via :meth:`.configure`. + + """ + +def get_context() -> MigrationContext: + """Return the current :class:`.MigrationContext` object. + + If :meth:`.EnvironmentContext.configure` has not been + called yet, raises an exception. + + """ + +def get_head_revision() -> Union[str, Tuple[str, ...], None]: + """Return the hex identifier of the 'head' script revision. + + If the script directory has multiple heads, this + method raises a :class:`.CommandError`; + :meth:`.EnvironmentContext.get_head_revisions` should be preferred. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + .. seealso:: :meth:`.EnvironmentContext.get_head_revisions` + + """ + +def get_head_revisions() -> Union[str, Tuple[str, ...], None]: + """Return the hex identifier of the 'heads' script revision(s). + + This returns a tuple containing the version number of all + heads in the script directory. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + """ + +def get_revision_argument() -> Union[str, Tuple[str, ...], None]: + """Get the 'destination' revision argument. + + This is typically the argument passed to the + ``upgrade`` or ``downgrade`` command. + + If it was specified as ``head``, the actual + version number is returned; if specified + as ``base``, ``None`` is returned. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + """ + +def get_starting_revision_argument() -> Union[str, Tuple[str, ...], None]: + """Return the 'starting revision' argument, + if the revision was passed using ``start:end``. + + This is only meaningful in "offline" mode. + Returns ``None`` if no value is available + or was configured. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + """ + +def get_tag_argument() -> Optional[str]: + """Return the value passed for the ``--tag`` argument, if any. + + The ``--tag`` argument is not used directly by Alembic, + but is available for custom ``env.py`` configurations that + wish to use it; particularly for offline generation scripts + that wish to generate tagged filenames. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + .. seealso:: + + :meth:`.EnvironmentContext.get_x_argument` - a newer and more + open ended system of extending ``env.py`` scripts via the command + line. + + """ + +@overload +def get_x_argument(as_dictionary: Literal[False]) -> List[str]: ... +@overload +def get_x_argument(as_dictionary: Literal[True]) -> Dict[str, str]: ... +@overload +def get_x_argument( + as_dictionary: bool = ..., +) -> Union[List[str], Dict[str, str]]: + """Return the value(s) passed for the ``-x`` argument, if any. 
+ + The ``-x`` argument is an open ended flag that allows any user-defined + value or values to be passed on the command line, then available + here for consumption by a custom ``env.py`` script. + + The return value is a list, returned directly from the ``argparse`` + structure. If ``as_dictionary=True`` is passed, the ``x`` arguments + are parsed using ``key=value`` format into a dictionary that is + then returned. If there is no ``=`` in the argument, value is an empty + string. + + .. versionchanged:: 1.13.1 Support ``as_dictionary=True`` when + arguments are passed without the ``=`` symbol. + + For example, to support passing a database URL on the command line, + the standard ``env.py`` script can be modified like this:: + + cmd_line_url = context.get_x_argument( + as_dictionary=True).get('dbname') + if cmd_line_url: + engine = create_engine(cmd_line_url) + else: + engine = engine_from_config( + config.get_section(config.config_ini_section), + prefix='sqlalchemy.', + poolclass=pool.NullPool) + + This then takes effect by running the ``alembic`` script as:: + + alembic -x dbname=postgresql://user:pass@host/dbname upgrade head + + This function does not require that the :class:`.MigrationContext` + has been configured. + + .. seealso:: + + :meth:`.EnvironmentContext.get_tag_argument` + + :attr:`.Config.cmd_opts` + + """ + +def is_offline_mode() -> bool: + """Return True if the current migrations environment + is running in "offline mode". + + This is ``True`` or ``False`` depending + on the ``--sql`` flag passed. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + """ + +def is_transactional_ddl() -> bool: + """Return True if the context is configured to expect a + transactional DDL capable backend. + + This defaults to the type of database in use, and + can be overridden by the ``transactional_ddl`` argument + to :meth:`.configure` + + This function requires that a :class:`.MigrationContext` + has first been made available via :meth:`.configure`. + + """ + +def run_migrations(**kw: Any) -> None: + """Run migrations as determined by the current command line + configuration + as well as versioning information present (or not) in the current + database connection (if one is present). + + The function accepts optional ``**kw`` arguments. If these are + passed, they are sent directly to the ``upgrade()`` and + ``downgrade()`` + functions within each target revision file. By modifying the + ``script.py.mako`` file so that the ``upgrade()`` and ``downgrade()`` + functions accept arguments, parameters can be passed here so that + contextual information, usually information to identify a particular + database in use, can be passed from a custom ``env.py`` script + to the migration functions. + + This function requires that a :class:`.MigrationContext` has + first been made available via :meth:`.configure`. + + """ + +script: ScriptDirectory + +def static_output(text: str) -> None: + """Emit text directly to the "offline" SQL stream. + + Typically this is for emitting comments that + start with --. The statement is not treated + as a SQL execution, no ; or batch separator + is added, etc. + + """ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/__init__.py new file mode 100644 index 000000000..f2f72b3dd --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/__init__.py @@ -0,0 +1,6 @@ +from . import mssql +from . 
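Combining the two mechanisms just described, ``-x`` arguments and ``run_migrations(**kw)``, gives a simple way to parameterize migrations per run. A sketch under the stated assumption that ``script.py.mako`` has been edited so ``upgrade()``/``downgrade()`` accept a ``tenant`` parameter (the flag name ``tenant`` is hypothetical):

```python
# env.py fragment: read a custom -x flag and forward it into each
# revision file, e.g.  alembic -x tenant=acme upgrade head
from alembic import context

x_args = context.get_x_argument(as_dictionary=True)
tenant = x_args.get("tenant", "public")  # default value is illustrative

with context.begin_transaction():
    context.run_migrations(tenant=tenant)
```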
import mysql +from . import oracle +from . import postgresql +from . import sqlite +from .impl import DefaultImpl as DefaultImpl diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/_autogen.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/_autogen.py new file mode 100644 index 000000000..74715b18a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/_autogen.py @@ -0,0 +1,329 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +from typing import Any +from typing import ClassVar +from typing import Dict +from typing import Generic +from typing import NamedTuple +from typing import Optional +from typing import Sequence +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +from sqlalchemy.sql.schema import Constraint +from sqlalchemy.sql.schema import ForeignKeyConstraint +from sqlalchemy.sql.schema import Index +from sqlalchemy.sql.schema import UniqueConstraint +from typing_extensions import TypeGuard + +from .. import util +from ..util import sqla_compat + +if TYPE_CHECKING: + from typing import Literal + + from alembic.autogenerate.api import AutogenContext + from alembic.ddl.impl import DefaultImpl + +CompareConstraintType = Union[Constraint, Index] + +_C = TypeVar("_C", bound=CompareConstraintType) + +_clsreg: Dict[str, Type[_constraint_sig]] = {} + + +class ComparisonResult(NamedTuple): + status: Literal["equal", "different", "skip"] + message: str + + @property + def is_equal(self) -> bool: + return self.status == "equal" + + @property + def is_different(self) -> bool: + return self.status == "different" + + @property + def is_skip(self) -> bool: + return self.status == "skip" + + @classmethod + def Equal(cls) -> ComparisonResult: + """the constraints are equal.""" + return cls("equal", "The two constraints are equal") + + @classmethod + def Different(cls, reason: Union[str, Sequence[str]]) -> ComparisonResult: + """the constraints are different for the provided reason(s).""" + return cls("different", ", ".join(util.to_list(reason))) + + @classmethod + def Skip(cls, reason: Union[str, Sequence[str]]) -> ComparisonResult: + """the constraint cannot be compared for the provided reason(s). + + The message is logged, but the constraints will be otherwise + considered equal, meaning that no migration command will be + generated. + """ + return cls("skip", ", ".join(util.to_list(reason))) + + +class _constraint_sig(Generic[_C]): + const: _C + + _sig: Tuple[Any, ...] 
+ name: Optional[sqla_compat._ConstraintNameDefined] + + impl: DefaultImpl + + _is_index: ClassVar[bool] = False + _is_fk: ClassVar[bool] = False + _is_uq: ClassVar[bool] = False + + _is_metadata: bool + + def __init_subclass__(cls) -> None: + cls._register() + + @classmethod + def _register(cls): + raise NotImplementedError() + + def __init__( + self, is_metadata: bool, impl: DefaultImpl, const: _C + ) -> None: + raise NotImplementedError() + + def compare_to_reflected( + self, other: _constraint_sig[Any] + ) -> ComparisonResult: + assert self.impl is other.impl + assert self._is_metadata + assert not other._is_metadata + + return self._compare_to_reflected(other) + + def _compare_to_reflected( + self, other: _constraint_sig[_C] + ) -> ComparisonResult: + raise NotImplementedError() + + @classmethod + def from_constraint( + cls, is_metadata: bool, impl: DefaultImpl, constraint: _C + ) -> _constraint_sig[_C]: + # these could be cached by constraint/impl, however, if the + # constraint is modified in place, then the sig is wrong. the mysql + # impl currently does this, and if we fixed that we can't be sure + # someone else might do it too, so play it safe. + sig = _clsreg[constraint.__visit_name__](is_metadata, impl, constraint) + return sig + + def md_name_to_sql_name(self, context: AutogenContext) -> Optional[str]: + return sqla_compat._get_constraint_final_name( + self.const, context.dialect + ) + + @util.memoized_property + def is_named(self): + return sqla_compat._constraint_is_named(self.const, self.impl.dialect) + + @util.memoized_property + def unnamed(self) -> Tuple[Any, ...]: + return self._sig + + @util.memoized_property + def unnamed_no_options(self) -> Tuple[Any, ...]: + raise NotImplementedError() + + @util.memoized_property + def _full_sig(self) -> Tuple[Any, ...]: + return (self.name,) + self.unnamed + + def __eq__(self, other) -> bool: + return self._full_sig == other._full_sig + + def __ne__(self, other) -> bool: + return self._full_sig != other._full_sig + + def __hash__(self) -> int: + return hash(self._full_sig) + + +class _uq_constraint_sig(_constraint_sig[UniqueConstraint]): + _is_uq = True + + @classmethod + def _register(cls) -> None: + _clsreg["unique_constraint"] = cls + + is_unique = True + + def __init__( + self, + is_metadata: bool, + impl: DefaultImpl, + const: UniqueConstraint, + ) -> None: + self.impl = impl + self.const = const + self.name = sqla_compat.constraint_name_or_none(const.name) + self._sig = tuple(sorted([col.name for col in const.columns])) + self._is_metadata = is_metadata + + @property + def column_names(self) -> Tuple[str, ...]: + return tuple([col.name for col in self.const.columns]) + + def _compare_to_reflected( + self, other: _constraint_sig[_C] + ) -> ComparisonResult: + assert self._is_metadata + metadata_obj = self + conn_obj = other + + assert is_uq_sig(conn_obj) + return self.impl.compare_unique_constraint( + metadata_obj.const, conn_obj.const + ) + + +class _ix_constraint_sig(_constraint_sig[Index]): + _is_index = True + + name: sqla_compat._ConstraintName + + @classmethod + def _register(cls) -> None: + _clsreg["index"] = cls + + def __init__( + self, is_metadata: bool, impl: DefaultImpl, const: Index + ) -> None: + self.impl = impl + self.const = const + self.name = const.name + self.is_unique = bool(const.unique) + self._is_metadata = is_metadata + + def _compare_to_reflected( + self, other: _constraint_sig[_C] + ) -> ComparisonResult: + assert self._is_metadata + metadata_obj = self + conn_obj = other + + assert 
is_index_sig(conn_obj) + return self.impl.compare_indexes(metadata_obj.const, conn_obj.const) + + @util.memoized_property + def has_expressions(self): + return sqla_compat.is_expression_index(self.const) + + @util.memoized_property + def column_names(self) -> Tuple[str, ...]: + return tuple([col.name for col in self.const.columns]) + + @util.memoized_property + def column_names_optional(self) -> Tuple[Optional[str], ...]: + return tuple( + [getattr(col, "name", None) for col in self.const.expressions] + ) + + @util.memoized_property + def is_named(self): + return True + + @util.memoized_property + def unnamed(self): + return (self.is_unique,) + self.column_names_optional + + +class _fk_constraint_sig(_constraint_sig[ForeignKeyConstraint]): + _is_fk = True + + @classmethod + def _register(cls) -> None: + _clsreg["foreign_key_constraint"] = cls + + def __init__( + self, + is_metadata: bool, + impl: DefaultImpl, + const: ForeignKeyConstraint, + ) -> None: + self._is_metadata = is_metadata + + self.impl = impl + self.const = const + + self.name = sqla_compat.constraint_name_or_none(const.name) + + ( + self.source_schema, + self.source_table, + self.source_columns, + self.target_schema, + self.target_table, + self.target_columns, + onupdate, + ondelete, + deferrable, + initially, + ) = sqla_compat._fk_spec(const) + + self._sig: Tuple[Any, ...] = ( + self.source_schema, + self.source_table, + tuple(self.source_columns), + self.target_schema, + self.target_table, + tuple(self.target_columns), + ) + ( + ( + (None if onupdate.lower() == "no action" else onupdate.lower()) + if onupdate + else None + ), + ( + (None if ondelete.lower() == "no action" else ondelete.lower()) + if ondelete + else None + ), + # convert initially + deferrable into one three-state value + ( + "initially_deferrable" + if initially and initially.lower() == "deferred" + else "deferrable" if deferrable else "not deferrable" + ), + ) + + @util.memoized_property + def unnamed_no_options(self): + return ( + self.source_schema, + self.source_table, + tuple(self.source_columns), + self.target_schema, + self.target_table, + tuple(self.target_columns), + ) + + +def is_index_sig(sig: _constraint_sig) -> TypeGuard[_ix_constraint_sig]: + return sig._is_index + + +def is_uq_sig(sig: _constraint_sig) -> TypeGuard[_uq_constraint_sig]: + return sig._is_uq + + +def is_fk_sig(sig: _constraint_sig) -> TypeGuard[_fk_constraint_sig]: + return sig._is_fk diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/base.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/base.py new file mode 100644 index 000000000..ad2847eb2 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/base.py @@ -0,0 +1,364 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +import functools +from typing import Optional +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy import exc +from sqlalchemy import Integer +from sqlalchemy import types as sqltypes +from sqlalchemy.ext.compiler import compiles +from sqlalchemy.schema import Column +from sqlalchemy.schema import DDLElement +from sqlalchemy.sql.elements import quoted_name + +from ..util.sqla_compat import _columns_for_constraint # noqa +from ..util.sqla_compat import _find_columns # noqa +from ..util.sqla_compat import _fk_spec # noqa +from ..util.sqla_compat import _is_type_bound # noqa +from ..util.sqla_compat import 
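The ``_clsreg`` dispatch used by ``from_constraint`` above is a small self-registration pattern: each ``_constraint_sig`` subclass records itself under a visit-name key via ``__init_subclass__``, and the factory looks up ``constraint.__visit_name__``. A simplified standalone sketch of the same pattern (not Alembic API; all names below are illustrative):

```python
from typing import Dict, Type

_registry: Dict[str, Type["Sig"]] = {}

class Sig:
    visit_name: str = ""

    def __init_subclass__(cls) -> None:
        # mirrors _constraint_sig.__init_subclass__ -> cls._register()
        _registry[cls.visit_name] = cls

class IndexSig(Sig):
    visit_name = "index"

class UniqueSig(Sig):
    visit_name = "unique_constraint"

def from_tag(tag: str) -> Sig:
    # mirrors _clsreg[constraint.__visit_name__](...)
    return _registry[tag]()

assert isinstance(from_tag("index"), IndexSig)
```

The real classes also deliberately avoid caching signatures per constraint: as the comment in ``from_constraint`` notes, a constraint mutated in place would otherwise leave a stale signature behind.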
_table_for_constraint # noqa + +if TYPE_CHECKING: + from typing import Any + + from sqlalchemy import Computed + from sqlalchemy import Identity + from sqlalchemy.sql.compiler import Compiled + from sqlalchemy.sql.compiler import DDLCompiler + from sqlalchemy.sql.elements import TextClause + from sqlalchemy.sql.functions import Function + from sqlalchemy.sql.schema import FetchedValue + from sqlalchemy.sql.type_api import TypeEngine + + from .impl import DefaultImpl + +_ServerDefault = Union["TextClause", "FetchedValue", "Function[Any]", str] + + +class AlterTable(DDLElement): + """Represent an ALTER TABLE statement. + + Only the string name and optional schema name of the table + is required, not a full Table object. + + """ + + def __init__( + self, + table_name: str, + schema: Optional[Union[quoted_name, str]] = None, + ) -> None: + self.table_name = table_name + self.schema = schema + + +class RenameTable(AlterTable): + def __init__( + self, + old_table_name: str, + new_table_name: Union[quoted_name, str], + schema: Optional[Union[quoted_name, str]] = None, + ) -> None: + super().__init__(old_table_name, schema=schema) + self.new_table_name = new_table_name + + +class AlterColumn(AlterTable): + def __init__( + self, + name: str, + column_name: str, + schema: Optional[str] = None, + existing_type: Optional[TypeEngine] = None, + existing_nullable: Optional[bool] = None, + existing_server_default: Optional[_ServerDefault] = None, + existing_comment: Optional[str] = None, + ) -> None: + super().__init__(name, schema=schema) + self.column_name = column_name + self.existing_type = ( + sqltypes.to_instance(existing_type) + if existing_type is not None + else None + ) + self.existing_nullable = existing_nullable + self.existing_server_default = existing_server_default + self.existing_comment = existing_comment + + +class ColumnNullable(AlterColumn): + def __init__( + self, name: str, column_name: str, nullable: bool, **kw + ) -> None: + super().__init__(name, column_name, **kw) + self.nullable = nullable + + +class ColumnType(AlterColumn): + def __init__( + self, name: str, column_name: str, type_: TypeEngine, **kw + ) -> None: + super().__init__(name, column_name, **kw) + self.type_ = sqltypes.to_instance(type_) + + +class ColumnName(AlterColumn): + def __init__( + self, name: str, column_name: str, newname: str, **kw + ) -> None: + super().__init__(name, column_name, **kw) + self.newname = newname + + +class ColumnDefault(AlterColumn): + def __init__( + self, + name: str, + column_name: str, + default: Optional[_ServerDefault], + **kw, + ) -> None: + super().__init__(name, column_name, **kw) + self.default = default + + +class ComputedColumnDefault(AlterColumn): + def __init__( + self, name: str, column_name: str, default: Optional[Computed], **kw + ) -> None: + super().__init__(name, column_name, **kw) + self.default = default + + +class IdentityColumnDefault(AlterColumn): + def __init__( + self, + name: str, + column_name: str, + default: Optional[Identity], + impl: DefaultImpl, + **kw, + ) -> None: + super().__init__(name, column_name, **kw) + self.default = default + self.impl = impl + + +class AddColumn(AlterTable): + def __init__( + self, + name: str, + column: Column[Any], + schema: Optional[Union[quoted_name, str]] = None, + if_not_exists: Optional[bool] = None, + ) -> None: + super().__init__(name, schema=schema) + self.column = column + self.if_not_exists = if_not_exists + + +class DropColumn(AlterTable): + def __init__( + self, + name: str, + column: Column[Any], + schema: 
Optional[str] = None, + if_exists: Optional[bool] = None, + ) -> None: + super().__init__(name, schema=schema) + self.column = column + self.if_exists = if_exists + + +class ColumnComment(AlterColumn): + def __init__( + self, name: str, column_name: str, comment: Optional[str], **kw + ) -> None: + super().__init__(name, column_name, **kw) + self.comment = comment + + +@compiles(RenameTable) +def visit_rename_table( + element: RenameTable, compiler: DDLCompiler, **kw +) -> str: + return "%s RENAME TO %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_table_name(compiler, element.new_table_name, element.schema), + ) + + +@compiles(AddColumn) +def visit_add_column(element: AddColumn, compiler: DDLCompiler, **kw) -> str: + return "%s %s" % ( + alter_table(compiler, element.table_name, element.schema), + add_column( + compiler, element.column, if_not_exists=element.if_not_exists, **kw + ), + ) + + +@compiles(DropColumn) +def visit_drop_column(element: DropColumn, compiler: DDLCompiler, **kw) -> str: + return "%s %s" % ( + alter_table(compiler, element.table_name, element.schema), + drop_column( + compiler, element.column.name, if_exists=element.if_exists, **kw + ), + ) + + +@compiles(ColumnNullable) +def visit_column_nullable( + element: ColumnNullable, compiler: DDLCompiler, **kw +) -> str: + return "%s %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + "DROP NOT NULL" if element.nullable else "SET NOT NULL", + ) + + +@compiles(ColumnType) +def visit_column_type(element: ColumnType, compiler: DDLCompiler, **kw) -> str: + return "%s %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + "TYPE %s" % format_type(compiler, element.type_), + ) + + +@compiles(ColumnName) +def visit_column_name(element: ColumnName, compiler: DDLCompiler, **kw) -> str: + return "%s RENAME %s TO %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_column_name(compiler, element.column_name), + format_column_name(compiler, element.newname), + ) + + +@compiles(ColumnDefault) +def visit_column_default( + element: ColumnDefault, compiler: DDLCompiler, **kw +) -> str: + return "%s %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + ( + "SET DEFAULT %s" % format_server_default(compiler, element.default) + if element.default is not None + else "DROP DEFAULT" + ), + ) + + +@compiles(ComputedColumnDefault) +def visit_computed_column( + element: ComputedColumnDefault, compiler: DDLCompiler, **kw +): + raise exc.CompileError( + 'Adding or removing a "computed" construct, e.g. GENERATED ' + "ALWAYS AS, to or from an existing column is not supported." + ) + + +@compiles(IdentityColumnDefault) +def visit_identity_column( + element: IdentityColumnDefault, compiler: DDLCompiler, **kw +): + raise exc.CompileError( + 'Adding, removing or modifying an "identity" construct, ' + "e.g. GENERATED AS IDENTITY, to or from an existing " + "column is not supported in this dialect." 
+ ) + + +def quote_dotted( + name: Union[quoted_name, str], quote: functools.partial +) -> Union[quoted_name, str]: + """quote the elements of a dotted name""" + + if isinstance(name, quoted_name): + return quote(name) + result = ".".join([quote(x) for x in name.split(".")]) + return result + + +def format_table_name( + compiler: Compiled, + name: Union[quoted_name, str], + schema: Optional[Union[quoted_name, str]], +) -> Union[quoted_name, str]: + quote = functools.partial(compiler.preparer.quote) + if schema: + return quote_dotted(schema, quote) + "." + quote(name) + else: + return quote(name) + + +def format_column_name( + compiler: DDLCompiler, name: Optional[Union[quoted_name, str]] +) -> Union[quoted_name, str]: + return compiler.preparer.quote(name) # type: ignore[arg-type] + + +def format_server_default( + compiler: DDLCompiler, + default: Optional[_ServerDefault], +) -> str: + # this can be updated to use compiler.render_default_string + # for SQLAlchemy 2.0 and above; not in 1.4 + default_str = compiler.get_column_default_string( + Column("x", Integer, server_default=default) + ) + assert default_str is not None + return default_str + + +def format_type(compiler: DDLCompiler, type_: TypeEngine) -> str: + return compiler.dialect.type_compiler.process(type_) + + +def alter_table( + compiler: DDLCompiler, + name: str, + schema: Optional[str], +) -> str: + return "ALTER TABLE %s" % format_table_name(compiler, name, schema) + + +def drop_column( + compiler: DDLCompiler, name: str, if_exists: Optional[bool] = None, **kw +) -> str: + return "DROP COLUMN %s%s" % ( + "IF EXISTS " if if_exists else "", + format_column_name(compiler, name), + ) + + +def alter_column(compiler: DDLCompiler, name: str) -> str: + return "ALTER COLUMN %s" % format_column_name(compiler, name) + + +def add_column( + compiler: DDLCompiler, + column: Column[Any], + if_not_exists: Optional[bool] = None, + **kw, +) -> str: + text = "ADD COLUMN %s%s" % ( + "IF NOT EXISTS " if if_not_exists else "", + compiler.get_column_specification(column, **kw), + ) + + const = " ".join( + compiler.process(constraint) for constraint in column.constraints + ) + if const: + text += " " + const + + return text diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/impl.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/impl.py new file mode 100644 index 000000000..d352f12ee --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/impl.py @@ -0,0 +1,902 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +import logging +import re +from typing import Any +from typing import Callable +from typing import Dict +from typing import Iterable +from typing import List +from typing import Mapping +from typing import NamedTuple +from typing import Optional +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy import cast +from sqlalchemy import Column +from sqlalchemy import MetaData +from sqlalchemy import PrimaryKeyConstraint +from sqlalchemy import schema +from sqlalchemy import String +from sqlalchemy import Table +from sqlalchemy import text + +from . import _autogen +from . import base +from ._autogen import _constraint_sig as _constraint_sig +from ._autogen import ComparisonResult as ComparisonResult +from .. 
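All of the ``visit_*`` functions above follow SQLAlchemy's ``@compiles`` extension: a ``DDLElement`` subclass carries the parameters, and a compiler hook renders it to a dialect-appropriate string. A toy sketch of the same mechanism (the element and SQL here are illustrative, not part of Alembic):

```python
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import DDLElement

class TruncateTable(DDLElement):
    """Carries just the table name, like AlterTable above."""
    inherit_cache = False

    def __init__(self, table_name: str) -> None:
        self.table_name = table_name

@compiles(TruncateTable)
def visit_truncate_table(element, compiler, **kw):
    # compiler.preparer applies dialect-correct identifier quoting
    return "TRUNCATE TABLE %s" % compiler.preparer.quote(element.table_name)
```

Such an element can then be passed to ``connection.execute()`` like any other DDL construct, which is how ``DefaultImpl._exec`` below consumes Alembic's own elements.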
import util +from ..util import sqla_compat + +if TYPE_CHECKING: + from typing import Literal + from typing import TextIO + + from sqlalchemy.engine import Connection + from sqlalchemy.engine import Dialect + from sqlalchemy.engine.cursor import CursorResult + from sqlalchemy.engine.reflection import Inspector + from sqlalchemy.sql import ClauseElement + from sqlalchemy.sql import Executable + from sqlalchemy.sql.elements import quoted_name + from sqlalchemy.sql.schema import Constraint + from sqlalchemy.sql.schema import ForeignKeyConstraint + from sqlalchemy.sql.schema import Index + from sqlalchemy.sql.schema import UniqueConstraint + from sqlalchemy.sql.selectable import TableClause + from sqlalchemy.sql.type_api import TypeEngine + + from .base import _ServerDefault + from ..autogenerate.api import AutogenContext + from ..operations.batch import ApplyBatchImpl + from ..operations.batch import BatchOperationsImpl + +log = logging.getLogger(__name__) + + +class ImplMeta(type): + def __init__( + cls, + classname: str, + bases: Tuple[Type[DefaultImpl]], + dict_: Dict[str, Any], + ): + newtype = type.__init__(cls, classname, bases, dict_) + if "__dialect__" in dict_: + _impls[dict_["__dialect__"]] = cls # type: ignore[assignment] + return newtype + + +_impls: Dict[str, Type[DefaultImpl]] = {} + + +class DefaultImpl(metaclass=ImplMeta): + """Provide the entrypoint for major migration operations, + including database-specific behavioral variances. + + While individual SQL/DDL constructs already provide + for database-specific implementations, variances here + allow for entirely different sequences of operations + to take place for a particular migration, such as + SQL Server's special 'IDENTITY INSERT' step for + bulk inserts. + + """ + + __dialect__ = "default" + + transactional_ddl = False + command_terminator = ";" + type_synonyms: Tuple[Set[str], ...] = ({"NUMERIC", "DECIMAL"},) + type_arg_extract: Sequence[str] = () + # These attributes are deprecated in SQLAlchemy via #10247. They need to + # be ignored to support older version that did not use dialect kwargs. + # They only apply to Oracle and are replaced by oracle_order, + # oracle_on_null + identity_attrs_ignore: Tuple[str, ...] = ("order", "on_null") + + def __init__( + self, + dialect: Dialect, + connection: Optional[Connection], + as_sql: bool, + transactional_ddl: Optional[bool], + output_buffer: Optional[TextIO], + context_opts: Dict[str, Any], + ) -> None: + self.dialect = dialect + self.connection = connection + self.as_sql = as_sql + self.literal_binds = context_opts.get("literal_binds", False) + + self.output_buffer = output_buffer + self.memo: dict = {} + self.context_opts = context_opts + if transactional_ddl is not None: + self.transactional_ddl = transactional_ddl + + if self.literal_binds: + if not self.as_sql: + raise util.CommandError( + "Can't use literal_binds setting without as_sql mode" + ) + + @classmethod + def get_by_dialect(cls, dialect: Dialect) -> Type[DefaultImpl]: + return _impls[dialect.name] + + def static_output(self, text: str) -> None: + assert self.output_buffer is not None + self.output_buffer.write(text + "\n\n") + self.output_buffer.flush() + + def version_table_impl( + self, + *, + version_table: str, + version_table_schema: Optional[str], + version_table_pk: bool, + **kw: Any, + ) -> Table: + """Generate a :class:`.Table` object which will be used as the + structure for the Alembic version table. 
+ + Third party dialects may override this hook to provide an alternate + structure for this :class:`.Table`; requirements are only that it + be named based on the ``version_table`` parameter and contains + at least a single string-holding column named ``version_num``. + + .. versionadded:: 1.14 + + """ + vt = Table( + version_table, + MetaData(), + Column("version_num", String(32), nullable=False), + schema=version_table_schema, + ) + if version_table_pk: + vt.append_constraint( + PrimaryKeyConstraint( + "version_num", name=f"{version_table}_pkc" + ) + ) + + return vt + + def requires_recreate_in_batch( + self, batch_op: BatchOperationsImpl + ) -> bool: + """Return True if the given :class:`.BatchOperationsImpl` + would need the table to be recreated and copied in order to + proceed. + + Normally, only returns True on SQLite when operations other + than add_column are present. + + """ + return False + + def prep_table_for_batch( + self, batch_impl: ApplyBatchImpl, table: Table + ) -> None: + """perform any operations needed on a table before a new + one is created to replace it in batch mode. + + the PG dialect uses this to drop constraints on the table + before the new one uses those same names. + + """ + + @property + def bind(self) -> Optional[Connection]: + return self.connection + + def _exec( + self, + construct: Union[Executable, str], + execution_options: Optional[Mapping[str, Any]] = None, + multiparams: Optional[Sequence[Mapping[str, Any]]] = None, + params: Mapping[str, Any] = util.immutabledict(), + ) -> Optional[CursorResult]: + if isinstance(construct, str): + construct = text(construct) + if self.as_sql: + if multiparams is not None or params: + raise TypeError("SQL parameters not allowed with as_sql") + + compile_kw: dict[str, Any] + if self.literal_binds and not isinstance( + construct, schema.DDLElement + ): + compile_kw = dict(compile_kwargs={"literal_binds": True}) + else: + compile_kw = {} + + if TYPE_CHECKING: + assert isinstance(construct, ClauseElement) + compiled = construct.compile(dialect=self.dialect, **compile_kw) + self.static_output( + str(compiled).replace("\t", " ").strip() + + self.command_terminator + ) + return None + else: + conn = self.connection + assert conn is not None + if execution_options: + conn = conn.execution_options(**execution_options) + + if params and multiparams is not None: + raise TypeError( + "Can't send params and multiparams at the same time" + ) + + if multiparams: + return conn.execute(construct, multiparams) + else: + return conn.execute(construct, params) + + def execute( + self, + sql: Union[Executable, str], + execution_options: Optional[dict[str, Any]] = None, + ) -> None: + self._exec(sql, execution_options) + + def alter_column( + self, + table_name: str, + column_name: str, + *, + nullable: Optional[bool] = None, + server_default: Optional[ + Union[_ServerDefault, Literal[False]] + ] = False, + name: Optional[str] = None, + type_: Optional[TypeEngine] = None, + schema: Optional[str] = None, + autoincrement: Optional[bool] = None, + comment: Optional[Union[str, Literal[False]]] = False, + existing_comment: Optional[str] = None, + existing_type: Optional[TypeEngine] = None, + existing_server_default: Optional[_ServerDefault] = None, + existing_nullable: Optional[bool] = None, + existing_autoincrement: Optional[bool] = None, + **kw: Any, + ) -> None: + if autoincrement is not None or existing_autoincrement is not None: + util.warn( + "autoincrement and existing_autoincrement " + "only make sense for MySQL", + stacklevel=3, + 
) + if nullable is not None: + self._exec( + base.ColumnNullable( + table_name, + column_name, + nullable, + schema=schema, + existing_type=existing_type, + existing_server_default=existing_server_default, + existing_nullable=existing_nullable, + existing_comment=existing_comment, + ) + ) + if server_default is not False: + kw = {} + cls_: Type[ + Union[ + base.ComputedColumnDefault, + base.IdentityColumnDefault, + base.ColumnDefault, + ] + ] + if sqla_compat._server_default_is_computed( + server_default, existing_server_default + ): + cls_ = base.ComputedColumnDefault + elif sqla_compat._server_default_is_identity( + server_default, existing_server_default + ): + cls_ = base.IdentityColumnDefault + kw["impl"] = self + else: + cls_ = base.ColumnDefault + self._exec( + cls_( + table_name, + column_name, + server_default, # type:ignore[arg-type] + schema=schema, + existing_type=existing_type, + existing_server_default=existing_server_default, + existing_nullable=existing_nullable, + existing_comment=existing_comment, + **kw, + ) + ) + if type_ is not None: + self._exec( + base.ColumnType( + table_name, + column_name, + type_, + schema=schema, + existing_type=existing_type, + existing_server_default=existing_server_default, + existing_nullable=existing_nullable, + existing_comment=existing_comment, + ) + ) + + if comment is not False: + self._exec( + base.ColumnComment( + table_name, + column_name, + comment, + schema=schema, + existing_type=existing_type, + existing_server_default=existing_server_default, + existing_nullable=existing_nullable, + existing_comment=existing_comment, + ) + ) + + # do the new name last ;) + if name is not None: + self._exec( + base.ColumnName( + table_name, + column_name, + name, + schema=schema, + existing_type=existing_type, + existing_server_default=existing_server_default, + existing_nullable=existing_nullable, + ) + ) + + def add_column( + self, + table_name: str, + column: Column[Any], + *, + schema: Optional[Union[str, quoted_name]] = None, + if_not_exists: Optional[bool] = None, + ) -> None: + self._exec( + base.AddColumn( + table_name, + column, + schema=schema, + if_not_exists=if_not_exists, + ) + ) + + def drop_column( + self, + table_name: str, + column: Column[Any], + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, + **kw, + ) -> None: + self._exec( + base.DropColumn( + table_name, column, schema=schema, if_exists=if_exists + ) + ) + + def add_constraint(self, const: Any) -> None: + if const._create_rule is None or const._create_rule(self): + self._exec(schema.AddConstraint(const)) + + def drop_constraint(self, const: Constraint, **kw: Any) -> None: + self._exec(schema.DropConstraint(const, **kw)) + + def rename_table( + self, + old_table_name: str, + new_table_name: Union[str, quoted_name], + schema: Optional[Union[str, quoted_name]] = None, + ) -> None: + self._exec( + base.RenameTable(old_table_name, new_table_name, schema=schema) + ) + + def create_table(self, table: Table, **kw: Any) -> None: + table.dispatch.before_create( + table, self.connection, checkfirst=False, _ddl_runner=self + ) + self._exec(schema.CreateTable(table, **kw)) + table.dispatch.after_create( + table, self.connection, checkfirst=False, _ddl_runner=self + ) + for index in table.indexes: + self._exec(schema.CreateIndex(index)) + + with_comment = ( + self.dialect.supports_comments and not self.dialect.inline_comments + ) + comment = table.comment + if comment and with_comment: + self.create_table_comment(table) + + for column in table.columns: + comment = 
column.comment + if comment and with_comment: + self.create_column_comment(column) + + def drop_table(self, table: Table, **kw: Any) -> None: + table.dispatch.before_drop( + table, self.connection, checkfirst=False, _ddl_runner=self + ) + self._exec(schema.DropTable(table, **kw)) + table.dispatch.after_drop( + table, self.connection, checkfirst=False, _ddl_runner=self + ) + + def create_index(self, index: Index, **kw: Any) -> None: + self._exec(schema.CreateIndex(index, **kw)) + + def create_table_comment(self, table: Table) -> None: + self._exec(schema.SetTableComment(table)) + + def drop_table_comment(self, table: Table) -> None: + self._exec(schema.DropTableComment(table)) + + def create_column_comment(self, column: Column[Any]) -> None: + self._exec(schema.SetColumnComment(column)) + + def drop_index(self, index: Index, **kw: Any) -> None: + self._exec(schema.DropIndex(index, **kw)) + + def bulk_insert( + self, + table: Union[TableClause, Table], + rows: List[dict], + multiinsert: bool = True, + ) -> None: + if not isinstance(rows, list): + raise TypeError("List expected") + elif rows and not isinstance(rows[0], dict): + raise TypeError("List of dictionaries expected") + if self.as_sql: + for row in rows: + self._exec( + table.insert() + .inline() + .values( + **{ + k: ( + sqla_compat._literal_bindparam( + k, v, type_=table.c[k].type + ) + if not isinstance( + v, sqla_compat._literal_bindparam + ) + else v + ) + for k, v in row.items() + } + ) + ) + else: + if rows: + if multiinsert: + self._exec(table.insert().inline(), multiparams=rows) + else: + for row in rows: + self._exec(table.insert().inline().values(**row)) + + def _tokenize_column_type(self, column: Column) -> Params: + definition: str + definition = self.dialect.type_compiler.process(column.type).lower() + + # tokenize the SQLAlchemy-generated version of a type, so that + # the two can be compared. + # + # examples: + # NUMERIC(10, 5) + # TIMESTAMP WITH TIMEZONE + # INTEGER UNSIGNED + # INTEGER (10) UNSIGNED + # INTEGER(10) UNSIGNED + # varchar character set utf8 + # + + tokens: List[str] = re.findall(r"[\w\-_]+|\(.+?\)", definition) + + term_tokens: List[str] = [] + paren_term = None + + for token in tokens: + if re.match(r"^\(.*\)$", token): + paren_term = token + else: + term_tokens.append(token) + + params = Params(term_tokens[0], term_tokens[1:], [], {}) + + if paren_term: + term: str + for term in re.findall("[^(),]+", paren_term): + if "=" in term: + key, val = term.split("=") + params.kwargs[key.strip()] = val.strip() + else: + params.args.append(term.strip()) + + return params + + def _column_types_match( + self, inspector_params: Params, metadata_params: Params + ) -> bool: + if inspector_params.token0 == metadata_params.token0: + return True + + synonyms = [{t.lower() for t in batch} for batch in self.type_synonyms] + inspector_all_terms = " ".join( + [inspector_params.token0] + inspector_params.tokens + ) + metadata_all_terms = " ".join( + [metadata_params.token0] + metadata_params.tokens + ) + + for batch in synonyms: + if {inspector_all_terms, metadata_all_terms}.issubset(batch) or { + inspector_params.token0, + metadata_params.token0, + }.issubset(batch): + return True + return False + + def _column_args_match( + self, inspected_params: Params, meta_params: Params + ) -> bool: + """We want to compare column parameters. However, we only want + to compare parameters that are set. If they both have `collation`, + we want to make sure they are the same. 
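For reference, the ``bulk_insert`` method above is what backs the ``op.bulk_insert`` directive inside a revision file. A hedged sketch (table and rows are illustrative):

```python
from alembic import op
import sqlalchemy as sa

accounts = sa.table(
    "account",
    sa.column("id", sa.Integer),
    sa.column("name", sa.String),
)

def upgrade() -> None:
    # multiinsert=True (the default) sends all rows as one executemany
    op.bulk_insert(
        accounts,
        [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}],
    )
```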
However, if only one + specifies it, dont flag it for being less specific + """ + + if ( + len(meta_params.tokens) == len(inspected_params.tokens) + and meta_params.tokens != inspected_params.tokens + ): + return False + + if ( + len(meta_params.args) == len(inspected_params.args) + and meta_params.args != inspected_params.args + ): + return False + + insp = " ".join(inspected_params.tokens).lower() + meta = " ".join(meta_params.tokens).lower() + + for reg in self.type_arg_extract: + mi = re.search(reg, insp) + mm = re.search(reg, meta) + + if mi and mm and mi.group(1) != mm.group(1): + return False + + return True + + def compare_type( + self, inspector_column: Column[Any], metadata_column: Column + ) -> bool: + """Returns True if there ARE differences between the types of the two + columns. Takes impl.type_synonyms into account between retrospected + and metadata types + """ + inspector_params = self._tokenize_column_type(inspector_column) + metadata_params = self._tokenize_column_type(metadata_column) + + if not self._column_types_match(inspector_params, metadata_params): + return True + if not self._column_args_match(inspector_params, metadata_params): + return True + return False + + def compare_server_default( + self, + inspector_column, + metadata_column, + rendered_metadata_default, + rendered_inspector_default, + ): + return rendered_inspector_default != rendered_metadata_default + + def correct_for_autogen_constraints( + self, + conn_uniques: Set[UniqueConstraint], + conn_indexes: Set[Index], + metadata_unique_constraints: Set[UniqueConstraint], + metadata_indexes: Set[Index], + ) -> None: + pass + + def cast_for_batch_migrate(self, existing, existing_transfer, new_type): + if existing.type._type_affinity is not new_type._type_affinity: + existing_transfer["expr"] = cast( + existing_transfer["expr"], new_type + ) + + def render_ddl_sql_expr( + self, expr: ClauseElement, is_server_default: bool = False, **kw: Any + ) -> str: + """Render a SQL expression that is typically a server default, + index expression, etc. + + """ + + compile_kw = {"literal_binds": True, "include_table": False} + + return str( + expr.compile(dialect=self.dialect, compile_kwargs=compile_kw) + ) + + def _compat_autogen_column_reflect(self, inspector: Inspector) -> Callable: + return self.autogen_column_reflect + + def correct_for_autogen_foreignkeys( + self, + conn_fks: Set[ForeignKeyConstraint], + metadata_fks: Set[ForeignKeyConstraint], + ) -> None: + pass + + def autogen_column_reflect(self, inspector, table, column_info): + """A hook that is attached to the 'column_reflect' event for when + a Table is reflected from the database during the autogenerate + process. + + Dialects can elect to modify the information gathered here. + + """ + + def start_migrations(self) -> None: + """A hook called when :meth:`.EnvironmentContext.run_migrations` + is called. + + Implementations can set up per-migration-run state here. + + """ + + def emit_begin(self) -> None: + """Emit the string ``BEGIN``, or the backend-specific + equivalent, on the current connection context. + + This is used in offline mode and typically + via :meth:`.EnvironmentContext.begin_transaction`. + + """ + self.static_output("BEGIN" + self.command_terminator) + + def emit_commit(self) -> None: + """Emit the string ``COMMIT``, or the backend-specific + equivalent, on the current connection context. + + This is used in offline mode and typically + via :meth:`.EnvironmentContext.begin_transaction`. 
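``DefaultImpl`` is the extension point third-party dialects use: subclassing with a ``__dialect__`` attribute is enough to register the class via ``ImplMeta``. A minimal sketch, assuming a hypothetical dialect named ``mydb``:

```python
from alembic.ddl.impl import DefaultImpl

class MyDBImpl(DefaultImpl):
    __dialect__ = "mydb"       # registered in _impls by ImplMeta
    transactional_ddl = True   # backend supports DDL in transactions

    def emit_begin(self) -> None:
        # offline (--sql) mode: emit a backend-specific BEGIN variant
        self.static_output("BEGIN WORK" + self.command_terminator)
```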
+ + """ + self.static_output("COMMIT" + self.command_terminator) + + def render_type( + self, type_obj: TypeEngine, autogen_context: AutogenContext + ) -> Union[str, Literal[False]]: + return False + + def _compare_identity_default(self, metadata_identity, inspector_identity): + # ignored contains the attributes that were not considered + # because assumed to their default values in the db. + diff, ignored = _compare_identity_options( + metadata_identity, + inspector_identity, + schema.Identity(), + skip={"always"}, + ) + + meta_always = getattr(metadata_identity, "always", None) + inspector_always = getattr(inspector_identity, "always", None) + # None and False are the same in this comparison + if bool(meta_always) != bool(inspector_always): + diff.add("always") + + diff.difference_update(self.identity_attrs_ignore) + + # returns 3 values: + return ( + # different identity attributes + diff, + # ignored identity attributes + ignored, + # if the two identity should be considered different + bool(diff) or bool(metadata_identity) != bool(inspector_identity), + ) + + def _compare_index_unique( + self, metadata_index: Index, reflected_index: Index + ) -> Optional[str]: + conn_unique = bool(reflected_index.unique) + meta_unique = bool(metadata_index.unique) + if conn_unique != meta_unique: + return f"unique={conn_unique} to unique={meta_unique}" + else: + return None + + def _create_metadata_constraint_sig( + self, constraint: _autogen._C, **opts: Any + ) -> _constraint_sig[_autogen._C]: + return _constraint_sig.from_constraint(True, self, constraint, **opts) + + def _create_reflected_constraint_sig( + self, constraint: _autogen._C, **opts: Any + ) -> _constraint_sig[_autogen._C]: + return _constraint_sig.from_constraint(False, self, constraint, **opts) + + def compare_indexes( + self, + metadata_index: Index, + reflected_index: Index, + ) -> ComparisonResult: + """Compare two indexes by comparing the signature generated by + ``create_index_sig``. + + This method returns a ``ComparisonResult``. + """ + msg: List[str] = [] + unique_msg = self._compare_index_unique( + metadata_index, reflected_index + ) + if unique_msg: + msg.append(unique_msg) + m_sig = self._create_metadata_constraint_sig(metadata_index) + r_sig = self._create_reflected_constraint_sig(reflected_index) + + assert _autogen.is_index_sig(m_sig) + assert _autogen.is_index_sig(r_sig) + + # The assumption is that the index have no expression + for sig in m_sig, r_sig: + if sig.has_expressions: + log.warning( + "Generating approximate signature for index %s. " + "The dialect " + "implementation should either skip expression indexes " + "or provide a custom implementation.", + sig.const, + ) + + if m_sig.column_names != r_sig.column_names: + msg.append( + f"expression {r_sig.column_names} to {m_sig.column_names}" + ) + + if msg: + return ComparisonResult.Different(msg) + else: + return ComparisonResult.Equal() + + def compare_unique_constraint( + self, + metadata_constraint: UniqueConstraint, + reflected_constraint: UniqueConstraint, + ) -> ComparisonResult: + """Compare two unique constraints by comparing the two signatures. + + The arguments are two tuples that contain the unique constraint and + the signatures generated by ``create_unique_constraint_sig``. + + This method returns a ``ComparisonResult``. 
+ """ + metadata_tup = self._create_metadata_constraint_sig( + metadata_constraint + ) + reflected_tup = self._create_reflected_constraint_sig( + reflected_constraint + ) + + meta_sig = metadata_tup.unnamed + conn_sig = reflected_tup.unnamed + if conn_sig != meta_sig: + return ComparisonResult.Different( + f"expression {conn_sig} to {meta_sig}" + ) + else: + return ComparisonResult.Equal() + + def _skip_functional_indexes(self, metadata_indexes, conn_indexes): + conn_indexes_by_name = {c.name: c for c in conn_indexes} + + for idx in list(metadata_indexes): + if idx.name in conn_indexes_by_name: + continue + iex = sqla_compat.is_expression_index(idx) + if iex: + util.warn( + "autogenerate skipping metadata-specified " + "expression-based index " + f"{idx.name!r}; dialect {self.__dialect__!r} under " + f"SQLAlchemy {sqla_compat.sqlalchemy_version} can't " + "reflect these indexes so they can't be compared" + ) + metadata_indexes.discard(idx) + + def adjust_reflected_dialect_options( + self, reflected_object: Dict[str, Any], kind: str + ) -> Dict[str, Any]: + return reflected_object.get("dialect_options", {}) + + +class Params(NamedTuple): + token0: str + tokens: List[str] + args: List[str] + kwargs: Dict[str, str] + + +def _compare_identity_options( + metadata_io: Union[schema.Identity, schema.Sequence, None], + inspector_io: Union[schema.Identity, schema.Sequence, None], + default_io: Union[schema.Identity, schema.Sequence], + skip: Set[str], +): + # this can be used for identity or sequence compare. + # default_io is an instance of IdentityOption with all attributes to the + # default value. + meta_d = sqla_compat._get_identity_options_dict(metadata_io) + insp_d = sqla_compat._get_identity_options_dict(inspector_io) + + diff = set() + ignored_attr = set() + + def check_dicts( + meta_dict: Mapping[str, Any], + insp_dict: Mapping[str, Any], + default_dict: Mapping[str, Any], + attrs: Iterable[str], + ): + for attr in set(attrs).difference(skip): + meta_value = meta_dict.get(attr) + insp_value = insp_dict.get(attr) + if insp_value != meta_value: + default_value = default_dict.get(attr) + if meta_value == default_value: + ignored_attr.add(attr) + else: + diff.add(attr) + + check_dicts( + meta_d, + insp_d, + sqla_compat._get_identity_options_dict(default_io), + set(meta_d).union(insp_d), + ) + if sqla_compat.identity_has_dialect_kwargs: + assert hasattr(default_io, "dialect_kwargs") + # use only the dialect kwargs in inspector_io since metadata_io + # can have options for many backends + check_dicts( + getattr(metadata_io, "dialect_kwargs", {}), + getattr(inspector_io, "dialect_kwargs", {}), + default_io.dialect_kwargs, + getattr(inspector_io, "dialect_kwargs", {}), + ) + + return diff, ignored_attr diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/mssql.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/mssql.py new file mode 100644 index 000000000..5376da5ad --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/mssql.py @@ -0,0 +1,421 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +import re +from typing import Any +from typing import Dict +from typing import List +from typing import Optional +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy import types as sqltypes +from sqlalchemy.schema import Column +from sqlalchemy.schema import CreateIndex +from sqlalchemy.sql.base import 
Executable +from sqlalchemy.sql.elements import ClauseElement + +from .base import AddColumn +from .base import alter_column +from .base import alter_table +from .base import ColumnDefault +from .base import ColumnName +from .base import ColumnNullable +from .base import ColumnType +from .base import format_column_name +from .base import format_server_default +from .base import format_table_name +from .base import format_type +from .base import RenameTable +from .impl import DefaultImpl +from .. import util +from ..util import sqla_compat +from ..util.sqla_compat import compiles + +if TYPE_CHECKING: + from typing import Literal + + from sqlalchemy.dialects.mssql.base import MSDDLCompiler + from sqlalchemy.dialects.mssql.base import MSSQLCompiler + from sqlalchemy.engine.cursor import CursorResult + from sqlalchemy.sql.schema import Index + from sqlalchemy.sql.schema import Table + from sqlalchemy.sql.selectable import TableClause + from sqlalchemy.sql.type_api import TypeEngine + + from .base import _ServerDefault + + +class MSSQLImpl(DefaultImpl): + __dialect__ = "mssql" + transactional_ddl = True + batch_separator = "GO" + + type_synonyms = DefaultImpl.type_synonyms + ({"VARCHAR", "NVARCHAR"},) + identity_attrs_ignore = DefaultImpl.identity_attrs_ignore + ( + "minvalue", + "maxvalue", + "nominvalue", + "nomaxvalue", + "cycle", + "cache", + ) + + def __init__(self, *arg, **kw) -> None: + super().__init__(*arg, **kw) + self.batch_separator = self.context_opts.get( + "mssql_batch_separator", self.batch_separator + ) + + def _exec(self, construct: Any, *args, **kw) -> Optional[CursorResult]: + result = super()._exec(construct, *args, **kw) + if self.as_sql and self.batch_separator: + self.static_output(self.batch_separator) + return result + + def emit_begin(self) -> None: + self.static_output("BEGIN TRANSACTION" + self.command_terminator) + + def emit_commit(self) -> None: + super().emit_commit() + if self.as_sql and self.batch_separator: + self.static_output(self.batch_separator) + + def alter_column( + self, + table_name: str, + column_name: str, + *, + nullable: Optional[bool] = None, + server_default: Optional[ + Union[_ServerDefault, Literal[False]] + ] = False, + name: Optional[str] = None, + type_: Optional[TypeEngine] = None, + schema: Optional[str] = None, + existing_type: Optional[TypeEngine] = None, + existing_server_default: Optional[_ServerDefault] = None, + existing_nullable: Optional[bool] = None, + **kw: Any, + ) -> None: + if nullable is not None: + if type_ is not None: + # the NULL/NOT NULL alter will handle + # the type alteration + existing_type = type_ + type_ = None + elif existing_type is None: + raise util.CommandError( + "MS-SQL ALTER COLUMN operations " + "with NULL or NOT NULL require the " + "existing_type or a new type_ be passed." + ) + elif existing_nullable is not None and type_ is not None: + nullable = existing_nullable + + # the NULL/NOT NULL alter will handle + # the type alteration + existing_type = type_ + type_ = None + + elif type_ is not None: + util.warn( + "MS-SQL ALTER COLUMN operations that specify type_= " + "should also specify a nullable= or " + "existing_nullable= argument to avoid implicit conversion " + "of NOT NULL columns to NULL." 
+ ) + + used_default = False + if sqla_compat._server_default_is_identity( + server_default, existing_server_default + ) or sqla_compat._server_default_is_computed( + server_default, existing_server_default + ): + used_default = True + kw["server_default"] = server_default + kw["existing_server_default"] = existing_server_default + + super().alter_column( + table_name, + column_name, + nullable=nullable, + type_=type_, + schema=schema, + existing_type=existing_type, + existing_nullable=existing_nullable, + **kw, + ) + + if server_default is not False and used_default is False: + if existing_server_default is not False or server_default is None: + self._exec( + _ExecDropConstraint( + table_name, + column_name, + "sys.default_constraints", + schema, + ) + ) + if server_default is not None: + super().alter_column( + table_name, + column_name, + schema=schema, + server_default=server_default, + ) + + if name is not None: + super().alter_column( + table_name, column_name, schema=schema, name=name + ) + + def create_index(self, index: Index, **kw: Any) -> None: + # this likely defaults to None if not present, so get() + # should normally not return the default value. being + # defensive in any case + mssql_include = index.kwargs.get("mssql_include", None) or () + assert index.table is not None + for col in mssql_include: + if col not in index.table.c: + index.table.append_column(Column(col, sqltypes.NullType)) + self._exec(CreateIndex(index, **kw)) + + def bulk_insert( # type:ignore[override] + self, table: Union[TableClause, Table], rows: List[dict], **kw: Any + ) -> None: + if self.as_sql: + self._exec( + "SET IDENTITY_INSERT %s ON" + % self.dialect.identifier_preparer.format_table(table) + ) + super().bulk_insert(table, rows, **kw) + self._exec( + "SET IDENTITY_INSERT %s OFF" + % self.dialect.identifier_preparer.format_table(table) + ) + else: + super().bulk_insert(table, rows, **kw) + + def drop_column( + self, + table_name: str, + column: Column[Any], + *, + schema: Optional[str] = None, + **kw, + ) -> None: + drop_default = kw.pop("mssql_drop_default", False) + if drop_default: + self._exec( + _ExecDropConstraint( + table_name, column, "sys.default_constraints", schema + ) + ) + drop_check = kw.pop("mssql_drop_check", False) + if drop_check: + self._exec( + _ExecDropConstraint( + table_name, column, "sys.check_constraints", schema + ) + ) + drop_fks = kw.pop("mssql_drop_foreign_key", False) + if drop_fks: + self._exec(_ExecDropFKConstraint(table_name, column, schema)) + super().drop_column(table_name, column, schema=schema, **kw) + + def compare_server_default( + self, + inspector_column, + metadata_column, + rendered_metadata_default, + rendered_inspector_default, + ): + if rendered_metadata_default is not None: + rendered_metadata_default = re.sub( + r"[\(\) \"\']", "", rendered_metadata_default + ) + + if rendered_inspector_default is not None: + # SQL Server collapses whitespace and adds arbitrary parenthesis + # within expressions. 
our only option is collapse all of it + + rendered_inspector_default = re.sub( + r"[\(\) \"\']", "", rendered_inspector_default + ) + + return rendered_inspector_default != rendered_metadata_default + + def _compare_identity_default(self, metadata_identity, inspector_identity): + diff, ignored, is_alter = super()._compare_identity_default( + metadata_identity, inspector_identity + ) + + if ( + metadata_identity is None + and inspector_identity is not None + and not diff + and inspector_identity.column is not None + and inspector_identity.column.primary_key + ): + # mssql reflect primary keys with autoincrement as identity + # columns. if no different attributes are present ignore them + is_alter = False + + return diff, ignored, is_alter + + def adjust_reflected_dialect_options( + self, reflected_object: Dict[str, Any], kind: str + ) -> Dict[str, Any]: + options: Dict[str, Any] + options = reflected_object.get("dialect_options", {}).copy() + if not options.get("mssql_include"): + options.pop("mssql_include", None) + if not options.get("mssql_clustered"): + options.pop("mssql_clustered", None) + return options + + +class _ExecDropConstraint(Executable, ClauseElement): + inherit_cache = False + + def __init__( + self, + tname: str, + colname: Union[Column[Any], str], + type_: str, + schema: Optional[str], + ) -> None: + self.tname = tname + self.colname = colname + self.type_ = type_ + self.schema = schema + + +class _ExecDropFKConstraint(Executable, ClauseElement): + inherit_cache = False + + def __init__( + self, tname: str, colname: Column[Any], schema: Optional[str] + ) -> None: + self.tname = tname + self.colname = colname + self.schema = schema + + +@compiles(_ExecDropConstraint, "mssql") +def _exec_drop_col_constraint( + element: _ExecDropConstraint, compiler: MSSQLCompiler, **kw +) -> str: + schema, tname, colname, type_ = ( + element.schema, + element.tname, + element.colname, + element.type_, + ) + # from http://www.mssqltips.com/sqlservertip/1425/\ + # working-with-default-constraints-in-sql-server/ + return """declare @const_name varchar(256) +select @const_name = QUOTENAME([name]) from %(type)s +where parent_object_id = object_id('%(schema_dot)s%(tname)s') +and col_name(parent_object_id, parent_column_id) = '%(colname)s' +exec('alter table %(tname_quoted)s drop constraint ' + @const_name)""" % { + "type": type_, + "tname": tname, + "colname": colname, + "tname_quoted": format_table_name(compiler, tname, schema), + "schema_dot": schema + "." if schema else "", + } + + +@compiles(_ExecDropFKConstraint, "mssql") +def _exec_drop_col_fk_constraint( + element: _ExecDropFKConstraint, compiler: MSSQLCompiler, **kw +) -> str: + schema, tname, colname = element.schema, element.tname, element.colname + + return """declare @const_name varchar(256) +select @const_name = QUOTENAME([name]) from +sys.foreign_keys fk join sys.foreign_key_columns fkc +on fk.object_id=fkc.constraint_object_id +where fkc.parent_object_id = object_id('%(schema_dot)s%(tname)s') +and col_name(fkc.parent_object_id, fkc.parent_column_id) = '%(colname)s' +exec('alter table %(tname_quoted)s drop constraint ' + @const_name)""" % { + "tname": tname, + "colname": colname, + "tname_quoted": format_table_name(compiler, tname, schema), + "schema_dot": schema + "." 
if schema else "", + } + + +@compiles(AddColumn, "mssql") +def visit_add_column(element: AddColumn, compiler: MSDDLCompiler, **kw) -> str: + return "%s %s" % ( + alter_table(compiler, element.table_name, element.schema), + mssql_add_column(compiler, element.column, **kw), + ) + + +def mssql_add_column( + compiler: MSDDLCompiler, column: Column[Any], **kw +) -> str: + return "ADD %s" % compiler.get_column_specification(column, **kw) + + +@compiles(ColumnNullable, "mssql") +def visit_column_nullable( + element: ColumnNullable, compiler: MSDDLCompiler, **kw +) -> str: + return "%s %s %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + format_type(compiler, element.existing_type), # type: ignore[arg-type] + "NULL" if element.nullable else "NOT NULL", + ) + + +@compiles(ColumnDefault, "mssql") +def visit_column_default( + element: ColumnDefault, compiler: MSDDLCompiler, **kw +) -> str: + # TODO: there can also be a named constraint + # with ADD CONSTRAINT here + return "%s ADD DEFAULT %s FOR %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_server_default(compiler, element.default), + format_column_name(compiler, element.column_name), + ) + + +@compiles(ColumnName, "mssql") +def visit_rename_column( + element: ColumnName, compiler: MSDDLCompiler, **kw +) -> str: + return "EXEC sp_rename '%s.%s', %s, 'COLUMN'" % ( + format_table_name(compiler, element.table_name, element.schema), + format_column_name(compiler, element.column_name), + format_column_name(compiler, element.newname), + ) + + +@compiles(ColumnType, "mssql") +def visit_column_type( + element: ColumnType, compiler: MSDDLCompiler, **kw +) -> str: + return "%s %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + format_type(compiler, element.type_), + ) + + +@compiles(RenameTable, "mssql") +def visit_rename_table( + element: RenameTable, compiler: MSDDLCompiler, **kw +) -> str: + return "EXEC sp_rename '%s', %s" % ( + format_table_name(compiler, element.table_name, element.schema), + format_table_name(compiler, element.new_table_name, None), + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/mysql.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/mysql.py new file mode 100644 index 000000000..3f8c0628e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/mysql.py @@ -0,0 +1,495 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +import re +from typing import Any +from typing import Optional +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy import schema +from sqlalchemy import types as sqltypes + +from .base import alter_table +from .base import AlterColumn +from .base import ColumnDefault +from .base import ColumnName +from .base import ColumnNullable +from .base import ColumnType +from .base import format_column_name +from .base import format_server_default +from .impl import DefaultImpl +from .. 
import util +from ..util import sqla_compat +from ..util.sqla_compat import _is_type_bound +from ..util.sqla_compat import compiles + +if TYPE_CHECKING: + from typing import Literal + + from sqlalchemy.dialects.mysql.base import MySQLDDLCompiler + from sqlalchemy.sql.ddl import DropConstraint + from sqlalchemy.sql.schema import Constraint + from sqlalchemy.sql.type_api import TypeEngine + + from .base import _ServerDefault + + +class MySQLImpl(DefaultImpl): + __dialect__ = "mysql" + + transactional_ddl = False + type_synonyms = DefaultImpl.type_synonyms + ( + {"BOOL", "TINYINT"}, + {"JSON", "LONGTEXT"}, + ) + type_arg_extract = [r"character set ([\w\-_]+)", r"collate ([\w\-_]+)"] + + def alter_column( + self, + table_name: str, + column_name: str, + *, + nullable: Optional[bool] = None, + server_default: Optional[ + Union[_ServerDefault, Literal[False]] + ] = False, + name: Optional[str] = None, + type_: Optional[TypeEngine] = None, + schema: Optional[str] = None, + existing_type: Optional[TypeEngine] = None, + existing_server_default: Optional[_ServerDefault] = None, + existing_nullable: Optional[bool] = None, + autoincrement: Optional[bool] = None, + existing_autoincrement: Optional[bool] = None, + comment: Optional[Union[str, Literal[False]]] = False, + existing_comment: Optional[str] = None, + **kw: Any, + ) -> None: + if sqla_compat._server_default_is_identity( + server_default, existing_server_default + ) or sqla_compat._server_default_is_computed( + server_default, existing_server_default + ): + # modifying computed or identity columns is not supported + # the default will raise + super().alter_column( + table_name, + column_name, + nullable=nullable, + type_=type_, + schema=schema, + existing_type=existing_type, + existing_nullable=existing_nullable, + server_default=server_default, + existing_server_default=existing_server_default, + **kw, + ) + if name is not None or self._is_mysql_allowed_functional_default( + type_ if type_ is not None else existing_type, server_default + ): + self._exec( + MySQLChangeColumn( + table_name, + column_name, + schema=schema, + newname=name if name is not None else column_name, + nullable=( + nullable + if nullable is not None + else ( + existing_nullable + if existing_nullable is not None + else True + ) + ), + type_=type_ if type_ is not None else existing_type, + default=( + server_default + if server_default is not False + else existing_server_default + ), + autoincrement=( + autoincrement + if autoincrement is not None + else existing_autoincrement + ), + comment=( + comment if comment is not False else existing_comment + ), + ) + ) + elif ( + nullable is not None + or type_ is not None + or autoincrement is not None + or comment is not False + ): + self._exec( + MySQLModifyColumn( + table_name, + column_name, + schema=schema, + newname=name if name is not None else column_name, + nullable=( + nullable + if nullable is not None + else ( + existing_nullable + if existing_nullable is not None + else True + ) + ), + type_=type_ if type_ is not None else existing_type, + default=( + server_default + if server_default is not False + else existing_server_default + ), + autoincrement=( + autoincrement + if autoincrement is not None + else existing_autoincrement + ), + comment=( + comment if comment is not False else existing_comment + ), + ) + ) + elif server_default is not False: + self._exec( + MySQLAlterDefault( + table_name, column_name, server_default, schema=schema + ) + ) + + def drop_constraint( + self, + const: Constraint, + **kw: Any, + ) -> 
None: + if isinstance(const, schema.CheckConstraint) and _is_type_bound(const): + return + + super().drop_constraint(const) + + def _is_mysql_allowed_functional_default( + self, + type_: Optional[TypeEngine], + server_default: Optional[Union[_ServerDefault, Literal[False]]], + ) -> bool: + return ( + type_ is not None + and type_._type_affinity is sqltypes.DateTime + and server_default is not None + ) + + def compare_server_default( + self, + inspector_column, + metadata_column, + rendered_metadata_default, + rendered_inspector_default, + ): + # partially a workaround for SQLAlchemy issue #3023; if the + # column were created without "NOT NULL", MySQL may have added + # an implicit default of '0' which we need to skip + # TODO: this is not really covered anymore ? + if ( + metadata_column.type._type_affinity is sqltypes.Integer + and inspector_column.primary_key + and not inspector_column.autoincrement + and not rendered_metadata_default + and rendered_inspector_default == "'0'" + ): + return False + elif ( + rendered_inspector_default + and inspector_column.type._type_affinity is sqltypes.Integer + ): + rendered_inspector_default = ( + re.sub(r"^'|'$", "", rendered_inspector_default) + if rendered_inspector_default is not None + else None + ) + return rendered_inspector_default != rendered_metadata_default + elif ( + rendered_metadata_default + and metadata_column.type._type_affinity is sqltypes.String + ): + metadata_default = re.sub(r"^'|'$", "", rendered_metadata_default) + return rendered_inspector_default != f"'{metadata_default}'" + elif rendered_inspector_default and rendered_metadata_default: + # adjust for "function()" vs. "FUNCTION" as can occur particularly + # for the CURRENT_TIMESTAMP function on newer MariaDB versions + + # SQLAlchemy MySQL dialect bundles ON UPDATE into the server + # default; adjust for this possibly being present. + onupdate_ins = re.match( + r"(.*) (on update.*?)(?:\(\))?$", + rendered_inspector_default.lower(), + ) + onupdate_met = re.match( + r"(.*) (on update.*?)(?:\(\))?$", + rendered_metadata_default.lower(), + ) + + if onupdate_ins: + if not onupdate_met: + return True + elif onupdate_ins.group(2) != onupdate_met.group(2): + return True + + rendered_inspector_default = onupdate_ins.group(1) + rendered_metadata_default = onupdate_met.group(1) + + return re.sub( + r"(.*?)(?:\(\))?$", r"\1", rendered_inspector_default.lower() + ) != re.sub( + r"(.*?)(?:\(\))?$", r"\1", rendered_metadata_default.lower() + ) + else: + return rendered_inspector_default != rendered_metadata_default + + def correct_for_autogen_constraints( + self, + conn_unique_constraints, + conn_indexes, + metadata_unique_constraints, + metadata_indexes, + ): + # TODO: if SQLA 1.0, make use of "duplicates_index" + # metadata + removed = set() + for idx in list(conn_indexes): + if idx.unique: + continue + # MySQL puts implicit indexes on FK columns, even if + # composite and even if MyISAM, so can't check this too easily. + # the name of the index may be the column name or it may + # be the name of the FK constraint. 
+ for col in idx.columns: + if idx.name == col.name: + conn_indexes.remove(idx) + removed.add(idx.name) + break + for fk in col.foreign_keys: + if fk.name == idx.name: + conn_indexes.remove(idx) + removed.add(idx.name) + break + if idx.name in removed: + break + + # then remove indexes from the "metadata_indexes" + # that we've removed from reflected, otherwise they come out + # as adds (see #202) + for idx in list(metadata_indexes): + if idx.name in removed: + metadata_indexes.remove(idx) + + def correct_for_autogen_foreignkeys(self, conn_fks, metadata_fks): + conn_fk_by_sig = { + self._create_reflected_constraint_sig(fk).unnamed_no_options: fk + for fk in conn_fks + } + metadata_fk_by_sig = { + self._create_metadata_constraint_sig(fk).unnamed_no_options: fk + for fk in metadata_fks + } + + for sig in set(conn_fk_by_sig).intersection(metadata_fk_by_sig): + mdfk = metadata_fk_by_sig[sig] + cnfk = conn_fk_by_sig[sig] + # MySQL considers RESTRICT to be the default and doesn't + # report on it. if the model has explicit RESTRICT and + # the conn FK has None, set it to RESTRICT + if ( + mdfk.ondelete is not None + and mdfk.ondelete.lower() == "restrict" + and cnfk.ondelete is None + ): + cnfk.ondelete = "RESTRICT" + if ( + mdfk.onupdate is not None + and mdfk.onupdate.lower() == "restrict" + and cnfk.onupdate is None + ): + cnfk.onupdate = "RESTRICT" + + +class MariaDBImpl(MySQLImpl): + __dialect__ = "mariadb" + + +class MySQLAlterDefault(AlterColumn): + def __init__( + self, + name: str, + column_name: str, + default: Optional[_ServerDefault], + schema: Optional[str] = None, + ) -> None: + super(AlterColumn, self).__init__(name, schema=schema) + self.column_name = column_name + self.default = default + + +class MySQLChangeColumn(AlterColumn): + def __init__( + self, + name: str, + column_name: str, + schema: Optional[str] = None, + newname: Optional[str] = None, + type_: Optional[TypeEngine] = None, + nullable: Optional[bool] = None, + default: Optional[Union[_ServerDefault, Literal[False]]] = False, + autoincrement: Optional[bool] = None, + comment: Optional[Union[str, Literal[False]]] = False, + ) -> None: + super(AlterColumn, self).__init__(name, schema=schema) + self.column_name = column_name + self.nullable = nullable + self.newname = newname + self.default = default + self.autoincrement = autoincrement + self.comment = comment + if type_ is None: + raise util.CommandError( + "All MySQL CHANGE/MODIFY COLUMN operations " + "require the existing type." 
+ ) + + self.type_ = sqltypes.to_instance(type_) + + +class MySQLModifyColumn(MySQLChangeColumn): + pass + + +@compiles(ColumnNullable, "mysql", "mariadb") +@compiles(ColumnName, "mysql", "mariadb") +@compiles(ColumnDefault, "mysql", "mariadb") +@compiles(ColumnType, "mysql", "mariadb") +def _mysql_doesnt_support_individual(element, compiler, **kw): + raise NotImplementedError( + "Individual alter column constructs not supported by MySQL" + ) + + +@compiles(MySQLAlterDefault, "mysql", "mariadb") +def _mysql_alter_default( + element: MySQLAlterDefault, compiler: MySQLDDLCompiler, **kw +) -> str: + return "%s ALTER COLUMN %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_column_name(compiler, element.column_name), + ( + "SET DEFAULT %s" % format_server_default(compiler, element.default) + if element.default is not None + else "DROP DEFAULT" + ), + ) + + +@compiles(MySQLModifyColumn, "mysql", "mariadb") +def _mysql_modify_column( + element: MySQLModifyColumn, compiler: MySQLDDLCompiler, **kw +) -> str: + return "%s MODIFY %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_column_name(compiler, element.column_name), + _mysql_colspec( + compiler, + nullable=element.nullable, + server_default=element.default, + type_=element.type_, + autoincrement=element.autoincrement, + comment=element.comment, + ), + ) + + +@compiles(MySQLChangeColumn, "mysql", "mariadb") +def _mysql_change_column( + element: MySQLChangeColumn, compiler: MySQLDDLCompiler, **kw +) -> str: + return "%s CHANGE %s %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_column_name(compiler, element.column_name), + format_column_name(compiler, element.newname), + _mysql_colspec( + compiler, + nullable=element.nullable, + server_default=element.default, + type_=element.type_, + autoincrement=element.autoincrement, + comment=element.comment, + ), + ) + + +def _mysql_colspec( + compiler: MySQLDDLCompiler, + nullable: Optional[bool], + server_default: Optional[Union[_ServerDefault, Literal[False]]], + type_: TypeEngine, + autoincrement: Optional[bool], + comment: Optional[Union[str, Literal[False]]], +) -> str: + spec = "%s %s" % ( + compiler.dialect.type_compiler.process(type_), + "NULL" if nullable else "NOT NULL", + ) + if autoincrement: + spec += " AUTO_INCREMENT" + if server_default is not False and server_default is not None: + spec += " DEFAULT %s" % format_server_default(compiler, server_default) + if comment: + spec += " COMMENT %s" % compiler.sql_compiler.render_literal_value( + comment, sqltypes.String() + ) + + return spec + + +@compiles(schema.DropConstraint, "mysql", "mariadb") +def _mysql_drop_constraint( + element: DropConstraint, compiler: MySQLDDLCompiler, **kw +) -> str: + """Redefine SQLAlchemy's drop constraint to + raise errors for invalid constraint type.""" + + constraint = element.element + if isinstance( + constraint, + ( + schema.ForeignKeyConstraint, + schema.PrimaryKeyConstraint, + schema.UniqueConstraint, + ), + ): + assert not kw + return compiler.visit_drop_constraint(element) + elif isinstance(constraint, schema.CheckConstraint): + # note that SQLAlchemy as of 1.2 does not yet support + # DROP CONSTRAINT for MySQL/MariaDB, so we implement fully + # here. 
+ if compiler.dialect.is_mariadb: # type: ignore[attr-defined] + return "ALTER TABLE %s DROP CONSTRAINT %s" % ( + compiler.preparer.format_table(constraint.table), + compiler.preparer.format_constraint(constraint), + ) + else: + return "ALTER TABLE %s DROP CHECK %s" % ( + compiler.preparer.format_table(constraint.table), + compiler.preparer.format_constraint(constraint), + ) + else: + raise NotImplementedError( + "No generic 'DROP CONSTRAINT' in MySQL - " + "please specify constraint type" + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/oracle.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/oracle.py new file mode 100644 index 000000000..eac99124f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/oracle.py @@ -0,0 +1,202 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +import re +from typing import Any +from typing import Optional +from typing import TYPE_CHECKING + +from sqlalchemy.sql import sqltypes + +from .base import AddColumn +from .base import alter_table +from .base import ColumnComment +from .base import ColumnDefault +from .base import ColumnName +from .base import ColumnNullable +from .base import ColumnType +from .base import format_column_name +from .base import format_server_default +from .base import format_table_name +from .base import format_type +from .base import IdentityColumnDefault +from .base import RenameTable +from .impl import DefaultImpl +from ..util.sqla_compat import compiles + +if TYPE_CHECKING: + from sqlalchemy.dialects.oracle.base import OracleDDLCompiler + from sqlalchemy.engine.cursor import CursorResult + from sqlalchemy.sql.schema import Column + + +class OracleImpl(DefaultImpl): + __dialect__ = "oracle" + transactional_ddl = False + batch_separator = "/" + command_terminator = "" + type_synonyms = DefaultImpl.type_synonyms + ( + {"VARCHAR", "VARCHAR2"}, + {"BIGINT", "INTEGER", "SMALLINT", "DECIMAL", "NUMERIC", "NUMBER"}, + {"DOUBLE", "FLOAT", "DOUBLE_PRECISION"}, + ) + identity_attrs_ignore = () + + def __init__(self, *arg, **kw) -> None: + super().__init__(*arg, **kw) + self.batch_separator = self.context_opts.get( + "oracle_batch_separator", self.batch_separator + ) + + def _exec(self, construct: Any, *args, **kw) -> Optional[CursorResult]: + result = super()._exec(construct, *args, **kw) + if self.as_sql and self.batch_separator: + self.static_output(self.batch_separator) + return result + + def compare_server_default( + self, + inspector_column, + metadata_column, + rendered_metadata_default, + rendered_inspector_default, + ): + if rendered_metadata_default is not None: + rendered_metadata_default = re.sub( + r"^\((.+)\)$", r"\1", rendered_metadata_default + ) + + rendered_metadata_default = re.sub( + r"^\"?'(.+)'\"?$", r"\1", rendered_metadata_default + ) + + if rendered_inspector_default is not None: + rendered_inspector_default = re.sub( + r"^\((.+)\)$", r"\1", rendered_inspector_default + ) + + rendered_inspector_default = re.sub( + r"^\"?'(.+)'\"?$", r"\1", rendered_inspector_default + ) + + rendered_inspector_default = rendered_inspector_default.strip() + return rendered_inspector_default != rendered_metadata_default + + def emit_begin(self) -> None: + self._exec("SET TRANSACTION READ WRITE") + + def emit_commit(self) -> None: + self._exec("COMMIT") + + +@compiles(AddColumn, "oracle") +def visit_add_column( + element: AddColumn, compiler: 
OracleDDLCompiler, **kw +) -> str: + return "%s %s" % ( + alter_table(compiler, element.table_name, element.schema), + add_column(compiler, element.column, **kw), + ) + + +@compiles(ColumnNullable, "oracle") +def visit_column_nullable( + element: ColumnNullable, compiler: OracleDDLCompiler, **kw +) -> str: + return "%s %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + "NULL" if element.nullable else "NOT NULL", + ) + + +@compiles(ColumnType, "oracle") +def visit_column_type( + element: ColumnType, compiler: OracleDDLCompiler, **kw +) -> str: + return "%s %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + "%s" % format_type(compiler, element.type_), + ) + + +@compiles(ColumnName, "oracle") +def visit_column_name( + element: ColumnName, compiler: OracleDDLCompiler, **kw +) -> str: + return "%s RENAME COLUMN %s TO %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_column_name(compiler, element.column_name), + format_column_name(compiler, element.newname), + ) + + +@compiles(ColumnDefault, "oracle") +def visit_column_default( + element: ColumnDefault, compiler: OracleDDLCompiler, **kw +) -> str: + return "%s %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + ( + "DEFAULT %s" % format_server_default(compiler, element.default) + if element.default is not None + else "DEFAULT NULL" + ), + ) + + +@compiles(ColumnComment, "oracle") +def visit_column_comment( + element: ColumnComment, compiler: OracleDDLCompiler, **kw +) -> str: + ddl = "COMMENT ON COLUMN {table_name}.{column_name} IS {comment}" + + comment = compiler.sql_compiler.render_literal_value( + (element.comment if element.comment is not None else ""), + sqltypes.String(), + ) + + return ddl.format( + table_name=element.table_name, + column_name=element.column_name, + comment=comment, + ) + + +@compiles(RenameTable, "oracle") +def visit_rename_table( + element: RenameTable, compiler: OracleDDLCompiler, **kw +) -> str: + return "%s RENAME TO %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_table_name(compiler, element.new_table_name, None), + ) + + +def alter_column(compiler: OracleDDLCompiler, name: str) -> str: + return "MODIFY %s" % format_column_name(compiler, name) + + +def add_column(compiler: OracleDDLCompiler, column: Column[Any], **kw) -> str: + return "ADD %s" % compiler.get_column_specification(column, **kw) + + +@compiles(IdentityColumnDefault, "oracle") +def visit_identity_column( + element: IdentityColumnDefault, compiler: OracleDDLCompiler, **kw +): + text = "%s %s " % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + ) + if element.default is None: + # drop identity + text += "DROP IDENTITY" + return text + else: + text += compiler.visit_identity_column(element.default) + return text diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/postgresql.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/postgresql.py new file mode 100644 index 000000000..90ecf70c1 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/postgresql.py @@ -0,0 +1,854 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +import logging +import re +from 
typing import Any +from typing import cast +from typing import Dict +from typing import List +from typing import Optional +from typing import Sequence +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy import Column +from sqlalchemy import Float +from sqlalchemy import Identity +from sqlalchemy import literal_column +from sqlalchemy import Numeric +from sqlalchemy import select +from sqlalchemy import text +from sqlalchemy import types as sqltypes +from sqlalchemy.dialects.postgresql import BIGINT +from sqlalchemy.dialects.postgresql import ExcludeConstraint +from sqlalchemy.dialects.postgresql import INTEGER +from sqlalchemy.schema import CreateIndex +from sqlalchemy.sql.elements import ColumnClause +from sqlalchemy.sql.elements import TextClause +from sqlalchemy.sql.functions import FunctionElement +from sqlalchemy.types import NULLTYPE + +from .base import alter_column +from .base import alter_table +from .base import AlterColumn +from .base import ColumnComment +from .base import format_column_name +from .base import format_table_name +from .base import format_type +from .base import IdentityColumnDefault +from .base import RenameTable +from .impl import ComparisonResult +from .impl import DefaultImpl +from .. import util +from ..autogenerate import render +from ..operations import ops +from ..operations import schemaobj +from ..operations.base import BatchOperations +from ..operations.base import Operations +from ..util import sqla_compat +from ..util.sqla_compat import compiles + + +if TYPE_CHECKING: + from typing import Literal + + from sqlalchemy import Index + from sqlalchemy import UniqueConstraint + from sqlalchemy.dialects.postgresql.array import ARRAY + from sqlalchemy.dialects.postgresql.base import PGDDLCompiler + from sqlalchemy.dialects.postgresql.hstore import HSTORE + from sqlalchemy.dialects.postgresql.json import JSON + from sqlalchemy.dialects.postgresql.json import JSONB + from sqlalchemy.sql.elements import ClauseElement + from sqlalchemy.sql.elements import ColumnElement + from sqlalchemy.sql.elements import quoted_name + from sqlalchemy.sql.schema import MetaData + from sqlalchemy.sql.schema import Table + from sqlalchemy.sql.type_api import TypeEngine + + from .base import _ServerDefault + from ..autogenerate.api import AutogenContext + from ..autogenerate.render import _f_name + from ..runtime.migration import MigrationContext + + +log = logging.getLogger(__name__) + + +class PostgresqlImpl(DefaultImpl): + __dialect__ = "postgresql" + transactional_ddl = True + type_synonyms = DefaultImpl.type_synonyms + ( + {"FLOAT", "DOUBLE PRECISION"}, + ) + + def create_index(self, index: Index, **kw: Any) -> None: + # this likely defaults to None if not present, so get() + # should normally not return the default value. 
being + # defensive in any case + postgresql_include = index.kwargs.get("postgresql_include", None) or () + for col in postgresql_include: + if col not in index.table.c: # type: ignore[union-attr] + index.table.append_column( # type: ignore[union-attr] + Column(col, sqltypes.NullType) + ) + self._exec(CreateIndex(index, **kw)) + + def prep_table_for_batch(self, batch_impl, table): + for constraint in table.constraints: + if ( + constraint.name is not None + and constraint.name in batch_impl.named_constraints + ): + self.drop_constraint(constraint) + + def compare_server_default( + self, + inspector_column, + metadata_column, + rendered_metadata_default, + rendered_inspector_default, + ): + # don't do defaults for SERIAL columns + if ( + metadata_column.primary_key + and metadata_column is metadata_column.table._autoincrement_column + ): + return False + + conn_col_default = rendered_inspector_default + + defaults_equal = conn_col_default == rendered_metadata_default + if defaults_equal: + return False + + if None in ( + conn_col_default, + rendered_metadata_default, + metadata_column.server_default, + ): + return not defaults_equal + + metadata_default = metadata_column.server_default.arg + + if isinstance(metadata_default, str): + if not isinstance(inspector_column.type, (Numeric, Float)): + metadata_default = re.sub(r"^'|'$", "", metadata_default) + metadata_default = f"'{metadata_default}'" + + metadata_default = literal_column(metadata_default) + + # run a real compare against the server + conn = self.connection + assert conn is not None + return not conn.scalar( + select(literal_column(conn_col_default) == metadata_default) + ) + + def alter_column( + self, + table_name: str, + column_name: str, + *, + nullable: Optional[bool] = None, + server_default: Optional[ + Union[_ServerDefault, Literal[False]] + ] = False, + name: Optional[str] = None, + type_: Optional[TypeEngine] = None, + schema: Optional[str] = None, + autoincrement: Optional[bool] = None, + existing_type: Optional[TypeEngine] = None, + existing_server_default: Optional[_ServerDefault] = None, + existing_nullable: Optional[bool] = None, + existing_autoincrement: Optional[bool] = None, + **kw: Any, + ) -> None: + using = kw.pop("postgresql_using", None) + + if using is not None and type_ is None: + raise util.CommandError( + "postgresql_using must be used with the type_ parameter" + ) + + if type_ is not None: + self._exec( + PostgresqlColumnType( + table_name, + column_name, + type_, + schema=schema, + using=using, + existing_type=existing_type, + existing_server_default=existing_server_default, + existing_nullable=existing_nullable, + ) + ) + + super().alter_column( + table_name, + column_name, + nullable=nullable, + server_default=server_default, + name=name, + schema=schema, + autoincrement=autoincrement, + existing_type=existing_type, + existing_server_default=existing_server_default, + existing_nullable=existing_nullable, + existing_autoincrement=existing_autoincrement, + **kw, + ) + + def autogen_column_reflect(self, inspector, table, column_info): + if column_info.get("default") and isinstance( + column_info["type"], (INTEGER, BIGINT) + ): + seq_match = re.match( + r"nextval\('(.+?)'::regclass\)", column_info["default"] + ) + if seq_match: + info = sqla_compat._exec_on_inspector( + inspector, + text( + "select c.relname, a.attname " + "from pg_class as c join " + "pg_depend d on d.objid=c.oid and " + "d.classid='pg_class'::regclass and " + "d.refclassid='pg_class'::regclass " + "join pg_class t on t.oid=d.refobjid " 
+ "join pg_attribute a on a.attrelid=t.oid and " + "a.attnum=d.refobjsubid " + "where c.relkind='S' and " + "c.oid=cast(:seqname as regclass)" + ), + seqname=seq_match.group(1), + ).first() + if info: + seqname, colname = info + if colname == column_info["name"]: + log.info( + "Detected sequence named '%s' as " + "owned by integer column '%s(%s)', " + "assuming SERIAL and omitting", + seqname, + table.name, + colname, + ) + # sequence, and the owner is this column, + # its a SERIAL - whack it! + del column_info["default"] + + def correct_for_autogen_constraints( + self, + conn_unique_constraints, + conn_indexes, + metadata_unique_constraints, + metadata_indexes, + ): + doubled_constraints = { + index + for index in conn_indexes + if index.info.get("duplicates_constraint") + } + + for ix in doubled_constraints: + conn_indexes.remove(ix) + + if not sqla_compat.sqla_2: + self._skip_functional_indexes(metadata_indexes, conn_indexes) + + # pg behavior regarding modifiers + # | # | compiled sql | returned sql | regexp. group is removed | + # | - | ---------------- | -----------------| ------------------------ | + # | 1 | nulls first | nulls first | - | + # | 2 | nulls last | | (? str: + expr = expr.lower().replace('"', "").replace("'", "") + if index.table is not None: + # should not be needed, since include_table=False is in compile + expr = expr.replace(f"{index.table.name.lower()}.", "") + + if "::" in expr: + # strip :: cast. types can have spaces in them + expr = re.sub(r"(::[\w ]+\w)", "", expr) + + while expr and expr[0] == "(" and expr[-1] == ")": + expr = expr[1:-1] + + # NOTE: when parsing the connection expression this cleanup could + # be skipped + for rs in self._default_modifiers_re: + if match := rs.search(expr): + start, end = match.span(1) + expr = expr[:start] + expr[end:] + break + + while expr and expr[0] == "(" and expr[-1] == ")": + expr = expr[1:-1] + + # strip casts + cast_re = re.compile(r"cast\s*\(") + if cast_re.match(expr): + expr = cast_re.sub("", expr) + # remove the as type + expr = re.sub(r"as\s+[^)]+\)", "", expr) + # remove spaces + expr = expr.replace(" ", "") + return expr + + def _dialect_options( + self, item: Union[Index, UniqueConstraint] + ) -> Tuple[Any, ...]: + # only the positive case is returned by sqlalchemy reflection so + # None and False are threated the same + if item.dialect_kwargs.get("postgresql_nulls_not_distinct"): + return ("nulls_not_distinct",) + return () + + def compare_indexes( + self, + metadata_index: Index, + reflected_index: Index, + ) -> ComparisonResult: + msg = [] + unique_msg = self._compare_index_unique( + metadata_index, reflected_index + ) + if unique_msg: + msg.append(unique_msg) + m_exprs = metadata_index.expressions + r_exprs = reflected_index.expressions + if len(m_exprs) != len(r_exprs): + msg.append(f"expression number {len(r_exprs)} to {len(m_exprs)}") + if msg: + # no point going further, return early + return ComparisonResult.Different(msg) + skip = [] + for pos, (m_e, r_e) in enumerate(zip(m_exprs, r_exprs), 1): + m_compile = self._compile_element(m_e) + m_text = self._cleanup_index_expr(metadata_index, m_compile) + # print(f"META ORIG: {m_compile!r} CLEANUP: {m_text!r}") + r_compile = self._compile_element(r_e) + r_text = self._cleanup_index_expr(metadata_index, r_compile) + # print(f"CONN ORIG: {r_compile!r} CLEANUP: {r_text!r}") + if m_text == r_text: + continue # expressions these are equal + elif m_compile.strip().endswith("_ops") and ( + " " in m_compile or ")" in m_compile # is an expression + ): + 
skip.append( + f"expression #{pos} {m_compile!r} detected " + "as including operator clause." + ) + util.warn( + f"Expression #{pos} {m_compile!r} in index " + f"{reflected_index.name!r} detected to include " + "an operator clause. Expression compare cannot proceed. " + "Please move the operator clause to the " + "``postgresql_ops`` dict to enable proper compare " + "of the index expressions: " + "https://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#operator-classes", # noqa: E501 + ) + else: + msg.append(f"expression #{pos} {r_compile!r} to {m_compile!r}") + + m_options = self._dialect_options(metadata_index) + r_options = self._dialect_options(reflected_index) + if m_options != r_options: + msg.extend(f"options {r_options} to {m_options}") + + if msg: + return ComparisonResult.Different(msg) + elif skip: + # if there are other changes detected don't skip the index + return ComparisonResult.Skip(skip) + else: + return ComparisonResult.Equal() + + def compare_unique_constraint( + self, + metadata_constraint: UniqueConstraint, + reflected_constraint: UniqueConstraint, + ) -> ComparisonResult: + metadata_tup = self._create_metadata_constraint_sig( + metadata_constraint + ) + reflected_tup = self._create_reflected_constraint_sig( + reflected_constraint + ) + + meta_sig = metadata_tup.unnamed + conn_sig = reflected_tup.unnamed + if conn_sig != meta_sig: + return ComparisonResult.Different( + f"expression {conn_sig} to {meta_sig}" + ) + + metadata_do = self._dialect_options(metadata_tup.const) + conn_do = self._dialect_options(reflected_tup.const) + if metadata_do != conn_do: + return ComparisonResult.Different( + f"expression {conn_do} to {metadata_do}" + ) + + return ComparisonResult.Equal() + + def adjust_reflected_dialect_options( + self, reflected_options: Dict[str, Any], kind: str + ) -> Dict[str, Any]: + options: Dict[str, Any] + options = reflected_options.get("dialect_options", {}).copy() + if not options.get("postgresql_include"): + options.pop("postgresql_include", None) + return options + + def _compile_element(self, element: Union[ClauseElement, str]) -> str: + if isinstance(element, str): + return element + return element.compile( + dialect=self.dialect, + compile_kwargs={"literal_binds": True, "include_table": False}, + ).string + + def render_ddl_sql_expr( + self, + expr: ClauseElement, + is_server_default: bool = False, + is_index: bool = False, + **kw: Any, + ) -> str: + """Render a SQL expression that is typically a server default, + index expression, etc. 
+ + """ + + # apply self_group to index expressions; + # see https://github.com/sqlalchemy/sqlalchemy/blob/ + # 82fa95cfce070fab401d020c6e6e4a6a96cc2578/ + # lib/sqlalchemy/dialects/postgresql/base.py#L2261 + if is_index and not isinstance(expr, ColumnClause): + expr = expr.self_group() + + return super().render_ddl_sql_expr( + expr, is_server_default=is_server_default, is_index=is_index, **kw + ) + + def render_type( + self, type_: TypeEngine, autogen_context: AutogenContext + ) -> Union[str, Literal[False]]: + mod = type(type_).__module__ + if not mod.startswith("sqlalchemy.dialects.postgresql"): + return False + + if hasattr(self, "_render_%s_type" % type_.__visit_name__): + meth = getattr(self, "_render_%s_type" % type_.__visit_name__) + return meth(type_, autogen_context) + + return False + + def _render_HSTORE_type( + self, type_: HSTORE, autogen_context: AutogenContext + ) -> str: + return cast( + str, + render._render_type_w_subtype( + type_, autogen_context, "text_type", r"(.+?\(.*text_type=)" + ), + ) + + def _render_ARRAY_type( + self, type_: ARRAY, autogen_context: AutogenContext + ) -> str: + return cast( + str, + render._render_type_w_subtype( + type_, autogen_context, "item_type", r"(.+?\()" + ), + ) + + def _render_JSON_type( + self, type_: JSON, autogen_context: AutogenContext + ) -> str: + return cast( + str, + render._render_type_w_subtype( + type_, autogen_context, "astext_type", r"(.+?\(.*astext_type=)" + ), + ) + + def _render_JSONB_type( + self, type_: JSONB, autogen_context: AutogenContext + ) -> str: + return cast( + str, + render._render_type_w_subtype( + type_, autogen_context, "astext_type", r"(.+?\(.*astext_type=)" + ), + ) + + +class PostgresqlColumnType(AlterColumn): + def __init__( + self, name: str, column_name: str, type_: TypeEngine, **kw + ) -> None: + using = kw.pop("using", None) + super().__init__(name, column_name, **kw) + self.type_ = sqltypes.to_instance(type_) + self.using = using + + +@compiles(RenameTable, "postgresql") +def visit_rename_table( + element: RenameTable, compiler: PGDDLCompiler, **kw +) -> str: + return "%s RENAME TO %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_table_name(compiler, element.new_table_name, None), + ) + + +@compiles(PostgresqlColumnType, "postgresql") +def visit_column_type( + element: PostgresqlColumnType, compiler: PGDDLCompiler, **kw +) -> str: + return "%s %s %s %s" % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + "TYPE %s" % format_type(compiler, element.type_), + "USING %s" % element.using if element.using else "", + ) + + +@compiles(ColumnComment, "postgresql") +def visit_column_comment( + element: ColumnComment, compiler: PGDDLCompiler, **kw +) -> str: + ddl = "COMMENT ON COLUMN {table_name}.{column_name} IS {comment}" + comment = ( + compiler.sql_compiler.render_literal_value( + element.comment, sqltypes.String() + ) + if element.comment is not None + else "NULL" + ) + + return ddl.format( + table_name=format_table_name( + compiler, element.table_name, element.schema + ), + column_name=format_column_name(compiler, element.column_name), + comment=comment, + ) + + +@compiles(IdentityColumnDefault, "postgresql") +def visit_identity_column( + element: IdentityColumnDefault, compiler: PGDDLCompiler, **kw +): + text = "%s %s " % ( + alter_table(compiler, element.table_name, element.schema), + alter_column(compiler, element.column_name), + ) + if element.default is None: + # drop identity + text += "DROP IDENTITY" + 
return text + elif element.existing_server_default is None: + # add identity options + text += "ADD " + text += compiler.visit_identity_column(element.default) + return text + else: + # alter identity + diff, _, _ = element.impl._compare_identity_default( + element.default, element.existing_server_default + ) + identity = element.default + for attr in sorted(diff): + if attr == "always": + text += "SET GENERATED %s " % ( + "ALWAYS" if identity.always else "BY DEFAULT" + ) + else: + text += "SET %s " % compiler.get_identity_options( + Identity(**{attr: getattr(identity, attr)}) + ) + return text + + +@Operations.register_operation("create_exclude_constraint") +@BatchOperations.register_operation( + "create_exclude_constraint", "batch_create_exclude_constraint" +) +@ops.AddConstraintOp.register_add_constraint("exclude_constraint") +class CreateExcludeConstraintOp(ops.AddConstraintOp): + """Represent a create exclude constraint operation.""" + + constraint_type = "exclude" + + def __init__( + self, + constraint_name: sqla_compat._ConstraintName, + table_name: Union[str, quoted_name], + elements: Union[ + Sequence[Tuple[str, str]], + Sequence[Tuple[ColumnClause[Any], str]], + ], + where: Optional[Union[ColumnElement[bool], str]] = None, + schema: Optional[str] = None, + _orig_constraint: Optional[ExcludeConstraint] = None, + **kw, + ) -> None: + self.constraint_name = constraint_name + self.table_name = table_name + self.elements = elements + self.where = where + self.schema = schema + self._orig_constraint = _orig_constraint + self.kw = kw + + @classmethod + def from_constraint( # type:ignore[override] + cls, constraint: ExcludeConstraint + ) -> CreateExcludeConstraintOp: + constraint_table = sqla_compat._table_for_constraint(constraint) + return cls( + constraint.name, + constraint_table.name, + [ # type: ignore + (expr, op) for expr, name, op in constraint._render_exprs + ], + where=cast("ColumnElement[bool] | None", constraint.where), + schema=constraint_table.schema, + _orig_constraint=constraint, + deferrable=constraint.deferrable, + initially=constraint.initially, + using=constraint.using, + ) + + def to_constraint( + self, migration_context: Optional[MigrationContext] = None + ) -> ExcludeConstraint: + if self._orig_constraint is not None: + return self._orig_constraint + schema_obj = schemaobj.SchemaObjects(migration_context) + t = schema_obj.table(self.table_name, schema=self.schema) + excl = ExcludeConstraint( + *self.elements, + name=self.constraint_name, + where=self.where, + **self.kw, + ) + for ( + expr, + name, + oper, + ) in excl._render_exprs: + t.append_column(Column(name, NULLTYPE)) + t.append_constraint(excl) + return excl + + @classmethod + def create_exclude_constraint( + cls, + operations: Operations, + constraint_name: str, + table_name: str, + *elements: Any, + **kw: Any, + ) -> Optional[Table]: + """Issue an alter to create an EXCLUDE constraint using the + current migration context. + + .. note:: This method is Postgresql specific, and additionally + requires at least SQLAlchemy 1.0. + + e.g.:: + + from alembic import op + + op.create_exclude_constraint( + "user_excl", + "user", + ("period", "&&"), + ("group", "="), + where=("group != 'some group'"), + ) + + Note that the expressions work the same way as that of + the ``ExcludeConstraint`` object itself; if plain strings are + passed, quoting rules must be applied manually. + + :param name: Name of the constraint. + :param table_name: String name of the source table. + :param elements: exclude conditions. 
+ :param where: SQL expression or SQL string with optional WHERE + clause. + :param deferrable: optional bool. If set, emit DEFERRABLE or + NOT DEFERRABLE when issuing DDL for this constraint. + :param initially: optional string. If set, emit INITIALLY + when issuing DDL for this constraint. + :param schema: Optional schema name to operate within. + + """ + op = cls(constraint_name, table_name, elements, **kw) + return operations.invoke(op) + + @classmethod + def batch_create_exclude_constraint( + cls, + operations: BatchOperations, + constraint_name: str, + *elements: Any, + **kw: Any, + ) -> Optional[Table]: + """Issue a "create exclude constraint" instruction using the + current batch migration context. + + .. note:: This method is Postgresql specific, and additionally + requires at least SQLAlchemy 1.0. + + .. seealso:: + + :meth:`.Operations.create_exclude_constraint` + + """ + kw["schema"] = operations.impl.schema + op = cls(constraint_name, operations.impl.table_name, elements, **kw) + return operations.invoke(op) + + +@render.renderers.dispatch_for(CreateExcludeConstraintOp) +def _add_exclude_constraint( + autogen_context: AutogenContext, op: CreateExcludeConstraintOp +) -> str: + return _exclude_constraint(op.to_constraint(), autogen_context, alter=True) + + +@render._constraint_renderers.dispatch_for(ExcludeConstraint) +def _render_inline_exclude_constraint( + constraint: ExcludeConstraint, + autogen_context: AutogenContext, + namespace_metadata: MetaData, +) -> str: + rendered = render._user_defined_render( + "exclude", constraint, autogen_context + ) + if rendered is not False: + return rendered + + return _exclude_constraint(constraint, autogen_context, False) + + +def _postgresql_autogenerate_prefix(autogen_context: AutogenContext) -> str: + imports = autogen_context.imports + if imports is not None: + imports.add("from sqlalchemy.dialects import postgresql") + return "postgresql." 
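For orientation, here is a minimal sketch of the metadata-side declaration that the exclude-constraint machinery above consumes. The table and column names (`room_booking`, `room`, `period`) are hypothetical; when autogenerate renders a new table containing such a constraint, `_render_inline_exclude_constraint` emits the corresponding `postgresql.ExcludeConstraint(...)` line inside the generated `create_table()` call:

```python
# Hypothetical model: no two rows may book the same room for
# overlapping time ranges. The ExcludeConstraint elements are
# (expression, operator) pairs, mirroring the (expr, op) tuples
# that CreateExcludeConstraintOp.from_constraint() extracts from
# constraint._render_exprs.
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE

metadata = MetaData()

room_booking = Table(
    "room_booking",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("room", Integer, nullable=False),
    Column("period", TSRANGE, nullable=False),
    ExcludeConstraint(("room", "="), ("period", "&&"), using="gist"),
)
```

The imperative equivalent inside a migration is the `op.create_exclude_constraint("room_booking_excl", "room_booking", ("room", "="), ("period", "&&"), using="gist")` call documented above, which `CreateExcludeConstraintOp.to_constraint()` turns back into an `ExcludeConstraint` against an ad-hoc `Table`.
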
+ + +def _exclude_constraint( + constraint: ExcludeConstraint, + autogen_context: AutogenContext, + alter: bool, +) -> str: + opts: List[Tuple[str, Union[quoted_name, str, _f_name, None]]] = [] + + has_batch = autogen_context._has_batch + + if constraint.deferrable: + opts.append(("deferrable", str(constraint.deferrable))) + if constraint.initially: + opts.append(("initially", str(constraint.initially))) + if constraint.using: + opts.append(("using", str(constraint.using))) + if not has_batch and alter and constraint.table.schema: + opts.append(("schema", render._ident(constraint.table.schema))) + if not alter and constraint.name: + opts.append( + ("name", render._render_gen_name(autogen_context, constraint.name)) + ) + + def do_expr_where_opts(): + args = [ + "(%s, %r)" + % ( + _render_potential_column( + sqltext, # type:ignore[arg-type] + autogen_context, + ), + opstring, + ) + for sqltext, name, opstring in constraint._render_exprs + ] + if constraint.where is not None: + args.append( + "where=%s" + % render._render_potential_expr( + constraint.where, autogen_context + ) + ) + args.extend(["%s=%r" % (k, v) for k, v in opts]) + return args + + if alter: + args = [ + repr(render._render_gen_name(autogen_context, constraint.name)) + ] + if not has_batch: + args += [repr(render._ident(constraint.table.name))] + args.extend(do_expr_where_opts()) + return "%(prefix)screate_exclude_constraint(%(args)s)" % { + "prefix": render._alembic_autogenerate_prefix(autogen_context), + "args": ", ".join(args), + } + else: + args = do_expr_where_opts() + return "%(prefix)sExcludeConstraint(%(args)s)" % { + "prefix": _postgresql_autogenerate_prefix(autogen_context), + "args": ", ".join(args), + } + + +def _render_potential_column( + value: Union[ + ColumnClause[Any], Column[Any], TextClause, FunctionElement[Any] + ], + autogen_context: AutogenContext, +) -> str: + if isinstance(value, ColumnClause): + if value.is_literal: + # like literal_column("int8range(from, to)") in ExcludeConstraint + template = "%(prefix)sliteral_column(%(name)r)" + else: + template = "%(prefix)scolumn(%(name)r)" + + return template % { + "prefix": render._sqlalchemy_autogenerate_prefix(autogen_context), + "name": value.name, + } + else: + return render._render_potential_expr( + value, + autogen_context, + wrap_in_element=isinstance(value, (TextClause, FunctionElement)), + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/sqlite.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/sqlite.py new file mode 100644 index 000000000..5f141330f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/ddl/sqlite.py @@ -0,0 +1,237 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +import re +from typing import Any +from typing import Dict +from typing import Optional +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy import cast +from sqlalchemy import Computed +from sqlalchemy import JSON +from sqlalchemy import schema +from sqlalchemy import sql + +from .base import alter_table +from .base import ColumnName +from .base import format_column_name +from .base import format_table_name +from .base import RenameTable +from .impl import DefaultImpl +from .. 
import util +from ..util.sqla_compat import compiles + +if TYPE_CHECKING: + from sqlalchemy.engine.reflection import Inspector + from sqlalchemy.sql.compiler import DDLCompiler + from sqlalchemy.sql.elements import Cast + from sqlalchemy.sql.elements import ClauseElement + from sqlalchemy.sql.schema import Column + from sqlalchemy.sql.schema import Constraint + from sqlalchemy.sql.schema import Table + from sqlalchemy.sql.type_api import TypeEngine + + from ..operations.batch import BatchOperationsImpl + + +class SQLiteImpl(DefaultImpl): + __dialect__ = "sqlite" + + transactional_ddl = False + """SQLite supports transactional DDL, but pysqlite does not: + see: http://bugs.python.org/issue10740 + """ + + def requires_recreate_in_batch( + self, batch_op: BatchOperationsImpl + ) -> bool: + """Return True if the given :class:`.BatchOperationsImpl` + would need the table to be recreated and copied in order to + proceed. + + Normally, only returns True on SQLite when operations other + than add_column are present. + + """ + for op in batch_op.batch: + if op[0] == "add_column": + col = op[1][1] + if isinstance( + col.server_default, schema.DefaultClause + ) and isinstance(col.server_default.arg, sql.ClauseElement): + return True + elif ( + isinstance(col.server_default, Computed) + and col.server_default.persisted + ): + return True + elif op[0] not in ("create_index", "drop_index"): + return True + else: + return False + + def add_constraint(self, const: Constraint): + # attempt to distinguish between an + # auto-gen constraint and an explicit one + if const._create_rule is None: + raise NotImplementedError( + "No support for ALTER of constraints in SQLite dialect. " + "Please refer to the batch mode feature which allows for " + "SQLite migrations using a copy-and-move strategy." + ) + elif const._create_rule(self): + util.warn( + "Skipping unsupported ALTER for " + "creation of implicit constraint. " + "Please refer to the batch mode feature which allows for " + "SQLite migrations using a copy-and-move strategy." + ) + + def drop_constraint(self, const: Constraint, **kw: Any): + if const._create_rule is None: + raise NotImplementedError( + "No support for ALTER of constraints in SQLite dialect. " + "Please refer to the batch mode feature which allows for " + "SQLite migrations using a copy-and-move strategy." + ) + + def compare_server_default( + self, + inspector_column: Column[Any], + metadata_column: Column[Any], + rendered_metadata_default: Optional[str], + rendered_inspector_default: Optional[str], + ) -> bool: + if rendered_metadata_default is not None: + rendered_metadata_default = re.sub( + r"^\((.+)\)$", r"\1", rendered_metadata_default + ) + + rendered_metadata_default = re.sub( + r"^\"?'(.+)'\"?$", r"\1", rendered_metadata_default + ) + + if rendered_inspector_default is not None: + rendered_inspector_default = re.sub( + r"^\((.+)\)$", r"\1", rendered_inspector_default + ) + + rendered_inspector_default = re.sub( + r"^\"?'(.+)'\"?$", r"\1", rendered_inspector_default + ) + + return rendered_inspector_default != rendered_metadata_default + + def _guess_if_default_is_unparenthesized_sql_expr( + self, expr: Optional[str] + ) -> bool: + """Determine if a server default is a SQL expression or a constant. + + There are too many assertions that expect server defaults to round-trip + identically without parenthesis added so we will add parens only in + very specific cases. 
+ + """ + if not expr: + return False + elif re.match(r"^[0-9\.]$", expr): + return False + elif re.match(r"^'.+'$", expr): + return False + elif re.match(r"^\(.+\)$", expr): + return False + else: + return True + + def autogen_column_reflect( + self, + inspector: Inspector, + table: Table, + column_info: Dict[str, Any], + ) -> None: + # SQLite expression defaults require parenthesis when sent + # as DDL + if self._guess_if_default_is_unparenthesized_sql_expr( + column_info.get("default", None) + ): + column_info["default"] = "(%s)" % (column_info["default"],) + + def render_ddl_sql_expr( + self, expr: ClauseElement, is_server_default: bool = False, **kw + ) -> str: + # SQLite expression defaults require parenthesis when sent + # as DDL + str_expr = super().render_ddl_sql_expr( + expr, is_server_default=is_server_default, **kw + ) + + if ( + is_server_default + and self._guess_if_default_is_unparenthesized_sql_expr(str_expr) + ): + str_expr = "(%s)" % (str_expr,) + return str_expr + + def cast_for_batch_migrate( + self, + existing: Column[Any], + existing_transfer: Dict[str, Union[TypeEngine, Cast]], + new_type: TypeEngine, + ) -> None: + if ( + existing.type._type_affinity is not new_type._type_affinity + and not isinstance(new_type, JSON) + ): + existing_transfer["expr"] = cast( + existing_transfer["expr"], new_type + ) + + def correct_for_autogen_constraints( + self, + conn_unique_constraints, + conn_indexes, + metadata_unique_constraints, + metadata_indexes, + ): + self._skip_functional_indexes(metadata_indexes, conn_indexes) + + +@compiles(RenameTable, "sqlite") +def visit_rename_table( + element: RenameTable, compiler: DDLCompiler, **kw +) -> str: + return "%s RENAME TO %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_table_name(compiler, element.new_table_name, None), + ) + + +@compiles(ColumnName, "sqlite") +def visit_column_name(element: ColumnName, compiler: DDLCompiler, **kw) -> str: + return "%s RENAME COLUMN %s TO %s" % ( + alter_table(compiler, element.table_name, element.schema), + format_column_name(compiler, element.column_name), + format_column_name(compiler, element.newname), + ) + + +# @compiles(AddColumn, 'sqlite') +# def visit_add_column(element, compiler, **kw): +# return "%s %s" % ( +# alter_table(compiler, element.table_name, element.schema), +# add_column(compiler, element.column, **kw) +# ) + + +# def add_column(compiler, column, **kw): +# text = "ADD COLUMN %s" % compiler.get_column_specification(column, **kw) +# need to modify SQLAlchemy so that the CHECK associated with a Boolean +# or Enum gets placed as part of the column constraints, not the Table +# see ticket 98 +# for const in column.constraints: +# text += compiler.process(AddConstraint(const)) +# return text diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/environment.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/environment.py new file mode 100644 index 000000000..adfc93eb0 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/environment.py @@ -0,0 +1 @@ +from .runtime.environment import * # noqa diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/migration.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/migration.py new file mode 100644 index 000000000..02626e2cf --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/migration.py @@ -0,0 +1 @@ +from .runtime.migration import * # noqa diff --git 
a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/op.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/op.py new file mode 100644 index 000000000..f3f5fac0c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/op.py @@ -0,0 +1,5 @@ +from .operations.base import Operations + +# create proxy functions for +# each method on the Operations class. +Operations.create_module_class_proxy(globals(), locals()) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/op.pyi b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/op.pyi new file mode 100644 index 000000000..8cdf75907 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/op.pyi @@ -0,0 +1,1356 @@ +# ### this file stubs are generated by tools/write_pyi.py - do not edit ### +# ### imports are manually managed +from __future__ import annotations + +from contextlib import contextmanager +from typing import Any +from typing import Awaitable +from typing import Callable +from typing import Dict +from typing import Iterator +from typing import List +from typing import Literal +from typing import Mapping +from typing import Optional +from typing import overload +from typing import Sequence +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +if TYPE_CHECKING: + from sqlalchemy.engine import Connection + from sqlalchemy.sql import Executable + from sqlalchemy.sql.elements import ColumnElement + from sqlalchemy.sql.elements import conv + from sqlalchemy.sql.elements import TextClause + from sqlalchemy.sql.expression import TableClause + from sqlalchemy.sql.schema import Column + from sqlalchemy.sql.schema import Computed + from sqlalchemy.sql.schema import Identity + from sqlalchemy.sql.schema import SchemaItem + from sqlalchemy.sql.schema import Table + from sqlalchemy.sql.type_api import TypeEngine + from sqlalchemy.util import immutabledict + + from .operations.base import BatchOperations + from .operations.ops import AddColumnOp + from .operations.ops import AddConstraintOp + from .operations.ops import AlterColumnOp + from .operations.ops import AlterTableOp + from .operations.ops import BulkInsertOp + from .operations.ops import CreateIndexOp + from .operations.ops import CreateTableCommentOp + from .operations.ops import CreateTableOp + from .operations.ops import DropColumnOp + from .operations.ops import DropConstraintOp + from .operations.ops import DropIndexOp + from .operations.ops import DropTableCommentOp + from .operations.ops import DropTableOp + from .operations.ops import ExecuteSQLOp + from .operations.ops import MigrateOperation + from .runtime.migration import MigrationContext + from .util.sqla_compat import _literal_bindparam + +_T = TypeVar("_T") +_C = TypeVar("_C", bound=Callable[..., Any]) + +### end imports ### + +def add_column( + table_name: str, + column: Column[Any], + *, + schema: Optional[str] = None, + if_not_exists: Optional[bool] = None, +) -> None: + """Issue an "add column" instruction using the current + migration context. + + e.g.:: + + from alembic import op + from sqlalchemy import Column, String + + op.add_column("organization", Column("name", String())) + + The :meth:`.Operations.add_column` method typically corresponds + to the SQL command "ALTER TABLE... ADD COLUMN". Within the scope + of this command, the column's name, datatype, nullability, + and optional server-generated defaults may be indicated. + + .. 
note:: + + With the exception of NOT NULL constraints or single-column FOREIGN + KEY constraints, other kinds of constraints such as PRIMARY KEY, + UNIQUE or CHECK constraints **cannot** be generated using this + method; for these constraints, refer to operations such as + :meth:`.Operations.create_primary_key` and + :meth:`.Operations.create_check_constraint`. In particular, the + following :class:`~sqlalchemy.schema.Column` parameters are + **ignored**: + + * :paramref:`~sqlalchemy.schema.Column.primary_key` - SQL databases + typically do not support an ALTER operation that can add + individual columns one at a time to an existing primary key + constraint, therefore it's less ambiguous to use the + :meth:`.Operations.create_primary_key` method, which assumes no + existing primary key constraint is present. + * :paramref:`~sqlalchemy.schema.Column.unique` - use the + :meth:`.Operations.create_unique_constraint` method + * :paramref:`~sqlalchemy.schema.Column.index` - use the + :meth:`.Operations.create_index` method + + + The provided :class:`~sqlalchemy.schema.Column` object may include a + :class:`~sqlalchemy.schema.ForeignKey` constraint directive, + referencing a remote table name. For this specific type of constraint, + Alembic will automatically emit a second ALTER statement in order to + add the single-column FOREIGN KEY constraint separately:: + + from alembic import op + from sqlalchemy import Column, INTEGER, ForeignKey + + op.add_column( + "organization", + Column("account_id", INTEGER, ForeignKey("accounts.id")), + ) + + The column argument passed to :meth:`.Operations.add_column` is a + :class:`~sqlalchemy.schema.Column` construct, used in the same way it's + used in SQLAlchemy. In particular, values or functions to be indicated + as producing the column's default value on the database side are + specified using the ``server_default`` parameter, and not ``default`` + which only specifies Python-side defaults:: + + from alembic import op + from sqlalchemy import Column, TIMESTAMP, func + + # specify "DEFAULT NOW" along with the column add + op.add_column( + "account", + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + :param table_name: String name of the parent table. + :param column: a :class:`sqlalchemy.schema.Column` object + representing the new column. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_not_exists: If True, adds IF NOT EXISTS operator + when creating the new column for compatible dialects + + .. versionadded:: 1.16.0 + + """ + +def alter_column( + table_name: str, + column_name: str, + *, + nullable: Optional[bool] = None, + comment: Union[str, Literal[False], None] = False, + server_default: Union[ + str, bool, Identity, Computed, TextClause, None + ] = False, + new_column_name: Optional[str] = None, + type_: Union[TypeEngine[Any], Type[TypeEngine[Any]], None] = None, + existing_type: Union[TypeEngine[Any], Type[TypeEngine[Any]], None] = None, + existing_server_default: Union[ + str, bool, Identity, Computed, TextClause, None + ] = False, + existing_nullable: Optional[bool] = None, + existing_comment: Optional[str] = None, + schema: Optional[str] = None, + **kw: Any, +) -> None: + r"""Issue an "alter column" instruction using the + current migration context. + + Generally, only that aspect of the column which + is being changed, i.e. 
name, type, nullability, + default, needs to be specified. Multiple changes + can also be specified at once and the backend should + "do the right thing", emitting each change either + separately or together as the backend allows. + + MySQL has special requirements here, since MySQL + cannot ALTER a column without a full specification. + When producing MySQL-compatible migration files, + it is recommended that the ``existing_type``, + ``existing_server_default``, and ``existing_nullable`` + parameters be present, if not being altered. + + Type changes which are against the SQLAlchemy + "schema" types :class:`~sqlalchemy.types.Boolean` + and :class:`~sqlalchemy.types.Enum` may also + add or drop constraints which accompany those + types on backends that don't support them natively. + The ``existing_type`` argument is + used in this case to identify and remove a previous + constraint that was bound to the type object. + + :param table_name: string name of the target table. + :param column_name: string name of the target column, + as it exists before the operation begins. + :param nullable: Optional; specify ``True`` or ``False`` + to alter the column's nullability. + :param server_default: Optional; specify a string + SQL expression, :func:`~sqlalchemy.sql.expression.text`, + or :class:`~sqlalchemy.schema.DefaultClause` to indicate + an alteration to the column's default value. + Set to ``None`` to have the default removed. + :param comment: optional string text of a new comment to add to the + column. + :param new_column_name: Optional; specify a string name here to + indicate the new name within a column rename operation. + :param type\_: Optional; a :class:`~sqlalchemy.types.TypeEngine` + type object to specify a change to the column's type. + For SQLAlchemy types that also indicate a constraint (i.e. + :class:`~sqlalchemy.types.Boolean`, :class:`~sqlalchemy.types.Enum`), + the constraint is also generated. + :param autoincrement: set the ``AUTO_INCREMENT`` flag of the column; + currently understood by the MySQL dialect. + :param existing_type: Optional; a + :class:`~sqlalchemy.types.TypeEngine` + type object to specify the previous type. This + is required for all MySQL column alter operations that + don't otherwise specify a new type, as well as for + when nullability is being changed on a SQL Server + column. It is also used if the type is a so-called + SQLAlchemy "schema" type which may define a constraint (i.e. + :class:`~sqlalchemy.types.Boolean`, + :class:`~sqlalchemy.types.Enum`), + so that the constraint can be dropped. + :param existing_server_default: Optional; The existing + default value of the column. Required on MySQL if + an existing default is not being changed; else MySQL + removes the default. + :param existing_nullable: Optional; the existing nullability + of the column. Required on MySQL if the existing nullability + is not being changed; else MySQL sets this to NULL. + :param existing_autoincrement: Optional; the existing autoincrement + of the column. Used for MySQL's system of altering a column + that specifies ``AUTO_INCREMENT``. + :param existing_comment: string text of the existing comment on the + column to be maintained. Required on MySQL if the existing comment + on the column is not being changed. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. 
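[Editor's aside — a minimal ``alter_column`` sketch, not part of the vendored file; the ``account`` table, ``amount`` column, and types are assumed. Supplying the ``existing_*`` values keeps the directive MySQL-compatible::

    from alembic import op
    import sqlalchemy as sa

    op.alter_column(
        "account",
        "amount",
        type_=sa.Numeric(10, 2),
        existing_type=sa.Integer(),
        existing_nullable=False,
    )

]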
+ :param postgresql_using: String argument which will indicate a + SQL expression to render within the Postgresql-specific USING clause + within ALTER COLUMN. This string is taken directly as raw SQL which + must explicitly include any necessary quoting or escaping of tokens + within the expression. + + """ + +@contextmanager +def batch_alter_table( + table_name: str, + schema: Optional[str] = None, + recreate: Literal["auto", "always", "never"] = "auto", + partial_reordering: Optional[Tuple[Any, ...]] = None, + copy_from: Optional[Table] = None, + table_args: Tuple[Any, ...] = (), + table_kwargs: Mapping[str, Any] = immutabledict({}), + reflect_args: Tuple[Any, ...] = (), + reflect_kwargs: Mapping[str, Any] = immutabledict({}), + naming_convention: Optional[Dict[str, str]] = None, +) -> Iterator[BatchOperations]: + """Invoke a series of per-table migrations in batch. + + Batch mode allows a series of operations specific to a table + to be syntactically grouped together, and allows for alternate + modes of table migration, in particular the "recreate" style of + migration required by SQLite. + + "recreate" style is as follows: + + 1. A new table is created with the new specification, based on the + migration directives within the batch, using a temporary name. + + 2. the data copied from the existing table to the new table. + + 3. the existing table is dropped. + + 4. the new table is renamed to the existing table name. + + The directive by default will only use "recreate" style on the + SQLite backend, and only if directives are present which require + this form, e.g. anything other than ``add_column()``. The batch + operation on other backends will proceed using standard ALTER TABLE + operations. + + The method is used as a context manager, which returns an instance + of :class:`.BatchOperations`; this object is the same as + :class:`.Operations` except that table names and schema names + are omitted. E.g.:: + + with op.batch_alter_table("some_table") as batch_op: + batch_op.add_column(Column("foo", Integer)) + batch_op.drop_column("bar") + + The operations within the context manager are invoked at once + when the context is ended. When run against SQLite, if the + migrations include operations not supported by SQLite's ALTER TABLE, + the entire table will be copied to a new one with the new + specification, moving all data across as well. + + The copy operation by default uses reflection to retrieve the current + structure of the table, and therefore :meth:`.batch_alter_table` + in this mode requires that the migration is run in "online" mode. + The ``copy_from`` parameter may be passed which refers to an existing + :class:`.Table` object, which will bypass this reflection step. + + .. note:: The table copy operation will currently not copy + CHECK constraints, and may not copy UNIQUE constraints that are + unnamed, as is possible on SQLite. See the section + :ref:`sqlite_batch_constraints` for workarounds. + + :param table_name: name of table + :param schema: optional schema name. + :param recreate: under what circumstances the table should be + recreated. At its default of ``"auto"``, the SQLite dialect will + recreate the table if any operations other than ``add_column()``, + ``create_index()``, or ``drop_index()`` are + present. Other options include ``"always"`` and ``"never"``. + :param copy_from: optional :class:`~sqlalchemy.schema.Table` object + that will act as the structure of the table being copied. 
If omitted, + table reflection is used to retrieve the structure of the table. + + .. seealso:: + + :ref:`batch_offline_mode` + + :paramref:`~.Operations.batch_alter_table.reflect_args` + + :paramref:`~.Operations.batch_alter_table.reflect_kwargs` + + :param reflect_args: a sequence of additional positional arguments that + will be applied to the table structure being reflected / copied; + this may be used to pass column and constraint overrides to the + table that will be reflected, in lieu of passing the whole + :class:`~sqlalchemy.schema.Table` using + :paramref:`~.Operations.batch_alter_table.copy_from`. + :param reflect_kwargs: a dictionary of additional keyword arguments + that will be applied to the table structure being copied; this may be + used to pass additional table and reflection options to the table that + will be reflected, in lieu of passing the whole + :class:`~sqlalchemy.schema.Table` using + :paramref:`~.Operations.batch_alter_table.copy_from`. + :param table_args: a sequence of additional positional arguments that + will be applied to the new :class:`~sqlalchemy.schema.Table` when + created, in addition to those copied from the source table. + This may be used to provide additional constraints such as CHECK + constraints that may not be reflected. + :param table_kwargs: a dictionary of additional keyword arguments + that will be applied to the new :class:`~sqlalchemy.schema.Table` + when created, in addition to those copied from the source table. + This may be used to provide for additional table options that may + not be reflected. + :param naming_convention: a naming convention dictionary of the form + described at :ref:`autogen_naming_conventions` which will be applied + to the :class:`~sqlalchemy.schema.MetaData` during the reflection + process. This is typically required if one wants to drop SQLite + constraints, as these constraints will not have names when + reflected on this backend. Requires SQLAlchemy **0.9.4** or greater. + + .. seealso:: + + :ref:`dropping_sqlite_foreign_keys` + + :param partial_reordering: a list of tuples, each suggesting a desired + ordering of two or more columns in the newly created table. Requires + that :paramref:`.batch_alter_table.recreate` is set to ``"always"``. + Examples, given a table with columns "a", "b", "c", and "d": + + Specify the order of all columns:: + + with op.batch_alter_table( + "some_table", + recreate="always", + partial_reordering=[("c", "d", "a", "b")], + ) as batch_op: + pass + + Ensure "d" appears before "c", and "b", appears before "a":: + + with op.batch_alter_table( + "some_table", + recreate="always", + partial_reordering=[("d", "c"), ("b", "a")], + ) as batch_op: + pass + + The ordering of columns not included in the partial_reordering + set is undefined. Therefore it is best to specify the complete + ordering of all columns for best results. + + .. note:: batch mode requires SQLAlchemy 0.8 or above. + + .. seealso:: + + :ref:`batch_migrations` + + """ + +def bulk_insert( + table: Union[Table, TableClause], + rows: List[Dict[str, Any]], + *, + multiinsert: bool = True, +) -> None: + """Issue a "bulk insert" operation using the current + migration context. + + This provides a means of representing an INSERT of multiple rows + which works equally well in the context of executing on a live + connection as well as that of generating a SQL script. In the + case of a SQL script, the values are rendered inline into the + statement. 
+ + e.g.:: + + from alembic import op + from datetime import date + from sqlalchemy.sql import table, column + from sqlalchemy import String, Integer, Date + + # Create an ad-hoc table to use for the insert statement. + accounts_table = table( + "account", + column("id", Integer), + column("name", String), + column("create_date", Date), + ) + + op.bulk_insert( + accounts_table, + [ + { + "id": 1, + "name": "John Smith", + "create_date": date(2010, 10, 5), + }, + { + "id": 2, + "name": "Ed Williams", + "create_date": date(2007, 5, 27), + }, + { + "id": 3, + "name": "Wendy Jones", + "create_date": date(2008, 8, 15), + }, + ], + ) + + When using --sql mode, some datatypes may not render inline + automatically, such as dates and other special types. When this + issue is present, :meth:`.Operations.inline_literal` may be used:: + + op.bulk_insert( + accounts_table, + [ + { + "id": 1, + "name": "John Smith", + "create_date": op.inline_literal("2010-10-05"), + }, + { + "id": 2, + "name": "Ed Williams", + "create_date": op.inline_literal("2007-05-27"), + }, + { + "id": 3, + "name": "Wendy Jones", + "create_date": op.inline_literal("2008-08-15"), + }, + ], + multiinsert=False, + ) + + When using :meth:`.Operations.inline_literal` in conjunction with + :meth:`.Operations.bulk_insert`, in order for the statement to work + in "online" (e.g. non --sql) mode, the + :paramref:`~.Operations.bulk_insert.multiinsert` + flag should be set to ``False``, which will have the effect of + individual INSERT statements being emitted to the database, each + with a distinct VALUES clause, so that the "inline" values can + still be rendered, rather than attempting to pass the values + as bound parameters. + + :param table: a table object which represents the target of the INSERT. + + :param rows: a list of dictionaries indicating rows. + + :param multiinsert: when at its default of True and --sql mode is not + enabled, the INSERT statement will be executed using + "executemany()" style, where all elements in the list of + dictionaries are passed as bound parameters in a single + list. Setting this to False results in individual INSERT + statements being emitted per parameter set, and is needed + in those cases where non-literal values are present in the + parameter sets. + + """ + +def create_check_constraint( + constraint_name: Optional[str], + table_name: str, + condition: Union[str, ColumnElement[bool], TextClause], + *, + schema: Optional[str] = None, + **kw: Any, +) -> None: + """Issue a "create check constraint" instruction using the + current migration context. + + e.g.:: + + from alembic import op + from sqlalchemy.sql import column, func + + op.create_check_constraint( + "ck_user_name_len", + "user", + func.len(column("name")) > 5, + ) + + CHECK constraints are usually against a SQL expression, so ad-hoc + table metadata is usually needed. The function will convert the given + arguments into a :class:`sqlalchemy.schema.CheckConstraint` bound + to an anonymous table in order to emit the CREATE statement. + + :param name: Name of the check constraint. The name is necessary + so that an ALTER statement can be emitted. For setups that + use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions`, + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param table_name: String name of the source table. + :param condition: SQL expression that's the condition of the + constraint. 
Can be a string or SQLAlchemy expression language + structure. + :param deferrable: optional bool. If set, emit DEFERRABLE or + NOT DEFERRABLE when issuing DDL for this constraint. + :param initially: optional string. If set, emit INITIALLY + when issuing DDL for this constraint. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ + +def create_exclude_constraint( + constraint_name: str, table_name: str, *elements: Any, **kw: Any +) -> Optional[Table]: + """Issue an alter to create an EXCLUDE constraint using the + current migration context. + + .. note:: This method is Postgresql specific, and additionally + requires at least SQLAlchemy 1.0. + + e.g.:: + + from alembic import op + + op.create_exclude_constraint( + "user_excl", + "user", + ("period", "&&"), + ("group", "="), + where=("group != 'some group'"), + ) + + Note that the expressions work the same way as that of + the ``ExcludeConstraint`` object itself; if plain strings are + passed, quoting rules must be applied manually. + + :param name: Name of the constraint. + :param table_name: String name of the source table. + :param elements: exclude conditions. + :param where: SQL expression or SQL string with optional WHERE + clause. + :param deferrable: optional bool. If set, emit DEFERRABLE or + NOT DEFERRABLE when issuing DDL for this constraint. + :param initially: optional string. If set, emit INITIALLY + when issuing DDL for this constraint. + :param schema: Optional schema name to operate within. + + """ + +def create_foreign_key( + constraint_name: Optional[str], + source_table: str, + referent_table: str, + local_cols: List[str], + remote_cols: List[str], + *, + onupdate: Optional[str] = None, + ondelete: Optional[str] = None, + deferrable: Optional[bool] = None, + initially: Optional[str] = None, + match: Optional[str] = None, + source_schema: Optional[str] = None, + referent_schema: Optional[str] = None, + **dialect_kw: Any, +) -> None: + """Issue a "create foreign key" instruction using the + current migration context. + + e.g.:: + + from alembic import op + + op.create_foreign_key( + "fk_user_address", + "address", + "user", + ["user_id"], + ["id"], + ) + + This internally generates a :class:`~sqlalchemy.schema.Table` object + containing the necessary columns, then generates a new + :class:`~sqlalchemy.schema.ForeignKeyConstraint` + object which it then associates with the + :class:`~sqlalchemy.schema.Table`. + Any event listeners associated with this action will be fired + off normally. The :class:`~sqlalchemy.schema.AddConstraint` + construct is ultimately used to generate the ALTER statement. + + :param constraint_name: Name of the foreign key constraint. The name + is necessary so that an ALTER statement can be emitted. For setups + that use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions`, + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param source_table: String name of the source table. + :param referent_table: String name of the destination table. + :param local_cols: a list of string column names in the + source table. + :param remote_cols: a list of string column names in the + remote table. + :param onupdate: Optional string. If set, emit ON UPDATE when + issuing DDL for this constraint. 
Typical values include CASCADE, + DELETE and RESTRICT. + :param ondelete: Optional string. If set, emit ON DELETE when + issuing DDL for this constraint. Typical values include CASCADE, + DELETE and RESTRICT. + :param deferrable: optional bool. If set, emit DEFERRABLE or NOT + DEFERRABLE when issuing DDL for this constraint. + :param source_schema: Optional schema name of the source table. + :param referent_schema: Optional schema name of the destination table. + + """ + +def create_index( + index_name: Optional[str], + table_name: str, + columns: Sequence[Union[str, TextClause, ColumnElement[Any]]], + *, + schema: Optional[str] = None, + unique: bool = False, + if_not_exists: Optional[bool] = None, + **kw: Any, +) -> None: + r"""Issue a "create index" instruction using the current + migration context. + + e.g.:: + + from alembic import op + + op.create_index("ik_test", "t1", ["foo", "bar"]) + + Functional indexes can be produced by using the + :func:`sqlalchemy.sql.expression.text` construct:: + + from alembic import op + from sqlalchemy import text + + op.create_index("ik_test", "t1", [text("lower(foo)")]) + + :param index_name: name of the index. + :param table_name: name of the owning table. + :param columns: a list consisting of string column names and/or + :func:`~sqlalchemy.sql.expression.text` constructs. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param unique: If True, create a unique index. + + :param quote: Force quoting of this column's name on or off, + corresponding to ``True`` or ``False``. When left at its default + of ``None``, the column identifier will be quoted according to + whether the name is case sensitive (identifiers with at least one + upper case character are treated as case sensitive), or if it's a + reserved word. This flag is only needed to force quoting of a + reserved word which is not known by the SQLAlchemy dialect. + + :param if_not_exists: If True, adds IF NOT EXISTS operator when + creating the new index. + + .. versionadded:: 1.12.0 + + :param \**kw: Additional keyword arguments not mentioned above are + dialect specific, and passed in the form + ``_``. + See the documentation regarding an individual dialect at + :ref:`dialect_toplevel` for detail on documented arguments. + + """ + +def create_primary_key( + constraint_name: Optional[str], + table_name: str, + columns: List[str], + *, + schema: Optional[str] = None, +) -> None: + """Issue a "create primary key" instruction using the current + migration context. + + e.g.:: + + from alembic import op + + op.create_primary_key("pk_my_table", "my_table", ["id", "version"]) + + This internally generates a :class:`~sqlalchemy.schema.Table` object + containing the necessary columns, then generates a new + :class:`~sqlalchemy.schema.PrimaryKeyConstraint` + object which it then associates with the + :class:`~sqlalchemy.schema.Table`. + Any event listeners associated with this action will be fired + off normally. The :class:`~sqlalchemy.schema.AddConstraint` + construct is ultimately used to generate the ALTER statement. + + :param constraint_name: Name of the primary key constraint. The name + is necessary so that an ALTER statement can be emitted. 
For setups + that use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions` + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param table_name: String name of the target table. + :param columns: a list of string column names to be applied to the + primary key constraint. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ + +def create_table( + table_name: str, + *columns: SchemaItem, + if_not_exists: Optional[bool] = None, + **kw: Any, +) -> Table: + r"""Issue a "create table" instruction using the current migration + context. + + This directive receives an argument list similar to that of the + traditional :class:`sqlalchemy.schema.Table` construct, but without the + metadata:: + + from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, Column + from alembic import op + + op.create_table( + "account", + Column("id", INTEGER, primary_key=True), + Column("name", VARCHAR(50), nullable=False), + Column("description", NVARCHAR(200)), + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + Note that :meth:`.create_table` accepts + :class:`~sqlalchemy.schema.Column` + constructs directly from the SQLAlchemy library. In particular, + default values to be created on the database side are + specified using the ``server_default`` parameter, and not + ``default`` which only specifies Python-side defaults:: + + from alembic import op + from sqlalchemy import Column, TIMESTAMP, func + + # specify "DEFAULT NOW" along with the "timestamp" column + op.create_table( + "account", + Column("id", INTEGER, primary_key=True), + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + The function also returns a newly created + :class:`~sqlalchemy.schema.Table` object, corresponding to the table + specification given, which is suitable for + immediate SQL operations, in particular + :meth:`.Operations.bulk_insert`:: + + from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, Column + from alembic import op + + account_table = op.create_table( + "account", + Column("id", INTEGER, primary_key=True), + Column("name", VARCHAR(50), nullable=False), + Column("description", NVARCHAR(200)), + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + op.bulk_insert( + account_table, + [ + {"name": "A1", "description": "account 1"}, + {"name": "A2", "description": "account 2"}, + ], + ) + + :param table_name: Name of the table + :param \*columns: collection of :class:`~sqlalchemy.schema.Column` + objects within + the table, as well as optional :class:`~sqlalchemy.schema.Constraint` + objects + and :class:`~.sqlalchemy.schema.Index` objects. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_not_exists: If True, adds IF NOT EXISTS operator when + creating the new table. + + .. versionadded:: 1.13.3 + :param \**kw: Other keyword arguments are passed to the underlying + :class:`sqlalchemy.schema.Table` object created for the command. + + :return: the :class:`~sqlalchemy.schema.Table` object corresponding + to the parameters given. 
+ + """ + +def create_table_comment( + table_name: str, + comment: Optional[str], + *, + existing_comment: Optional[str] = None, + schema: Optional[str] = None, +) -> None: + """Emit a COMMENT ON operation to set the comment for a table. + + :param table_name: string name of the target table. + :param comment: string value of the comment being registered against + the specified table. + :param existing_comment: String value of a comment + already registered on the specified table, used within autogenerate + so that the operation is reversible, but not required for direct + use. + + .. seealso:: + + :meth:`.Operations.drop_table_comment` + + :paramref:`.Operations.alter_column.comment` + + """ + +def create_unique_constraint( + constraint_name: Optional[str], + table_name: str, + columns: Sequence[str], + *, + schema: Optional[str] = None, + **kw: Any, +) -> Any: + """Issue a "create unique constraint" instruction using the + current migration context. + + e.g.:: + + from alembic import op + op.create_unique_constraint("uq_user_name", "user", ["name"]) + + This internally generates a :class:`~sqlalchemy.schema.Table` object + containing the necessary columns, then generates a new + :class:`~sqlalchemy.schema.UniqueConstraint` + object which it then associates with the + :class:`~sqlalchemy.schema.Table`. + Any event listeners associated with this action will be fired + off normally. The :class:`~sqlalchemy.schema.AddConstraint` + construct is ultimately used to generate the ALTER statement. + + :param name: Name of the unique constraint. The name is necessary + so that an ALTER statement can be emitted. For setups that + use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions`, + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param table_name: String name of the source table. + :param columns: a list of string column names in the + source table. + :param deferrable: optional bool. If set, emit DEFERRABLE or + NOT DEFERRABLE when issuing DDL for this constraint. + :param initially: optional string. If set, emit INITIALLY + when issuing DDL for this constraint. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ + +def drop_column( + table_name: str, + column_name: str, + *, + schema: Optional[str] = None, + **kw: Any, +) -> None: + """Issue a "drop column" instruction using the current + migration context. + + e.g.:: + + drop_column("organization", "account_id") + + :param table_name: name of table + :param column_name: name of column + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_exists: If True, adds IF EXISTS operator when + dropping the new column for compatible dialects + + .. versionadded:: 1.16.0 + + :param mssql_drop_check: Optional boolean. When ``True``, on + Microsoft SQL Server only, first + drop the CHECK constraint on the column using a + SQL-script-compatible + block that selects into a @variable from sys.check_constraints, + then exec's a separate DROP CONSTRAINT for that constraint. + :param mssql_drop_default: Optional boolean. 
When ``True``, on + Microsoft SQL Server only, first + drop the DEFAULT constraint on the column using a + SQL-script-compatible + block that selects into a @variable from sys.default_constraints, + then exec's a separate DROP CONSTRAINT for that default. + :param mssql_drop_foreign_key: Optional boolean. When ``True``, on + Microsoft SQL Server only, first + drop a single FOREIGN KEY constraint on the column using a + SQL-script-compatible + block that selects into a @variable from + sys.foreign_keys/sys.foreign_key_columns, + then exec's a separate DROP CONSTRAINT for that default. Only + works if the column has exactly one FK constraint which refers to + it, at the moment. + """ + +def drop_constraint( + constraint_name: str, + table_name: str, + type_: Optional[str] = None, + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, +) -> None: + r"""Drop a constraint of the given name, typically via DROP CONSTRAINT. + + :param constraint_name: name of the constraint. + :param table_name: table name. + :param type\_: optional, required on MySQL. can be + 'foreignkey', 'primary', 'unique', or 'check'. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_exists: If True, adds IF EXISTS operator when + dropping the constraint + + .. versionadded:: 1.16.0 + + """ + +def drop_index( + index_name: str, + table_name: Optional[str] = None, + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, + **kw: Any, +) -> None: + r"""Issue a "drop index" instruction using the current + migration context. + + e.g.:: + + drop_index("accounts") + + :param index_name: name of the index. + :param table_name: name of the owning table. Some + backends such as Microsoft SQL Server require this. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + :param if_exists: If True, adds IF EXISTS operator when + dropping the index. + + .. versionadded:: 1.12.0 + + :param \**kw: Additional keyword arguments not mentioned above are + dialect specific, and passed in the form + ``_``. + See the documentation regarding an individual dialect at + :ref:`dialect_toplevel` for detail on documented arguments. + + """ + +def drop_table( + table_name: str, + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, + **kw: Any, +) -> None: + r"""Issue a "drop table" instruction using the current + migration context. + + + e.g.:: + + drop_table("accounts") + + :param table_name: Name of the table + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_exists: If True, adds IF EXISTS operator when + dropping the table. + + .. versionadded:: 1.13.3 + :param \**kw: Other keyword arguments are passed to the underlying + :class:`sqlalchemy.schema.Table` object created for the command. + + """ + +def drop_table_comment( + table_name: str, + *, + existing_comment: Optional[str] = None, + schema: Optional[str] = None, +) -> None: + """Issue a "drop table comment" operation to + remove an existing comment set on a table. + + :param table_name: string name of the target table. 
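[Editor's aside — a short sketch pairing the two comment operations so the migration stays reversible; the ``account`` table and comment text are assumed, not taken from this file::

    from alembic import op

    def upgrade():
        op.create_table_comment("account", "Customer account records")

    def downgrade():
        op.drop_table_comment(
            "account", existing_comment="Customer account records"
        )

]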
+ :param existing_comment: An optional string value of a comment already + registered on the specified table. + + .. seealso:: + + :meth:`.Operations.create_table_comment` + + :paramref:`.Operations.alter_column.comment` + + """ + +def execute( + sqltext: Union[Executable, str], + *, + execution_options: Optional[dict[str, Any]] = None, +) -> None: + r"""Execute the given SQL using the current migration context. + + The given SQL can be a plain string, e.g.:: + + op.execute("INSERT INTO table (foo) VALUES ('some value')") + + Or it can be any kind of Core SQL Expression construct, such as + below where we use an update construct:: + + from sqlalchemy.sql import table, column + from sqlalchemy import String + from alembic import op + + account = table("account", column("name", String)) + op.execute( + account.update() + .where(account.c.name == op.inline_literal("account 1")) + .values({"name": op.inline_literal("account 2")}) + ) + + Above, we made use of the SQLAlchemy + :func:`sqlalchemy.sql.expression.table` and + :func:`sqlalchemy.sql.expression.column` constructs to make a brief, + ad-hoc table construct just for our UPDATE statement. A full + :class:`~sqlalchemy.schema.Table` construct of course works perfectly + fine as well, though note it's a recommended practice to at least + ensure the definition of a table is self-contained within the migration + script, rather than imported from a module that may break compatibility + with older migrations. + + In a SQL script context, the statement is emitted directly to the + output stream. There is *no* return result, however, as this + function is oriented towards generating a change script + that can run in "offline" mode. Additionally, parameterized + statements are discouraged here, as they *will not work* in offline + mode. Above, we use :meth:`.inline_literal` where parameters are + to be used. + + For full interaction with a connected database where parameters can + also be used normally, use the "bind" available from the context:: + + from alembic import op + + connection = op.get_bind() + + connection.execute( + account.update() + .where(account.c.name == "account 1") + .values({"name": "account 2"}) + ) + + Additionally, when passing the statement as a plain string, it is first + coerced into a :func:`sqlalchemy.sql.expression.text` construct + before being passed along. In the less likely case that the + literal SQL string contains a colon, it must be escaped with a + backslash, as:: + + op.execute(r"INSERT INTO table (foo) VALUES ('\:colon_value')") + + + :param sqltext: Any legal SQLAlchemy expression, including: + + * a string + * a :func:`sqlalchemy.sql.expression.text` construct. + * a :func:`sqlalchemy.sql.expression.insert` construct. + * a :func:`sqlalchemy.sql.expression.update` construct. + * a :func:`sqlalchemy.sql.expression.delete` construct. + * Any "executable" described in SQLAlchemy Core documentation, + noting that no result set is returned. + + .. note:: when passing a plain string, the statement is coerced into + a :func:`sqlalchemy.sql.expression.text` construct. This construct + considers symbols with colons, e.g. ``:foo`` to be bound parameters. + To avoid this, ensure that colon symbols are escaped, e.g. + ``\:foo``. + + :param execution_options: Optional dictionary of + execution options, will be passed to + :meth:`sqlalchemy.engine.Connection.execution_options`. + """ + +def f(name: str) -> conv: + """Indicate a string name that has already had a naming convention + applied to it. 
+ + This feature combines with the SQLAlchemy ``naming_convention`` feature + to disambiguate constraint names that have already had naming + conventions applied to them, versus those that have not. This is + necessary in the case that the ``"%(constraint_name)s"`` token + is used within a naming convention, so that it can be identified + that this particular name should remain fixed. + + If the :meth:`.Operations.f` is used on a constraint, the naming + convention will not take effect:: + + op.add_column("t", "x", Boolean(name=op.f("ck_bool_t_x"))) + + Above, the CHECK constraint generated will have the name + ``ck_bool_t_x`` regardless of whether or not a naming convention is + in use. + + Alternatively, if a naming convention is in use, and 'f' is not used, + names will be converted along conventions. If the ``target_metadata`` + contains the naming convention + ``{"ck": "ck_bool_%(table_name)s_%(constraint_name)s"}``, then the + output of the following:: + + op.add_column("t", "x", Boolean(name="x")) + + will be:: + + CONSTRAINT ck_bool_t_x CHECK (x in (1, 0))) + + The function is rendered in the output of autogenerate when + a particular constraint name is already converted. + + """ + +def get_bind() -> Connection: + """Return the current 'bind'. + + Under normal circumstances, this is the + :class:`~sqlalchemy.engine.Connection` currently being used + to emit SQL to the database. + + In a SQL script context, this value is ``None``. [TODO: verify this] + + """ + +def get_context() -> MigrationContext: + """Return the :class:`.MigrationContext` object that's + currently in use. + + """ + +def implementation_for(op_cls: Any) -> Callable[[_C], _C]: + """Register an implementation for a given :class:`.MigrateOperation`. + + This is part of the operation extensibility API. + + .. seealso:: + + :ref:`operation_plugins` - example of use + + """ + +def inline_literal( + value: Union[str, int], type_: Optional[TypeEngine[Any]] = None +) -> _literal_bindparam: + r"""Produce an 'inline literal' expression, suitable for + using in an INSERT, UPDATE, or DELETE statement. + + When using Alembic in "offline" mode, CRUD operations + aren't compatible with SQLAlchemy's default behavior surrounding + literal values, + which is that they are converted into bound values and passed + separately into the ``execute()`` method of the DBAPI cursor. + An offline SQL + script needs to have these rendered inline. While it should + always be noted that inline literal values are an **enormous** + security hole in an application that handles untrusted input, + a schema migration is not run in this context, so + literals are safe to render inline, with the caveat that + advanced types like dates may not be supported directly + by SQLAlchemy. + + See :meth:`.Operations.execute` for an example usage of + :meth:`.Operations.inline_literal`. + + The environment can also be configured to attempt to render + "literal" values inline automatically, for those simple types + that are supported by the dialect; see + :paramref:`.EnvironmentContext.configure.literal_binds` for this + more recently added feature. + + :param value: The value to render. Strings, integers, and simple + numerics should be supported. Other types like boolean, + dates, etc. may or may not be supported yet by various + backends. + :param type\_: optional - a :class:`sqlalchemy.types.TypeEngine` + subclass stating the type of this value. 
In SQLAlchemy + expressions, this is usually derived automatically + from the Python type of the value itself, as well as + based on the context in which the value is used. + + .. seealso:: + + :paramref:`.EnvironmentContext.configure.literal_binds` + + """ + +@overload +def invoke(operation: CreateTableOp) -> Table: ... +@overload +def invoke( + operation: Union[ + AddConstraintOp, + DropConstraintOp, + CreateIndexOp, + DropIndexOp, + AddColumnOp, + AlterColumnOp, + AlterTableOp, + CreateTableCommentOp, + DropTableCommentOp, + DropColumnOp, + BulkInsertOp, + DropTableOp, + ExecuteSQLOp, + ], +) -> None: ... +@overload +def invoke(operation: MigrateOperation) -> Any: + """Given a :class:`.MigrateOperation`, invoke it in terms of + this :class:`.Operations` instance. + + """ + +def register_operation( + name: str, sourcename: Optional[str] = None +) -> Callable[[Type[_T]], Type[_T]]: + """Register a new operation for this class. + + This method is normally used to add new operations + to the :class:`.Operations` class, and possibly the + :class:`.BatchOperations` class as well. All Alembic migration + operations are implemented via this system, however the system + is also available as a public API to facilitate adding custom + operations. + + .. seealso:: + + :ref:`operation_plugins` + + + """ + +def rename_table( + old_table_name: str, new_table_name: str, *, schema: Optional[str] = None +) -> None: + """Emit an ALTER TABLE to rename a table. + + :param old_table_name: old name. + :param new_table_name: new name. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ + +def run_async( + async_function: Callable[..., Awaitable[_T]], *args: Any, **kw_args: Any +) -> _T: + """Invoke the given asynchronous callable, passing an asynchronous + :class:`~sqlalchemy.ext.asyncio.AsyncConnection` as the first + argument. + + This method allows calling async functions from within the + synchronous ``upgrade()`` or ``downgrade()`` alembic migration + method. + + The async connection passed to the callable shares the same + transaction as the connection running in the migration context. + + Any additional arg or kw_arg passed to this function are passed + to the provided async function. + + .. versionadded: 1.11 + + .. note:: + + This method can be called only when alembic is called using + an async dialect. + """ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/__init__.py new file mode 100644 index 000000000..26197cbe8 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/__init__.py @@ -0,0 +1,15 @@ +from . 
import toimpl +from .base import AbstractOperations +from .base import BatchOperations +from .base import Operations +from .ops import MigrateOperation +from .ops import MigrationScript + + +__all__ = [ + "AbstractOperations", + "Operations", + "BatchOperations", + "MigrateOperation", + "MigrationScript", +] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/base.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/base.py new file mode 100644 index 000000000..26c327242 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/base.py @@ -0,0 +1,1923 @@ +# mypy: allow-untyped-calls + +from __future__ import annotations + +from contextlib import contextmanager +import re +import textwrap +from typing import Any +from typing import Awaitable +from typing import Callable +from typing import Dict +from typing import Iterator +from typing import List # noqa +from typing import Mapping +from typing import NoReturn +from typing import Optional +from typing import overload +from typing import Sequence # noqa +from typing import Tuple +from typing import Type # noqa +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +from sqlalchemy.sql.elements import conv + +from . import batch +from . import schemaobj +from .. import util +from ..util import sqla_compat +from ..util.compat import formatannotation_fwdref +from ..util.compat import inspect_formatargspec +from ..util.compat import inspect_getfullargspec +from ..util.sqla_compat import _literal_bindparam + + +if TYPE_CHECKING: + from typing import Literal + + from sqlalchemy import Table + from sqlalchemy.engine import Connection + from sqlalchemy.sql import Executable + from sqlalchemy.sql.expression import ColumnElement + from sqlalchemy.sql.expression import TableClause + from sqlalchemy.sql.expression import TextClause + from sqlalchemy.sql.schema import Column + from sqlalchemy.sql.schema import Computed + from sqlalchemy.sql.schema import Identity + from sqlalchemy.sql.schema import SchemaItem + from sqlalchemy.types import TypeEngine + + from .batch import BatchOperationsImpl + from .ops import AddColumnOp + from .ops import AddConstraintOp + from .ops import AlterColumnOp + from .ops import AlterTableOp + from .ops import BulkInsertOp + from .ops import CreateIndexOp + from .ops import CreateTableCommentOp + from .ops import CreateTableOp + from .ops import DropColumnOp + from .ops import DropConstraintOp + from .ops import DropIndexOp + from .ops import DropTableCommentOp + from .ops import DropTableOp + from .ops import ExecuteSQLOp + from .ops import MigrateOperation + from ..ddl import DefaultImpl + from ..runtime.migration import MigrationContext +__all__ = ("Operations", "BatchOperations") +_T = TypeVar("_T") + +_C = TypeVar("_C", bound=Callable[..., Any]) + + +class AbstractOperations(util.ModuleClsProxy): + """Base class for Operations and BatchOperations. + + .. versionadded:: 1.11.0 + + """ + + impl: Union[DefaultImpl, BatchOperationsImpl] + _to_impl = util.Dispatcher() + + def __init__( + self, + migration_context: MigrationContext, + impl: Optional[BatchOperationsImpl] = None, + ) -> None: + """Construct a new :class:`.Operations` + + :param migration_context: a :class:`.MigrationContext` + instance. 
+ + """ + self.migration_context = migration_context + if impl is None: + self.impl = migration_context.impl + else: + self.impl = impl + + self.schema_obj = schemaobj.SchemaObjects(migration_context) + + @classmethod + def register_operation( + cls, name: str, sourcename: Optional[str] = None + ) -> Callable[[Type[_T]], Type[_T]]: + """Register a new operation for this class. + + This method is normally used to add new operations + to the :class:`.Operations` class, and possibly the + :class:`.BatchOperations` class as well. All Alembic migration + operations are implemented via this system, however the system + is also available as a public API to facilitate adding custom + operations. + + .. seealso:: + + :ref:`operation_plugins` + + + """ + + def register(op_cls: Type[_T]) -> Type[_T]: + if sourcename is None: + fn = getattr(op_cls, name) + source_name = fn.__name__ + else: + fn = getattr(op_cls, sourcename) + source_name = fn.__name__ + + spec = inspect_getfullargspec(fn) + + name_args = spec[0] + assert name_args[0:2] == ["cls", "operations"] + + name_args[0:2] = ["self"] + + args = inspect_formatargspec( + *spec, formatannotation=formatannotation_fwdref + ) + num_defaults = len(spec[3]) if spec[3] else 0 + + defaulted_vals: Tuple[Any, ...] + + if num_defaults: + defaulted_vals = tuple(name_args[0 - num_defaults :]) + else: + defaulted_vals = () + + defaulted_vals += tuple(spec[4]) + # here, we are using formatargspec in a different way in order + # to get a string that will re-apply incoming arguments to a new + # function call + + apply_kw = inspect_formatargspec( + name_args + spec[4], + spec[1], + spec[2], + defaulted_vals, + formatvalue=lambda x: "=" + x, + formatannotation=formatannotation_fwdref, + ) + + args = re.sub( + r'[_]?ForwardRef\(([\'"].+?[\'"])\)', + lambda m: m.group(1), + args, + ) + + func_text = textwrap.dedent( + """\ + def %(name)s%(args)s: + %(doc)r + return op_cls.%(source_name)s%(apply_kw)s + """ + % { + "name": name, + "source_name": source_name, + "args": args, + "apply_kw": apply_kw, + "doc": fn.__doc__, + } + ) + + globals_ = dict(globals()) + globals_.update({"op_cls": op_cls}) + lcl: Dict[str, Any] = {} + + exec(func_text, globals_, lcl) + setattr(cls, name, lcl[name]) + fn.__func__.__doc__ = ( + "This method is proxied on " + "the :class:`.%s` class, via the :meth:`.%s.%s` method." + % (cls.__name__, cls.__name__, name) + ) + if hasattr(fn, "_legacy_translations"): + lcl[name]._legacy_translations = fn._legacy_translations + return op_cls + + return register + + @classmethod + def implementation_for(cls, op_cls: Any) -> Callable[[_C], _C]: + """Register an implementation for a given :class:`.MigrateOperation`. + + This is part of the operation extensibility API. + + .. seealso:: + + :ref:`operation_plugins` - example of use + + """ + + def decorate(fn: _C) -> _C: + cls._to_impl.dispatch_for(op_cls)(fn) + return fn + + return decorate + + @classmethod + @contextmanager + def context( + cls, migration_context: MigrationContext + ) -> Iterator[Operations]: + op = Operations(migration_context) + op._install_proxy() + yield op + op._remove_proxy() + + @contextmanager + def batch_alter_table( + self, + table_name: str, + schema: Optional[str] = None, + recreate: Literal["auto", "always", "never"] = "auto", + partial_reordering: Optional[Tuple[Any, ...]] = None, + copy_from: Optional[Table] = None, + table_args: Tuple[Any, ...] = (), + table_kwargs: Mapping[str, Any] = util.immutabledict(), + reflect_args: Tuple[Any, ...] 
= (), + reflect_kwargs: Mapping[str, Any] = util.immutabledict(), + naming_convention: Optional[Dict[str, str]] = None, + ) -> Iterator[BatchOperations]: + """Invoke a series of per-table migrations in batch. + + Batch mode allows a series of operations specific to a table + to be syntactically grouped together, and allows for alternate + modes of table migration, in particular the "recreate" style of + migration required by SQLite. + + "recreate" style is as follows: + + 1. A new table is created with the new specification, based on the + migration directives within the batch, using a temporary name. + + 2. the data copied from the existing table to the new table. + + 3. the existing table is dropped. + + 4. the new table is renamed to the existing table name. + + The directive by default will only use "recreate" style on the + SQLite backend, and only if directives are present which require + this form, e.g. anything other than ``add_column()``. The batch + operation on other backends will proceed using standard ALTER TABLE + operations. + + The method is used as a context manager, which returns an instance + of :class:`.BatchOperations`; this object is the same as + :class:`.Operations` except that table names and schema names + are omitted. E.g.:: + + with op.batch_alter_table("some_table") as batch_op: + batch_op.add_column(Column("foo", Integer)) + batch_op.drop_column("bar") + + The operations within the context manager are invoked at once + when the context is ended. When run against SQLite, if the + migrations include operations not supported by SQLite's ALTER TABLE, + the entire table will be copied to a new one with the new + specification, moving all data across as well. + + The copy operation by default uses reflection to retrieve the current + structure of the table, and therefore :meth:`.batch_alter_table` + in this mode requires that the migration is run in "online" mode. + The ``copy_from`` parameter may be passed which refers to an existing + :class:`.Table` object, which will bypass this reflection step. + + .. note:: The table copy operation will currently not copy + CHECK constraints, and may not copy UNIQUE constraints that are + unnamed, as is possible on SQLite. See the section + :ref:`sqlite_batch_constraints` for workarounds. + + :param table_name: name of table + :param schema: optional schema name. + :param recreate: under what circumstances the table should be + recreated. At its default of ``"auto"``, the SQLite dialect will + recreate the table if any operations other than ``add_column()``, + ``create_index()``, or ``drop_index()`` are + present. Other options include ``"always"`` and ``"never"``. + :param copy_from: optional :class:`~sqlalchemy.schema.Table` object + that will act as the structure of the table being copied. If omitted, + table reflection is used to retrieve the structure of the table. + + .. seealso:: + + :ref:`batch_offline_mode` + + :paramref:`~.Operations.batch_alter_table.reflect_args` + + :paramref:`~.Operations.batch_alter_table.reflect_kwargs` + + :param reflect_args: a sequence of additional positional arguments that + will be applied to the table structure being reflected / copied; + this may be used to pass column and constraint overrides to the + table that will be reflected, in lieu of passing the whole + :class:`~sqlalchemy.schema.Table` using + :paramref:`~.Operations.batch_alter_table.copy_from`. 
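[Editor's aside — an illustrative use of ``reflect_args`` (a sketch; the ``user`` table and ``flag`` column are assumed): a reflection override can strip the CHECK constraint from a reflected ``Boolean`` so the column can be altered in batch mode::

    import sqlalchemy as sa
    from alembic import op

    with op.batch_alter_table(
        "user",
        reflect_args=[sa.Column("flag", sa.Boolean(create_constraint=False))],
    ) as batch_op:
        batch_op.alter_column(
            "flag", new_column_name="is_active", existing_type=sa.Boolean()
        )

]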
+ :param reflect_kwargs: a dictionary of additional keyword arguments + that will be applied to the table structure being copied; this may be + used to pass additional table and reflection options to the table that + will be reflected, in lieu of passing the whole + :class:`~sqlalchemy.schema.Table` using + :paramref:`~.Operations.batch_alter_table.copy_from`. + :param table_args: a sequence of additional positional arguments that + will be applied to the new :class:`~sqlalchemy.schema.Table` when + created, in addition to those copied from the source table. + This may be used to provide additional constraints such as CHECK + constraints that may not be reflected. + :param table_kwargs: a dictionary of additional keyword arguments + that will be applied to the new :class:`~sqlalchemy.schema.Table` + when created, in addition to those copied from the source table. + This may be used to provide for additional table options that may + not be reflected. + :param naming_convention: a naming convention dictionary of the form + described at :ref:`autogen_naming_conventions` which will be applied + to the :class:`~sqlalchemy.schema.MetaData` during the reflection + process. This is typically required if one wants to drop SQLite + constraints, as these constraints will not have names when + reflected on this backend. Requires SQLAlchemy **0.9.4** or greater. + + .. seealso:: + + :ref:`dropping_sqlite_foreign_keys` + + :param partial_reordering: a list of tuples, each suggesting a desired + ordering of two or more columns in the newly created table. Requires + that :paramref:`.batch_alter_table.recreate` is set to ``"always"``. + Examples, given a table with columns "a", "b", "c", and "d": + + Specify the order of all columns:: + + with op.batch_alter_table( + "some_table", + recreate="always", + partial_reordering=[("c", "d", "a", "b")], + ) as batch_op: + pass + + Ensure "d" appears before "c", and "b", appears before "a":: + + with op.batch_alter_table( + "some_table", + recreate="always", + partial_reordering=[("d", "c"), ("b", "a")], + ) as batch_op: + pass + + The ordering of columns not included in the partial_reordering + set is undefined. Therefore it is best to specify the complete + ordering of all columns for best results. + + .. note:: batch mode requires SQLAlchemy 0.8 or above. + + .. seealso:: + + :ref:`batch_migrations` + + """ + impl = batch.BatchOperationsImpl( + self, + table_name, + schema, + recreate, + copy_from, + table_args, + table_kwargs, + reflect_args, + reflect_kwargs, + naming_convention, + partial_reordering, + ) + batch_op = BatchOperations(self.migration_context, impl=impl) + yield batch_op + impl.flush() + + def get_context(self) -> MigrationContext: + """Return the :class:`.MigrationContext` object that's + currently in use. + + """ + + return self.migration_context + + @overload + def invoke(self, operation: CreateTableOp) -> Table: ... + + @overload + def invoke( + self, + operation: Union[ + AddConstraintOp, + DropConstraintOp, + CreateIndexOp, + DropIndexOp, + AddColumnOp, + AlterColumnOp, + AlterTableOp, + CreateTableCommentOp, + DropTableCommentOp, + DropColumnOp, + BulkInsertOp, + DropTableOp, + ExecuteSQLOp, + ], + ) -> None: ... + + @overload + def invoke(self, operation: MigrateOperation) -> Any: ... + + def invoke(self, operation: MigrateOperation) -> Any: + """Given a :class:`.MigrateOperation`, invoke it in terms of + this :class:`.Operations` instance. 
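[Editor's aside — a rough sketch of direct invocation; the operation object and index/table names are assumed, and in ordinary migrations the proxied methods are used instead::

    from alembic.operations import Operations, ops

    def invoke_drop(op: Operations) -> None:
        # equivalent to op.drop_index("ix_user_email", table_name="user")
        op.invoke(ops.DropIndexOp("ix_user_email", table_name="user"))

]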
+
+        """
+        fn = self._to_impl.dispatch(
+            operation, self.migration_context.impl.__dialect__
+        )
+        return fn(self, operation)
+
+    def f(self, name: str) -> conv:
+        """Indicate a string name that has already had a naming convention
+        applied to it.
+
+        This feature combines with the SQLAlchemy ``naming_convention`` feature
+        to disambiguate constraint names that have already had naming
+        conventions applied to them, versus those that have not. This is
+        necessary in the case that the ``"%(constraint_name)s"`` token
+        is used within a naming convention, so that it can be identified
+        that this particular name should remain fixed.
+
+        If :meth:`.Operations.f` is used on a constraint, the naming
+        convention will not take effect::
+
+            op.add_column("t", Column("x", Boolean(name=op.f("ck_bool_t_x"))))
+
+        Above, the CHECK constraint generated will have the name
+        ``ck_bool_t_x`` regardless of whether or not a naming convention is
+        in use.
+
+        Alternatively, if a naming convention is in use and ``op.f()`` is not
+        used, names will be converted according to the convention. If the
+        ``target_metadata`` contains the naming convention
+        ``{"ck": "ck_bool_%(table_name)s_%(constraint_name)s"}``, then the
+        output of the following::
+
+            op.add_column("t", Column("x", Boolean(name="x")))
+
+        will be::
+
+            CONSTRAINT ck_bool_t_x CHECK (x in (1, 0))
+
+        The function is rendered in the output of autogenerate when
+        a particular constraint name is already converted.
+
+        """
+        return conv(name)
+
+    def inline_literal(
+        self, value: Union[str, int], type_: Optional[TypeEngine[Any]] = None
+    ) -> _literal_bindparam:
+        r"""Produce an 'inline literal' expression, suitable for
+        using in an INSERT, UPDATE, or DELETE statement.
+
+        When using Alembic in "offline" mode, CRUD operations
+        aren't compatible with SQLAlchemy's default behavior surrounding
+        literal values,
+        which is that they are converted into bound values and passed
+        separately into the ``execute()`` method of the DBAPI cursor.
+        An offline SQL
+        script needs to have these rendered inline. While it should
+        always be noted that inline literal values are an **enormous**
+        security hole in an application that handles untrusted input,
+        a schema migration is not run in this context, so
+        literals are safe to render inline, with the caveat that
+        advanced types like dates may not be supported directly
+        by SQLAlchemy.
+
+        See :meth:`.Operations.execute` for an example usage of
+        :meth:`.Operations.inline_literal`.
+
+        The environment can also be configured to attempt to render
+        "literal" values inline automatically, for those simple types
+        that are supported by the dialect; see
+        :paramref:`.EnvironmentContext.configure.literal_binds` for this
+        more recently added feature.
+
+        :param value: The value to render. Strings, integers, and simple
+         numerics should be supported. Other types like boolean,
+         dates, etc. may or may not be supported yet by various
+         backends.
+        :param type\_: optional - a :class:`sqlalchemy.types.TypeEngine`
+         subclass stating the type of this value. In SQLAlchemy
+         expressions, this is usually derived automatically
+         from the Python type of the value itself, as well as
+         based on the context in which the value is used.
+
+        .. seealso::
+
+            :paramref:`.EnvironmentContext.configure.literal_binds`
+
+        """
+        return sqla_compat._literal_bindparam(None, value, type_=type_)
+
+    def get_bind(self) -> Connection:
+        """Return the current 'bind'.
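+
+        E.g., a short sketch of running ad-hoc SQL over the migration
+        connection (the ``account`` table is hypothetical, for illustration
+        only)::
+
+            from sqlalchemy import text
+
+            connection = op.get_bind()
+            connection.execute(text("UPDATE account SET active = 1"))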
+
+        Under normal circumstances, this is the
+        :class:`~sqlalchemy.engine.Connection` currently being used
+        to emit SQL to the database.
+
+        In an offline ("--sql") script context, this value is ``None``.
+
+        """
+        return self.migration_context.impl.bind  # type: ignore[return-value]
+
+    def run_async(
+        self,
+        async_function: Callable[..., Awaitable[_T]],
+        *args: Any,
+        **kw_args: Any,
+    ) -> _T:
+        """Invoke the given asynchronous callable, passing an asynchronous
+        :class:`~sqlalchemy.ext.asyncio.AsyncConnection` as the first
+        argument.
+
+        This method allows calling async functions from within the
+        synchronous ``upgrade()`` or ``downgrade()`` alembic migration
+        method.
+
+        The async connection passed to the callable shares the same
+        transaction as the connection running in the migration context.
+
+        Any additional args or kwargs passed to this function are passed
+        through to the provided async function.
+
+        .. versionadded:: 1.11
+
+        .. note::
+
+            This method can be called only when alembic is invoked using
+            an async dialect.
+        """
+        if not sqla_compat.sqla_14_18:
+            raise NotImplementedError("SQLAlchemy 1.4.18+ required")
+        sync_conn = self.get_bind()
+        if sync_conn is None:
+            raise NotImplementedError("Cannot call run_async in SQL mode")
+        if not sync_conn.dialect.is_async:
+            raise ValueError("Cannot call run_async with a sync engine")
+        from sqlalchemy.ext.asyncio import AsyncConnection
+        from sqlalchemy.util import await_only
+
+        async_conn = AsyncConnection._retrieve_proxy_for_target(sync_conn)
+        return await_only(async_function(async_conn, *args, **kw_args))
+
+
+class Operations(AbstractOperations):
+    """Define high level migration operations.
+
+    Each operation corresponds to some schema migration operation,
+    executed against a particular :class:`.MigrationContext`
+    which in turn represents connectivity to a database,
+    or a file output stream.
+
+    While :class:`.Operations` is normally configured as
+    part of the :meth:`.EnvironmentContext.run_migrations`
+    method called from an ``env.py`` script, a standalone
+    :class:`.Operations` instance can be
+    made for use cases external to regular Alembic
+    migrations by passing in a :class:`.MigrationContext`::
+
+        from alembic.migration import MigrationContext
+        from alembic.operations import Operations
+
+        conn = myengine.connect()
+        ctx = MigrationContext.configure(conn)
+        op = Operations(ctx)
+
+        op.alter_column("t", "c", nullable=True)
+
+    Note that as of 0.8, most of the methods on this class are produced
+    dynamically using the :meth:`.Operations.register_operation`
+    method.
+
+    """
+
+    if TYPE_CHECKING:
+        # START STUB FUNCTIONS: op_cls
+        # ### the following stubs are generated by tools/write_pyi.py ###
+        # ### do not edit ###
+
+        def add_column(
+            self,
+            table_name: str,
+            column: Column[Any],
+            *,
+            schema: Optional[str] = None,
+            if_not_exists: Optional[bool] = None,
+        ) -> None:
+            """Issue an "add column" instruction using the current
+            migration context.
+
+            e.g.::
+
+                from alembic import op
+                from sqlalchemy import Column, String
+
+                op.add_column("organization", Column("name", String()))
+
+            The :meth:`.Operations.add_column` method typically corresponds
+            to the SQL command "ALTER TABLE... ADD COLUMN". Within the scope
+            of this command, the column's name, datatype, nullability,
+            and optional server-generated defaults may be indicated.
+
+            .. 
note:: + + With the exception of NOT NULL constraints or single-column FOREIGN + KEY constraints, other kinds of constraints such as PRIMARY KEY, + UNIQUE or CHECK constraints **cannot** be generated using this + method; for these constraints, refer to operations such as + :meth:`.Operations.create_primary_key` and + :meth:`.Operations.create_check_constraint`. In particular, the + following :class:`~sqlalchemy.schema.Column` parameters are + **ignored**: + + * :paramref:`~sqlalchemy.schema.Column.primary_key` - SQL databases + typically do not support an ALTER operation that can add + individual columns one at a time to an existing primary key + constraint, therefore it's less ambiguous to use the + :meth:`.Operations.create_primary_key` method, which assumes no + existing primary key constraint is present. + * :paramref:`~sqlalchemy.schema.Column.unique` - use the + :meth:`.Operations.create_unique_constraint` method + * :paramref:`~sqlalchemy.schema.Column.index` - use the + :meth:`.Operations.create_index` method + + + The provided :class:`~sqlalchemy.schema.Column` object may include a + :class:`~sqlalchemy.schema.ForeignKey` constraint directive, + referencing a remote table name. For this specific type of constraint, + Alembic will automatically emit a second ALTER statement in order to + add the single-column FOREIGN KEY constraint separately:: + + from alembic import op + from sqlalchemy import Column, INTEGER, ForeignKey + + op.add_column( + "organization", + Column("account_id", INTEGER, ForeignKey("accounts.id")), + ) + + The column argument passed to :meth:`.Operations.add_column` is a + :class:`~sqlalchemy.schema.Column` construct, used in the same way it's + used in SQLAlchemy. In particular, values or functions to be indicated + as producing the column's default value on the database side are + specified using the ``server_default`` parameter, and not ``default`` + which only specifies Python-side defaults:: + + from alembic import op + from sqlalchemy import Column, TIMESTAMP, func + + # specify "DEFAULT NOW" along with the column add + op.add_column( + "account", + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + :param table_name: String name of the parent table. + :param column: a :class:`sqlalchemy.schema.Column` object + representing the new column. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_not_exists: If True, adds IF NOT EXISTS operator + when creating the new column for compatible dialects + + .. versionadded:: 1.16.0 + + """ # noqa: E501 + ... + + def alter_column( + self, + table_name: str, + column_name: str, + *, + nullable: Optional[bool] = None, + comment: Union[str, Literal[False], None] = False, + server_default: Union[ + str, bool, Identity, Computed, TextClause, None + ] = False, + new_column_name: Optional[str] = None, + type_: Union[TypeEngine[Any], Type[TypeEngine[Any]], None] = None, + existing_type: Union[ + TypeEngine[Any], Type[TypeEngine[Any]], None + ] = None, + existing_server_default: Union[ + str, bool, Identity, Computed, TextClause, None + ] = False, + existing_nullable: Optional[bool] = None, + existing_comment: Optional[str] = None, + schema: Optional[str] = None, + **kw: Any, + ) -> None: + r"""Issue an "alter column" instruction using the + current migration context. + + Generally, only that aspect of the column which + is being changed, i.e. 
name, type, nullability, + default, needs to be specified. Multiple changes + can also be specified at once and the backend should + "do the right thing", emitting each change either + separately or together as the backend allows. + + MySQL has special requirements here, since MySQL + cannot ALTER a column without a full specification. + When producing MySQL-compatible migration files, + it is recommended that the ``existing_type``, + ``existing_server_default``, and ``existing_nullable`` + parameters be present, if not being altered. + + Type changes which are against the SQLAlchemy + "schema" types :class:`~sqlalchemy.types.Boolean` + and :class:`~sqlalchemy.types.Enum` may also + add or drop constraints which accompany those + types on backends that don't support them natively. + The ``existing_type`` argument is + used in this case to identify and remove a previous + constraint that was bound to the type object. + + :param table_name: string name of the target table. + :param column_name: string name of the target column, + as it exists before the operation begins. + :param nullable: Optional; specify ``True`` or ``False`` + to alter the column's nullability. + :param server_default: Optional; specify a string + SQL expression, :func:`~sqlalchemy.sql.expression.text`, + or :class:`~sqlalchemy.schema.DefaultClause` to indicate + an alteration to the column's default value. + Set to ``None`` to have the default removed. + :param comment: optional string text of a new comment to add to the + column. + :param new_column_name: Optional; specify a string name here to + indicate the new name within a column rename operation. + :param type\_: Optional; a :class:`~sqlalchemy.types.TypeEngine` + type object to specify a change to the column's type. + For SQLAlchemy types that also indicate a constraint (i.e. + :class:`~sqlalchemy.types.Boolean`, :class:`~sqlalchemy.types.Enum`), + the constraint is also generated. + :param autoincrement: set the ``AUTO_INCREMENT`` flag of the column; + currently understood by the MySQL dialect. + :param existing_type: Optional; a + :class:`~sqlalchemy.types.TypeEngine` + type object to specify the previous type. This + is required for all MySQL column alter operations that + don't otherwise specify a new type, as well as for + when nullability is being changed on a SQL Server + column. It is also used if the type is a so-called + SQLAlchemy "schema" type which may define a constraint (i.e. + :class:`~sqlalchemy.types.Boolean`, + :class:`~sqlalchemy.types.Enum`), + so that the constraint can be dropped. + :param existing_server_default: Optional; The existing + default value of the column. Required on MySQL if + an existing default is not being changed; else MySQL + removes the default. + :param existing_nullable: Optional; the existing nullability + of the column. Required on MySQL if the existing nullability + is not being changed; else MySQL sets this to NULL. + :param existing_autoincrement: Optional; the existing autoincrement + of the column. Used for MySQL's system of altering a column + that specifies ``AUTO_INCREMENT``. + :param existing_comment: string text of the existing comment on the + column to be maintained. Required on MySQL if the existing comment + on the column is not being changed. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. 
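+
+         E.g., a short sketch combining ``schema`` with the ``existing_*``
+         parameters recommended for MySQL (the ``tenant`` schema and
+         ``account`` table are hypothetical, for illustration only)::
+
+             from sqlalchemy import String
+
+             op.alter_column(
+                 "account",
+                 "name",
+                 existing_type=String(50),
+                 nullable=False,
+                 schema="tenant",
+             )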
+ :param postgresql_using: String argument which will indicate a + SQL expression to render within the Postgresql-specific USING clause + within ALTER COLUMN. This string is taken directly as raw SQL which + must explicitly include any necessary quoting or escaping of tokens + within the expression. + + """ # noqa: E501 + ... + + def bulk_insert( + self, + table: Union[Table, TableClause], + rows: List[Dict[str, Any]], + *, + multiinsert: bool = True, + ) -> None: + """Issue a "bulk insert" operation using the current + migration context. + + This provides a means of representing an INSERT of multiple rows + which works equally well in the context of executing on a live + connection as well as that of generating a SQL script. In the + case of a SQL script, the values are rendered inline into the + statement. + + e.g.:: + + from alembic import op + from datetime import date + from sqlalchemy.sql import table, column + from sqlalchemy import String, Integer, Date + + # Create an ad-hoc table to use for the insert statement. + accounts_table = table( + "account", + column("id", Integer), + column("name", String), + column("create_date", Date), + ) + + op.bulk_insert( + accounts_table, + [ + { + "id": 1, + "name": "John Smith", + "create_date": date(2010, 10, 5), + }, + { + "id": 2, + "name": "Ed Williams", + "create_date": date(2007, 5, 27), + }, + { + "id": 3, + "name": "Wendy Jones", + "create_date": date(2008, 8, 15), + }, + ], + ) + + When using --sql mode, some datatypes may not render inline + automatically, such as dates and other special types. When this + issue is present, :meth:`.Operations.inline_literal` may be used:: + + op.bulk_insert( + accounts_table, + [ + { + "id": 1, + "name": "John Smith", + "create_date": op.inline_literal("2010-10-05"), + }, + { + "id": 2, + "name": "Ed Williams", + "create_date": op.inline_literal("2007-05-27"), + }, + { + "id": 3, + "name": "Wendy Jones", + "create_date": op.inline_literal("2008-08-15"), + }, + ], + multiinsert=False, + ) + + When using :meth:`.Operations.inline_literal` in conjunction with + :meth:`.Operations.bulk_insert`, in order for the statement to work + in "online" (e.g. non --sql) mode, the + :paramref:`~.Operations.bulk_insert.multiinsert` + flag should be set to ``False``, which will have the effect of + individual INSERT statements being emitted to the database, each + with a distinct VALUES clause, so that the "inline" values can + still be rendered, rather than attempting to pass the values + as bound parameters. + + :param table: a table object which represents the target of the INSERT. + + :param rows: a list of dictionaries indicating rows. + + :param multiinsert: when at its default of True and --sql mode is not + enabled, the INSERT statement will be executed using + "executemany()" style, where all elements in the list of + dictionaries are passed as bound parameters in a single + list. Setting this to False results in individual INSERT + statements being emitted per parameter set, and is needed + in those cases where non-literal values are present in the + parameter sets. + + """ # noqa: E501 + ... + + def create_check_constraint( + self, + constraint_name: Optional[str], + table_name: str, + condition: Union[str, ColumnElement[bool], TextClause], + *, + schema: Optional[str] = None, + **kw: Any, + ) -> None: + """Issue a "create check constraint" instruction using the + current migration context. 
+ + e.g.:: + + from alembic import op + from sqlalchemy.sql import column, func + + op.create_check_constraint( + "ck_user_name_len", + "user", + func.len(column("name")) > 5, + ) + + CHECK constraints are usually against a SQL expression, so ad-hoc + table metadata is usually needed. The function will convert the given + arguments into a :class:`sqlalchemy.schema.CheckConstraint` bound + to an anonymous table in order to emit the CREATE statement. + + :param name: Name of the check constraint. The name is necessary + so that an ALTER statement can be emitted. For setups that + use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions`, + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param table_name: String name of the source table. + :param condition: SQL expression that's the condition of the + constraint. Can be a string or SQLAlchemy expression language + structure. + :param deferrable: optional bool. If set, emit DEFERRABLE or + NOT DEFERRABLE when issuing DDL for this constraint. + :param initially: optional string. If set, emit INITIALLY + when issuing DDL for this constraint. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ # noqa: E501 + ... + + def create_exclude_constraint( + self, + constraint_name: str, + table_name: str, + *elements: Any, + **kw: Any, + ) -> Optional[Table]: + """Issue an alter to create an EXCLUDE constraint using the + current migration context. + + .. note:: This method is Postgresql specific, and additionally + requires at least SQLAlchemy 1.0. + + e.g.:: + + from alembic import op + + op.create_exclude_constraint( + "user_excl", + "user", + ("period", "&&"), + ("group", "="), + where=("group != 'some group'"), + ) + + Note that the expressions work the same way as that of + the ``ExcludeConstraint`` object itself; if plain strings are + passed, quoting rules must be applied manually. + + :param name: Name of the constraint. + :param table_name: String name of the source table. + :param elements: exclude conditions. + :param where: SQL expression or SQL string with optional WHERE + clause. + :param deferrable: optional bool. If set, emit DEFERRABLE or + NOT DEFERRABLE when issuing DDL for this constraint. + :param initially: optional string. If set, emit INITIALLY + when issuing DDL for this constraint. + :param schema: Optional schema name to operate within. + + """ # noqa: E501 + ... + + def create_foreign_key( + self, + constraint_name: Optional[str], + source_table: str, + referent_table: str, + local_cols: List[str], + remote_cols: List[str], + *, + onupdate: Optional[str] = None, + ondelete: Optional[str] = None, + deferrable: Optional[bool] = None, + initially: Optional[str] = None, + match: Optional[str] = None, + source_schema: Optional[str] = None, + referent_schema: Optional[str] = None, + **dialect_kw: Any, + ) -> None: + """Issue a "create foreign key" instruction using the + current migration context. 
+
+            e.g.::
+
+                from alembic import op
+
+                op.create_foreign_key(
+                    "fk_user_address",
+                    "address",
+                    "user",
+                    ["user_id"],
+                    ["id"],
+                )
+
+            This internally generates a :class:`~sqlalchemy.schema.Table` object
+            containing the necessary columns, then generates a new
+            :class:`~sqlalchemy.schema.ForeignKeyConstraint`
+            object which it then associates with the
+            :class:`~sqlalchemy.schema.Table`.
+            Any event listeners associated with this action will be fired
+            off normally. The :class:`~sqlalchemy.schema.AddConstraint`
+            construct is ultimately used to generate the ALTER statement.
+
+            :param constraint_name: Name of the foreign key constraint. The name
+             is necessary so that an ALTER statement can be emitted. For setups
+             that use an automated naming scheme such as that described at
+             :ref:`sqla:constraint_naming_conventions`,
+             ``name`` here can be ``None``, as the event listener will
+             apply the name to the constraint object when it is associated
+             with the table.
+            :param source_table: String name of the source table.
+            :param referent_table: String name of the destination table.
+            :param local_cols: a list of string column names in the
+             source table.
+            :param remote_cols: a list of string column names in the
+             remote table.
+            :param onupdate: Optional string. If set, emit ON UPDATE when
+             issuing DDL for this constraint. Typical values include CASCADE,
+             SET NULL and RESTRICT.
+            :param ondelete: Optional string. If set, emit ON DELETE when
+             issuing DDL for this constraint. Typical values include CASCADE,
+             SET NULL and RESTRICT.
+            :param deferrable: optional bool. If set, emit DEFERRABLE or NOT
+             DEFERRABLE when issuing DDL for this constraint.
+            :param source_schema: Optional schema name of the source table.
+            :param referent_schema: Optional schema name of the destination table.
+
+            """  # noqa: E501
+            ...
+
+        def create_index(
+            self,
+            index_name: Optional[str],
+            table_name: str,
+            columns: Sequence[Union[str, TextClause, ColumnElement[Any]]],
+            *,
+            schema: Optional[str] = None,
+            unique: bool = False,
+            if_not_exists: Optional[bool] = None,
+            **kw: Any,
+        ) -> None:
+            r"""Issue a "create index" instruction using the current
+            migration context.
+
+            e.g.::
+
+                from alembic import op
+
+                op.create_index("ik_test", "t1", ["foo", "bar"])
+
+            Functional indexes can be produced by using the
+            :func:`sqlalchemy.sql.expression.text` construct::
+
+                from alembic import op
+                from sqlalchemy import text
+
+                op.create_index("ik_test", "t1", [text("lower(foo)")])
+
+            :param index_name: name of the index.
+            :param table_name: name of the owning table.
+            :param columns: a list consisting of string column names and/or
+             :func:`~sqlalchemy.sql.expression.text` constructs.
+            :param schema: Optional schema name to operate within. To control
+             quoting of the schema outside of the default behavior, use
+             the SQLAlchemy construct
+             :class:`~sqlalchemy.sql.elements.quoted_name`.
+            :param unique: If True, create a unique index.
+
+            :param quote: Force quoting of this column's name on or off,
+             corresponding to ``True`` or ``False``. When left at its default
+             of ``None``, the column identifier will be quoted according to
+             whether the name is case sensitive (identifiers with at least one
+             upper case character are treated as case sensitive), or if it's a
+             reserved word. This flag is only needed to force quoting of a
+             reserved word which is not known by the SQLAlchemy dialect.
+
+            :param if_not_exists: If True, adds IF NOT EXISTS operator when
+             creating the new index.
+
+            .. 
versionadded:: 1.12.0 + + :param \**kw: Additional keyword arguments not mentioned above are + dialect specific, and passed in the form + ``_``. + See the documentation regarding an individual dialect at + :ref:`dialect_toplevel` for detail on documented arguments. + + """ # noqa: E501 + ... + + def create_primary_key( + self, + constraint_name: Optional[str], + table_name: str, + columns: List[str], + *, + schema: Optional[str] = None, + ) -> None: + """Issue a "create primary key" instruction using the current + migration context. + + e.g.:: + + from alembic import op + + op.create_primary_key("pk_my_table", "my_table", ["id", "version"]) + + This internally generates a :class:`~sqlalchemy.schema.Table` object + containing the necessary columns, then generates a new + :class:`~sqlalchemy.schema.PrimaryKeyConstraint` + object which it then associates with the + :class:`~sqlalchemy.schema.Table`. + Any event listeners associated with this action will be fired + off normally. The :class:`~sqlalchemy.schema.AddConstraint` + construct is ultimately used to generate the ALTER statement. + + :param constraint_name: Name of the primary key constraint. The name + is necessary so that an ALTER statement can be emitted. For setups + that use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions` + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param table_name: String name of the target table. + :param columns: a list of string column names to be applied to the + primary key constraint. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ # noqa: E501 + ... + + def create_table( + self, + table_name: str, + *columns: SchemaItem, + if_not_exists: Optional[bool] = None, + **kw: Any, + ) -> Table: + r"""Issue a "create table" instruction using the current migration + context. + + This directive receives an argument list similar to that of the + traditional :class:`sqlalchemy.schema.Table` construct, but without the + metadata:: + + from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, Column + from alembic import op + + op.create_table( + "account", + Column("id", INTEGER, primary_key=True), + Column("name", VARCHAR(50), nullable=False), + Column("description", NVARCHAR(200)), + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + Note that :meth:`.create_table` accepts + :class:`~sqlalchemy.schema.Column` + constructs directly from the SQLAlchemy library. 
In particular, + default values to be created on the database side are + specified using the ``server_default`` parameter, and not + ``default`` which only specifies Python-side defaults:: + + from alembic import op + from sqlalchemy import Column, TIMESTAMP, func + + # specify "DEFAULT NOW" along with the "timestamp" column + op.create_table( + "account", + Column("id", INTEGER, primary_key=True), + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + The function also returns a newly created + :class:`~sqlalchemy.schema.Table` object, corresponding to the table + specification given, which is suitable for + immediate SQL operations, in particular + :meth:`.Operations.bulk_insert`:: + + from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, Column + from alembic import op + + account_table = op.create_table( + "account", + Column("id", INTEGER, primary_key=True), + Column("name", VARCHAR(50), nullable=False), + Column("description", NVARCHAR(200)), + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + op.bulk_insert( + account_table, + [ + {"name": "A1", "description": "account 1"}, + {"name": "A2", "description": "account 2"}, + ], + ) + + :param table_name: Name of the table + :param \*columns: collection of :class:`~sqlalchemy.schema.Column` + objects within + the table, as well as optional :class:`~sqlalchemy.schema.Constraint` + objects + and :class:`~.sqlalchemy.schema.Index` objects. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_not_exists: If True, adds IF NOT EXISTS operator when + creating the new table. + + .. versionadded:: 1.13.3 + :param \**kw: Other keyword arguments are passed to the underlying + :class:`sqlalchemy.schema.Table` object created for the command. + + :return: the :class:`~sqlalchemy.schema.Table` object corresponding + to the parameters given. + + """ # noqa: E501 + ... + + def create_table_comment( + self, + table_name: str, + comment: Optional[str], + *, + existing_comment: Optional[str] = None, + schema: Optional[str] = None, + ) -> None: + """Emit a COMMENT ON operation to set the comment for a table. + + :param table_name: string name of the target table. + :param comment: string value of the comment being registered against + the specified table. + :param existing_comment: String value of a comment + already registered on the specified table, used within autogenerate + so that the operation is reversible, but not required for direct + use. + + .. seealso:: + + :meth:`.Operations.drop_table_comment` + + :paramref:`.Operations.alter_column.comment` + + """ # noqa: E501 + ... + + def create_unique_constraint( + self, + constraint_name: Optional[str], + table_name: str, + columns: Sequence[str], + *, + schema: Optional[str] = None, + **kw: Any, + ) -> Any: + """Issue a "create unique constraint" instruction using the + current migration context. + + e.g.:: + + from alembic import op + op.create_unique_constraint("uq_user_name", "user", ["name"]) + + This internally generates a :class:`~sqlalchemy.schema.Table` object + containing the necessary columns, then generates a new + :class:`~sqlalchemy.schema.UniqueConstraint` + object which it then associates with the + :class:`~sqlalchemy.schema.Table`. + Any event listeners associated with this action will be fired + off normally. 
The :class:`~sqlalchemy.schema.AddConstraint` + construct is ultimately used to generate the ALTER statement. + + :param name: Name of the unique constraint. The name is necessary + so that an ALTER statement can be emitted. For setups that + use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions`, + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param table_name: String name of the source table. + :param columns: a list of string column names in the + source table. + :param deferrable: optional bool. If set, emit DEFERRABLE or + NOT DEFERRABLE when issuing DDL for this constraint. + :param initially: optional string. If set, emit INITIALLY + when issuing DDL for this constraint. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ # noqa: E501 + ... + + def drop_column( + self, + table_name: str, + column_name: str, + *, + schema: Optional[str] = None, + **kw: Any, + ) -> None: + """Issue a "drop column" instruction using the current + migration context. + + e.g.:: + + drop_column("organization", "account_id") + + :param table_name: name of table + :param column_name: name of column + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_exists: If True, adds IF EXISTS operator when + dropping the new column for compatible dialects + + .. versionadded:: 1.16.0 + + :param mssql_drop_check: Optional boolean. When ``True``, on + Microsoft SQL Server only, first + drop the CHECK constraint on the column using a + SQL-script-compatible + block that selects into a @variable from sys.check_constraints, + then exec's a separate DROP CONSTRAINT for that constraint. + :param mssql_drop_default: Optional boolean. When ``True``, on + Microsoft SQL Server only, first + drop the DEFAULT constraint on the column using a + SQL-script-compatible + block that selects into a @variable from sys.default_constraints, + then exec's a separate DROP CONSTRAINT for that default. + :param mssql_drop_foreign_key: Optional boolean. When ``True``, on + Microsoft SQL Server only, first + drop a single FOREIGN KEY constraint on the column using a + SQL-script-compatible + block that selects into a @variable from + sys.foreign_keys/sys.foreign_key_columns, + then exec's a separate DROP CONSTRAINT for that default. Only + works if the column has exactly one FK constraint which refers to + it, at the moment. + """ # noqa: E501 + ... + + def drop_constraint( + self, + constraint_name: str, + table_name: str, + type_: Optional[str] = None, + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, + ) -> None: + r"""Drop a constraint of the given name, typically via DROP CONSTRAINT. + + :param constraint_name: name of the constraint. + :param table_name: table name. + :param type\_: optional, required on MySQL. can be + 'foreignkey', 'primary', 'unique', or 'check'. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_exists: If True, adds IF EXISTS operator when + dropping the constraint + + .. 
versionadded:: 1.16.0 + + """ # noqa: E501 + ... + + def drop_index( + self, + index_name: str, + table_name: Optional[str] = None, + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, + **kw: Any, + ) -> None: + r"""Issue a "drop index" instruction using the current + migration context. + + e.g.:: + + drop_index("accounts") + + :param index_name: name of the index. + :param table_name: name of the owning table. Some + backends such as Microsoft SQL Server require this. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + :param if_exists: If True, adds IF EXISTS operator when + dropping the index. + + .. versionadded:: 1.12.0 + + :param \**kw: Additional keyword arguments not mentioned above are + dialect specific, and passed in the form + ``_``. + See the documentation regarding an individual dialect at + :ref:`dialect_toplevel` for detail on documented arguments. + + """ # noqa: E501 + ... + + def drop_table( + self, + table_name: str, + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, + **kw: Any, + ) -> None: + r"""Issue a "drop table" instruction using the current + migration context. + + + e.g.:: + + drop_table("accounts") + + :param table_name: Name of the table + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_exists: If True, adds IF EXISTS operator when + dropping the table. + + .. versionadded:: 1.13.3 + :param \**kw: Other keyword arguments are passed to the underlying + :class:`sqlalchemy.schema.Table` object created for the command. + + """ # noqa: E501 + ... + + def drop_table_comment( + self, + table_name: str, + *, + existing_comment: Optional[str] = None, + schema: Optional[str] = None, + ) -> None: + """Issue a "drop table comment" operation to + remove an existing comment set on a table. + + :param table_name: string name of the target table. + :param existing_comment: An optional string value of a comment already + registered on the specified table. + + .. seealso:: + + :meth:`.Operations.create_table_comment` + + :paramref:`.Operations.alter_column.comment` + + """ # noqa: E501 + ... + + def execute( + self, + sqltext: Union[Executable, str], + *, + execution_options: Optional[dict[str, Any]] = None, + ) -> None: + r"""Execute the given SQL using the current migration context. + + The given SQL can be a plain string, e.g.:: + + op.execute("INSERT INTO table (foo) VALUES ('some value')") + + Or it can be any kind of Core SQL Expression construct, such as + below where we use an update construct:: + + from sqlalchemy.sql import table, column + from sqlalchemy import String + from alembic import op + + account = table("account", column("name", String)) + op.execute( + account.update() + .where(account.c.name == op.inline_literal("account 1")) + .values({"name": op.inline_literal("account 2")}) + ) + + Above, we made use of the SQLAlchemy + :func:`sqlalchemy.sql.expression.table` and + :func:`sqlalchemy.sql.expression.column` constructs to make a brief, + ad-hoc table construct just for our UPDATE statement. 
A full + :class:`~sqlalchemy.schema.Table` construct of course works perfectly + fine as well, though note it's a recommended practice to at least + ensure the definition of a table is self-contained within the migration + script, rather than imported from a module that may break compatibility + with older migrations. + + In a SQL script context, the statement is emitted directly to the + output stream. There is *no* return result, however, as this + function is oriented towards generating a change script + that can run in "offline" mode. Additionally, parameterized + statements are discouraged here, as they *will not work* in offline + mode. Above, we use :meth:`.inline_literal` where parameters are + to be used. + + For full interaction with a connected database where parameters can + also be used normally, use the "bind" available from the context:: + + from alembic import op + + connection = op.get_bind() + + connection.execute( + account.update() + .where(account.c.name == "account 1") + .values({"name": "account 2"}) + ) + + Additionally, when passing the statement as a plain string, it is first + coerced into a :func:`sqlalchemy.sql.expression.text` construct + before being passed along. In the less likely case that the + literal SQL string contains a colon, it must be escaped with a + backslash, as:: + + op.execute(r"INSERT INTO table (foo) VALUES ('\:colon_value')") + + + :param sqltext: Any legal SQLAlchemy expression, including: + + * a string + * a :func:`sqlalchemy.sql.expression.text` construct. + * a :func:`sqlalchemy.sql.expression.insert` construct. + * a :func:`sqlalchemy.sql.expression.update` construct. + * a :func:`sqlalchemy.sql.expression.delete` construct. + * Any "executable" described in SQLAlchemy Core documentation, + noting that no result set is returned. + + .. note:: when passing a plain string, the statement is coerced into + a :func:`sqlalchemy.sql.expression.text` construct. This construct + considers symbols with colons, e.g. ``:foo`` to be bound parameters. + To avoid this, ensure that colon symbols are escaped, e.g. + ``\:foo``. + + :param execution_options: Optional dictionary of + execution options, will be passed to + :meth:`sqlalchemy.engine.Connection.execution_options`. + """ # noqa: E501 + ... + + def rename_table( + self, + old_table_name: str, + new_table_name: str, + *, + schema: Optional[str] = None, + ) -> None: + """Emit an ALTER TABLE to rename a table. + + :param old_table_name: old name. + :param new_table_name: new name. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ # noqa: E501 + ... + + # END STUB FUNCTIONS: op_cls + + +class BatchOperations(AbstractOperations): + """Modifies the interface :class:`.Operations` for batch mode. + + This basically omits the ``table_name`` and ``schema`` parameters + from associated methods, as these are a given when running under batch + mode. + + .. seealso:: + + :meth:`.Operations.batch_alter_table` + + Note that as of 0.8, most of the methods on this class are produced + dynamically using the :meth:`.Operations.register_operation` + method. + + """ + + impl: BatchOperationsImpl + + def _noop(self, operation: Any) -> NoReturn: + raise NotImplementedError( + "The %s method does not apply to a batch table alter operation." 
+ % operation + ) + + if TYPE_CHECKING: + # START STUB FUNCTIONS: batch_op + # ### the following stubs are generated by tools/write_pyi.py ### + # ### do not edit ### + + def add_column( + self, + column: Column[Any], + *, + insert_before: Optional[str] = None, + insert_after: Optional[str] = None, + if_not_exists: Optional[bool] = None, + ) -> None: + """Issue an "add column" instruction using the current + batch migration context. + + .. seealso:: + + :meth:`.Operations.add_column` + + """ # noqa: E501 + ... + + def alter_column( + self, + column_name: str, + *, + nullable: Optional[bool] = None, + comment: Union[str, Literal[False], None] = False, + server_default: Any = False, + new_column_name: Optional[str] = None, + type_: Union[TypeEngine[Any], Type[TypeEngine[Any]], None] = None, + existing_type: Union[ + TypeEngine[Any], Type[TypeEngine[Any]], None + ] = None, + existing_server_default: Union[ + str, bool, Identity, Computed, None + ] = False, + existing_nullable: Optional[bool] = None, + existing_comment: Optional[str] = None, + insert_before: Optional[str] = None, + insert_after: Optional[str] = None, + **kw: Any, + ) -> None: + """Issue an "alter column" instruction using the current + batch migration context. + + Parameters are the same as that of :meth:`.Operations.alter_column`, + as well as the following option(s): + + :param insert_before: String name of an existing column which this + column should be placed before, when creating the new table. + + :param insert_after: String name of an existing column which this + column should be placed after, when creating the new table. If + both :paramref:`.BatchOperations.alter_column.insert_before` + and :paramref:`.BatchOperations.alter_column.insert_after` are + omitted, the column is inserted after the last existing column + in the table. + + .. seealso:: + + :meth:`.Operations.alter_column` + + + """ # noqa: E501 + ... + + def create_check_constraint( + self, + constraint_name: str, + condition: Union[str, ColumnElement[bool], TextClause], + **kw: Any, + ) -> None: + """Issue a "create check constraint" instruction using the + current batch migration context. + + The batch form of this call omits the ``source`` and ``schema`` + arguments from the call. + + .. seealso:: + + :meth:`.Operations.create_check_constraint` + + """ # noqa: E501 + ... + + def create_exclude_constraint( + self, constraint_name: str, *elements: Any, **kw: Any + ) -> Optional[Table]: + """Issue a "create exclude constraint" instruction using the + current batch migration context. + + .. note:: This method is Postgresql specific, and additionally + requires at least SQLAlchemy 1.0. + + .. seealso:: + + :meth:`.Operations.create_exclude_constraint` + + """ # noqa: E501 + ... + + def create_foreign_key( + self, + constraint_name: Optional[str], + referent_table: str, + local_cols: List[str], + remote_cols: List[str], + *, + referent_schema: Optional[str] = None, + onupdate: Optional[str] = None, + ondelete: Optional[str] = None, + deferrable: Optional[bool] = None, + initially: Optional[str] = None, + match: Optional[str] = None, + **dialect_kw: Any, + ) -> None: + """Issue a "create foreign key" instruction using the + current batch migration context. + + The batch form of this call omits the ``source`` and ``source_schema`` + arguments from the call. + + e.g.:: + + with batch_alter_table("address") as batch_op: + batch_op.create_foreign_key( + "fk_user_address", + "user", + ["user_id"], + ["id"], + ) + + .. 
seealso:: + + :meth:`.Operations.create_foreign_key` + + """ # noqa: E501 + ... + + def create_index( + self, index_name: str, columns: List[str], **kw: Any + ) -> None: + """Issue a "create index" instruction using the + current batch migration context. + + .. seealso:: + + :meth:`.Operations.create_index` + + """ # noqa: E501 + ... + + def create_primary_key( + self, constraint_name: Optional[str], columns: List[str] + ) -> None: + """Issue a "create primary key" instruction using the + current batch migration context. + + The batch form of this call omits the ``table_name`` and ``schema`` + arguments from the call. + + .. seealso:: + + :meth:`.Operations.create_primary_key` + + """ # noqa: E501 + ... + + def create_table_comment( + self, + comment: Optional[str], + *, + existing_comment: Optional[str] = None, + ) -> None: + """Emit a COMMENT ON operation to set the comment for a table + using the current batch migration context. + + :param comment: string value of the comment being registered against + the specified table. + :param existing_comment: String value of a comment + already registered on the specified table, used within autogenerate + so that the operation is reversible, but not required for direct + use. + + """ # noqa: E501 + ... + + def create_unique_constraint( + self, constraint_name: str, columns: Sequence[str], **kw: Any + ) -> Any: + """Issue a "create unique constraint" instruction using the + current batch migration context. + + The batch form of this call omits the ``source`` and ``schema`` + arguments from the call. + + .. seealso:: + + :meth:`.Operations.create_unique_constraint` + + """ # noqa: E501 + ... + + def drop_column(self, column_name: str, **kw: Any) -> None: + """Issue a "drop column" instruction using the current + batch migration context. + + .. seealso:: + + :meth:`.Operations.drop_column` + + """ # noqa: E501 + ... + + def drop_constraint( + self, constraint_name: str, type_: Optional[str] = None + ) -> None: + """Issue a "drop constraint" instruction using the + current batch migration context. + + The batch form of this call omits the ``table_name`` and ``schema`` + arguments from the call. + + .. seealso:: + + :meth:`.Operations.drop_constraint` + + """ # noqa: E501 + ... + + def drop_index(self, index_name: str, **kw: Any) -> None: + """Issue a "drop index" instruction using the + current batch migration context. + + .. seealso:: + + :meth:`.Operations.drop_index` + + """ # noqa: E501 + ... + + def drop_table_comment( + self, *, existing_comment: Optional[str] = None + ) -> None: + """Issue a "drop table comment" operation to + remove an existing comment set on a table using the current + batch operations context. + + :param existing_comment: An optional string value of a comment already + registered on the specified table. + + """ # noqa: E501 + ... + + def execute( + self, + sqltext: Union[Executable, str], + *, + execution_options: Optional[dict[str, Any]] = None, + ) -> None: + """Execute the given SQL using the current migration context. + + .. seealso:: + + :meth:`.Operations.execute` + + """ # noqa: E501 + ... 
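+
+        # Taken together, the stubbed methods above are typically used in a
+        # migration's upgrade() roughly as follows (a sketch with
+        # hypothetical table and column names, for illustration only):
+        #
+        #     with op.batch_alter_table("account") as batch_op:
+        #         batch_op.add_column(Column("status", String(20)))
+        #         batch_op.alter_column(
+        #             "name", existing_type=String(50), nullable=False
+        #         )
+        #         batch_op.drop_column("legacy_flag")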
+ + # END STUB FUNCTIONS: batch_op diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/batch.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/batch.py new file mode 100644 index 000000000..fe183e9c8 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/batch.py @@ -0,0 +1,718 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +from typing import Any +from typing import Dict +from typing import List +from typing import Optional +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy import CheckConstraint +from sqlalchemy import Column +from sqlalchemy import ForeignKeyConstraint +from sqlalchemy import Index +from sqlalchemy import MetaData +from sqlalchemy import PrimaryKeyConstraint +from sqlalchemy import schema as sql_schema +from sqlalchemy import select +from sqlalchemy import Table +from sqlalchemy import types as sqltypes +from sqlalchemy.sql.schema import SchemaEventTarget +from sqlalchemy.util import OrderedDict +from sqlalchemy.util import topological + +from ..util import exc +from ..util.sqla_compat import _columns_for_constraint +from ..util.sqla_compat import _copy +from ..util.sqla_compat import _copy_expression +from ..util.sqla_compat import _ensure_scope_for_ddl +from ..util.sqla_compat import _fk_is_self_referential +from ..util.sqla_compat import _idx_table_bound_expressions +from ..util.sqla_compat import _is_type_bound +from ..util.sqla_compat import _remove_column_from_collection +from ..util.sqla_compat import _resolve_for_variant +from ..util.sqla_compat import constraint_name_defined +from ..util.sqla_compat import constraint_name_string + +if TYPE_CHECKING: + from typing import Literal + + from sqlalchemy.engine import Dialect + from sqlalchemy.sql.elements import ColumnClause + from sqlalchemy.sql.elements import quoted_name + from sqlalchemy.sql.functions import Function + from sqlalchemy.sql.schema import Constraint + from sqlalchemy.sql.type_api import TypeEngine + + from ..ddl.impl import DefaultImpl + + +class BatchOperationsImpl: + def __init__( + self, + operations, + table_name, + schema, + recreate, + copy_from, + table_args, + table_kwargs, + reflect_args, + reflect_kwargs, + naming_convention, + partial_reordering, + ): + self.operations = operations + self.table_name = table_name + self.schema = schema + if recreate not in ("auto", "always", "never"): + raise ValueError( + "recreate may be one of 'auto', 'always', or 'never'." 
+ ) + self.recreate = recreate + self.copy_from = copy_from + self.table_args = table_args + self.table_kwargs = dict(table_kwargs) + self.reflect_args = reflect_args + self.reflect_kwargs = dict(reflect_kwargs) + self.reflect_kwargs.setdefault( + "listeners", list(self.reflect_kwargs.get("listeners", ())) + ) + self.reflect_kwargs["listeners"].append( + ("column_reflect", operations.impl.autogen_column_reflect) + ) + self.naming_convention = naming_convention + self.partial_reordering = partial_reordering + self.batch = [] + + @property + def dialect(self) -> Dialect: + return self.operations.impl.dialect + + @property + def impl(self) -> DefaultImpl: + return self.operations.impl + + def _should_recreate(self) -> bool: + if self.recreate == "auto": + return self.operations.impl.requires_recreate_in_batch(self) + elif self.recreate == "always": + return True + else: + return False + + def flush(self) -> None: + should_recreate = self._should_recreate() + + with _ensure_scope_for_ddl(self.impl.connection): + if not should_recreate: + for opname, arg, kw in self.batch: + fn = getattr(self.operations.impl, opname) + fn(*arg, **kw) + else: + if self.naming_convention: + m1 = MetaData(naming_convention=self.naming_convention) + else: + m1 = MetaData() + + if self.copy_from is not None: + existing_table = self.copy_from + reflected = False + else: + if self.operations.migration_context.as_sql: + raise exc.CommandError( + f"This operation cannot proceed in --sql mode; " + f"batch mode with dialect " + f"{self.operations.migration_context.dialect.name} " # noqa: E501 + f"requires a live database connection with which " + f'to reflect the table "{self.table_name}". ' + f"To generate a batch SQL migration script using " + "table " + '"move and copy", a complete Table object ' + f'should be passed to the "copy_from" argument ' + "of the batch_alter_table() method so that table " + "reflection can be skipped." 
+ ) + + existing_table = Table( + self.table_name, + m1, + schema=self.schema, + autoload_with=self.operations.get_bind(), + *self.reflect_args, + **self.reflect_kwargs, + ) + reflected = True + + batch_impl = ApplyBatchImpl( + self.impl, + existing_table, + self.table_args, + self.table_kwargs, + reflected, + partial_reordering=self.partial_reordering, + ) + for opname, arg, kw in self.batch: + fn = getattr(batch_impl, opname) + fn(*arg, **kw) + + batch_impl._create(self.impl) + + def alter_column(self, *arg, **kw) -> None: + self.batch.append(("alter_column", arg, kw)) + + def add_column(self, *arg, **kw) -> None: + if ( + "insert_before" in kw or "insert_after" in kw + ) and not self._should_recreate(): + raise exc.CommandError( + "Can't specify insert_before or insert_after when using " + "ALTER; please specify recreate='always'" + ) + self.batch.append(("add_column", arg, kw)) + + def drop_column(self, *arg, **kw) -> None: + self.batch.append(("drop_column", arg, kw)) + + def add_constraint(self, const: Constraint) -> None: + self.batch.append(("add_constraint", (const,), {})) + + def drop_constraint(self, const: Constraint) -> None: + self.batch.append(("drop_constraint", (const,), {})) + + def rename_table(self, *arg, **kw): + self.batch.append(("rename_table", arg, kw)) + + def create_index(self, idx: Index, **kw: Any) -> None: + self.batch.append(("create_index", (idx,), kw)) + + def drop_index(self, idx: Index, **kw: Any) -> None: + self.batch.append(("drop_index", (idx,), kw)) + + def create_table_comment(self, table): + self.batch.append(("create_table_comment", (table,), {})) + + def drop_table_comment(self, table): + self.batch.append(("drop_table_comment", (table,), {})) + + def create_table(self, table): + raise NotImplementedError("Can't create table in batch mode") + + def drop_table(self, table): + raise NotImplementedError("Can't drop table in batch mode") + + def create_column_comment(self, column): + self.batch.append(("create_column_comment", (column,), {})) + + +class ApplyBatchImpl: + def __init__( + self, + impl: DefaultImpl, + table: Table, + table_args: tuple, + table_kwargs: Dict[str, Any], + reflected: bool, + partial_reordering: tuple = (), + ) -> None: + self.impl = impl + self.table = table # this is a Table object + self.table_args = table_args + self.table_kwargs = table_kwargs + self.temp_table_name = self._calc_temp_name(table.name) + self.new_table: Optional[Table] = None + + self.partial_reordering = partial_reordering # tuple of tuples + self.add_col_ordering: Tuple[ + Tuple[str, str], ... 
+ ] = () # tuple of tuples + + self.column_transfers = OrderedDict( + (c.name, {"expr": c}) for c in self.table.c + ) + self.existing_ordering = list(self.column_transfers) + + self.reflected = reflected + self._grab_table_elements() + + @classmethod + def _calc_temp_name(cls, tablename: Union[quoted_name, str]) -> str: + return ("_alembic_tmp_%s" % tablename)[0:50] + + def _grab_table_elements(self) -> None: + schema = self.table.schema + self.columns: Dict[str, Column[Any]] = OrderedDict() + for c in self.table.c: + c_copy = _copy(c, schema=schema) + c_copy.unique = c_copy.index = False + # ensure that the type object was copied, + # as we may need to modify it in-place + if isinstance(c.type, SchemaEventTarget): + assert c_copy.type is not c.type + self.columns[c.name] = c_copy + self.named_constraints: Dict[str, Constraint] = {} + self.unnamed_constraints = [] + self.col_named_constraints = {} + self.indexes: Dict[str, Index] = {} + self.new_indexes: Dict[str, Index] = {} + + for const in self.table.constraints: + if _is_type_bound(const): + continue + elif ( + self.reflected + and isinstance(const, CheckConstraint) + and not const.name + ): + # TODO: we are skipping unnamed reflected CheckConstraint + # because + # we have no way to determine _is_type_bound() for these. + pass + elif constraint_name_string(const.name): + self.named_constraints[const.name] = const + else: + self.unnamed_constraints.append(const) + + if not self.reflected: + for col in self.table.c: + for const in col.constraints: + if const.name: + self.col_named_constraints[const.name] = (col, const) + + for idx in self.table.indexes: + self.indexes[idx.name] = idx # type: ignore[index] + + for k in self.table.kwargs: + self.table_kwargs.setdefault(k, self.table.kwargs[k]) + + def _adjust_self_columns_for_partial_reordering(self) -> None: + pairs = set() + + col_by_idx = list(self.columns) + + if self.partial_reordering: + for tuple_ in self.partial_reordering: + for index, elem in enumerate(tuple_): + if index > 0: + pairs.add((tuple_[index - 1], elem)) + else: + for index, elem in enumerate(self.existing_ordering): + if index > 0: + pairs.add((col_by_idx[index - 1], elem)) + + pairs.update(self.add_col_ordering) + + # this can happen if some columns were dropped and not removed + # from existing_ordering. 
this should be prevented already, but + # conservatively making sure this didn't happen + pairs_list = [p for p in pairs if p[0] != p[1]] + + sorted_ = list( + topological.sort(pairs_list, col_by_idx, deterministic_order=True) + ) + self.columns = OrderedDict((k, self.columns[k]) for k in sorted_) + self.column_transfers = OrderedDict( + (k, self.column_transfers[k]) for k in sorted_ + ) + + def _transfer_elements_to_new_table(self) -> None: + assert self.new_table is None, "Can only create new table once" + + m = MetaData() + schema = self.table.schema + + if self.partial_reordering or self.add_col_ordering: + self._adjust_self_columns_for_partial_reordering() + + self.new_table = new_table = Table( + self.temp_table_name, + m, + *(list(self.columns.values()) + list(self.table_args)), + schema=schema, + **self.table_kwargs, + ) + + for const in ( + list(self.named_constraints.values()) + self.unnamed_constraints + ): + const_columns = {c.key for c in _columns_for_constraint(const)} + + if not const_columns.issubset(self.column_transfers): + continue + + const_copy: Constraint + if isinstance(const, ForeignKeyConstraint): + if _fk_is_self_referential(const): + # for self-referential constraint, refer to the + # *original* table name, and not _alembic_batch_temp. + # This is consistent with how we're handling + # FK constraints from other tables; we assume SQLite + # no foreign keys just keeps the names unchanged, so + # when we rename back, they match again. + const_copy = _copy( + const, schema=schema, target_table=self.table + ) + else: + # "target_table" for ForeignKeyConstraint.copy() is + # only used if the FK is detected as being + # self-referential, which we are handling above. + const_copy = _copy(const, schema=schema) + else: + const_copy = _copy( + const, schema=schema, target_table=new_table + ) + if isinstance(const, ForeignKeyConstraint): + self._setup_referent(m, const) + new_table.append_constraint(const_copy) + + def _gather_indexes_from_both_tables(self) -> List[Index]: + assert self.new_table is not None + idx: List[Index] = [] + + for idx_existing in self.indexes.values(): + # this is a lift-and-move from Table.to_metadata + + if idx_existing._column_flag: + continue + + idx_copy = Index( + idx_existing.name, + unique=idx_existing.unique, + *[ + _copy_expression(expr, self.new_table) + for expr in _idx_table_bound_expressions(idx_existing) + ], + _table=self.new_table, + **idx_existing.kwargs, + ) + idx.append(idx_copy) + + for index in self.new_indexes.values(): + idx.append( + Index( + index.name, + unique=index.unique, + *[self.new_table.c[col] for col in index.columns.keys()], + **index.kwargs, + ) + ) + return idx + + def _setup_referent( + self, metadata: MetaData, constraint: ForeignKeyConstraint + ) -> None: + spec = constraint.elements[0]._get_colspec() + parts = spec.split(".") + tname = parts[-2] + if len(parts) == 3: + referent_schema = parts[0] + else: + referent_schema = None + + if tname != self.temp_table_name: + key = sql_schema._get_table_key(tname, referent_schema) + + def colspec(elem: Any): + return elem._get_colspec() + + if key in metadata.tables: + t = metadata.tables[key] + for elem in constraint.elements: + colname = colspec(elem).split(".")[-1] + if colname not in t.c: + t.append_column(Column(colname, sqltypes.NULLTYPE)) + else: + Table( + tname, + metadata, + *[ + Column(n, sqltypes.NULLTYPE) + for n in [ + colspec(elem).split(".")[-1] + for elem in constraint.elements + ] + ], + schema=referent_schema, + ) + + def _create(self, op_impl: 
DefaultImpl) -> None: + self._transfer_elements_to_new_table() + + op_impl.prep_table_for_batch(self, self.table) + assert self.new_table is not None + op_impl.create_table(self.new_table) + + try: + op_impl._exec( + self.new_table.insert() + .inline() + .from_select( + list( + k + for k, transfer in self.column_transfers.items() + if "expr" in transfer + ), + select( + *[ + transfer["expr"] + for transfer in self.column_transfers.values() + if "expr" in transfer + ] + ), + ) + ) + op_impl.drop_table(self.table) + except: + op_impl.drop_table(self.new_table) + raise + else: + op_impl.rename_table( + self.temp_table_name, self.table.name, schema=self.table.schema + ) + self.new_table.name = self.table.name + try: + for idx in self._gather_indexes_from_both_tables(): + op_impl.create_index(idx) + finally: + self.new_table.name = self.temp_table_name + + def alter_column( + self, + table_name: str, + column_name: str, + nullable: Optional[bool] = None, + server_default: Optional[Union[Function[Any], str, bool]] = False, + name: Optional[str] = None, + type_: Optional[TypeEngine] = None, + autoincrement: Optional[Union[bool, Literal["auto"]]] = None, + comment: Union[str, Literal[False]] = False, + **kw, + ) -> None: + existing = self.columns[column_name] + existing_transfer: Dict[str, Any] = self.column_transfers[column_name] + if name is not None and name != column_name: + # note that we don't change '.key' - we keep referring + # to the renamed column by its old key in _create(). neat! + existing.name = name + existing_transfer["name"] = name + + existing_type = kw.get("existing_type", None) + if existing_type: + resolved_existing_type = _resolve_for_variant( + kw["existing_type"], self.impl.dialect + ) + + # pop named constraints for Boolean/Enum for rename + if ( + isinstance(resolved_existing_type, SchemaEventTarget) + and resolved_existing_type.name # type:ignore[attr-defined] # noqa E501 + ): + self.named_constraints.pop( + resolved_existing_type.name, # type:ignore[attr-defined] # noqa E501 + None, + ) + + if type_ is not None: + type_ = sqltypes.to_instance(type_) + # old type is being discarded so turn off eventing + # rules. Alternatively we can + # erase the events set up by this type, but this is simpler. 
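+ # (descriptive note, not from the upstream comments: with eventing
+ # disabled, the old Boolean/Enum CHECK constraint is simply never
+ # re-created on the new table; the new type's constraint arrives
+ # via the add_constraint() mentioned below)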
+ # we also ignore the drop_constraint that will come here from
+ # Operations.implementation_for(alter_column)
+
+ if isinstance(existing.type, SchemaEventTarget):
+ existing.type._create_events = ( # type:ignore[attr-defined]
+ existing.type.create_constraint # type:ignore[attr-defined] # noqa
+ ) = False
+
+ self.impl.cast_for_batch_migrate(
+ existing, existing_transfer, type_
+ )
+
+ existing.type = type_
+
+ # we *don't* however set events for the new type, because
+ # alter_column is invoked from
+ # Operations.implementation_for(alter_column) which already
+ # will emit an add_constraint()
+
+ if nullable is not None:
+ existing.nullable = nullable
+ if server_default is not False:
+ if server_default is None:
+ existing.server_default = None
+ else:
+ sql_schema.DefaultClause(
+ server_default # type: ignore[arg-type]
+ )._set_parent(existing)
+ if autoincrement is not None:
+ existing.autoincrement = bool(autoincrement)
+
+ if comment is not False:
+ existing.comment = comment
+
+ def _setup_dependencies_for_add_column(
+ self,
+ colname: str,
+ insert_before: Optional[str],
+ insert_after: Optional[str],
+ ) -> None:
+ index_cols = self.existing_ordering
+ col_indexes = {name: i for i, name in enumerate(index_cols)}
+
+ if not self.partial_reordering:
+ if insert_after:
+ if not insert_before:
+ if insert_after in col_indexes:
+ # insert after an existing column
+ idx = col_indexes[insert_after] + 1
+ if idx < len(index_cols):
+ insert_before = index_cols[idx]
+ else:
+ # insert after a column that is also new
+ insert_before = dict(self.add_col_ordering)[
+ insert_after
+ ]
+ if insert_before:
+ if not insert_after:
+ if insert_before in col_indexes:
+ # insert before an existing column
+ idx = col_indexes[insert_before] - 1
+ if idx >= 0:
+ insert_after = index_cols[idx]
+ else:
+ # insert before a column that is also new
+ insert_after = {
+ b: a for a, b in self.add_col_ordering
+ }[insert_before]
+
+ if insert_before:
+ self.add_col_ordering += ((colname, insert_before),)
+ if insert_after:
+ self.add_col_ordering += ((insert_after, colname),)
+
+ if (
+ not self.partial_reordering
+ and not insert_before
+ and not insert_after
+ and col_indexes
+ ):
+ self.add_col_ordering += ((index_cols[-1], colname),)
+
+ def add_column(
+ self,
+ table_name: str,
+ column: Column[Any],
+ insert_before: Optional[str] = None,
+ insert_after: Optional[str] = None,
+ **kw,
+ ) -> None:
+ self._setup_dependencies_for_add_column(
+ column.name, insert_before, insert_after
+ )
+ # we copy the column because operations.add_column()
+ # gives us a Column that is part of a Table already.
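+ # (descriptive note, not from the upstream comments: the new column
+ # gets an empty transfer dict below, so it is skipped by the
+ # INSERT..SELECT copy in _create(), which only selects columns
+ # whose transfer contains an "expr" entry)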
+ self.columns[column.name] = _copy(column, schema=self.table.schema) + self.column_transfers[column.name] = {} + + def drop_column( + self, + table_name: str, + column: Union[ColumnClause[Any], Column[Any]], + **kw, + ) -> None: + if column.name in self.table.primary_key.columns: + _remove_column_from_collection( + self.table.primary_key.columns, column + ) + del self.columns[column.name] + del self.column_transfers[column.name] + self.existing_ordering.remove(column.name) + + # pop named constraints for Boolean/Enum for rename + if ( + "existing_type" in kw + and isinstance(kw["existing_type"], SchemaEventTarget) + and kw["existing_type"].name # type:ignore[attr-defined] + ): + self.named_constraints.pop( + kw["existing_type"].name, None # type:ignore[attr-defined] + ) + + def create_column_comment(self, column): + """the batch table creation function will issue create_column_comment + on the real "impl" as part of the create table process. + + That is, the Column object will have the comment on it already, + so when it is received by add_column() it will be a normal part of + the CREATE TABLE and doesn't need an extra step here. + + """ + + def create_table_comment(self, table): + """the batch table creation function will issue create_table_comment + on the real "impl" as part of the create table process. + + """ + + def drop_table_comment(self, table): + """the batch table creation function will issue drop_table_comment + on the real "impl" as part of the create table process. + + """ + + def add_constraint(self, const: Constraint) -> None: + if not constraint_name_defined(const.name): + raise ValueError("Constraint must have a name") + if isinstance(const, sql_schema.PrimaryKeyConstraint): + if self.table.primary_key in self.unnamed_constraints: + self.unnamed_constraints.remove(self.table.primary_key) + + if constraint_name_string(const.name): + self.named_constraints[const.name] = const + else: + self.unnamed_constraints.append(const) + + def drop_constraint(self, const: Constraint) -> None: + if not const.name: + raise ValueError("Constraint must have a name") + try: + if const.name in self.col_named_constraints: + col, const = self.col_named_constraints.pop(const.name) + + for col_const in list(self.columns[col.name].constraints): + if col_const.name == const.name: + self.columns[col.name].constraints.remove(col_const) + elif constraint_name_string(const.name): + const = self.named_constraints.pop(const.name) + elif const in self.unnamed_constraints: + self.unnamed_constraints.remove(const) + + except KeyError: + if _is_type_bound(const): + # type-bound constraints are only included in the new + # table via their type object in any case, so ignore the + # drop_constraint() that comes here via the + # Operations.implementation_for(alter_column) + return + raise ValueError("No such constraint: '%s'" % const.name) + else: + if isinstance(const, PrimaryKeyConstraint): + for col in const.columns: + self.columns[col.name].primary_key = False + + def create_index(self, idx: Index) -> None: + self.new_indexes[idx.name] = idx # type: ignore[index] + + def drop_index(self, idx: Index) -> None: + try: + del self.indexes[idx.name] # type: ignore[arg-type] + except KeyError: + raise ValueError("No such index: '%s'" % idx.name) + + def rename_table(self, *arg, **kw): + raise NotImplementedError("TODO") diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/ops.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/ops.py new file mode 100644 index 
000000000..c9b1526b6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/ops.py @@ -0,0 +1,2842 @@ +from __future__ import annotations + +from abc import abstractmethod +import os +import pathlib +import re +from typing import Any +from typing import Callable +from typing import cast +from typing import Dict +from typing import FrozenSet +from typing import Iterator +from typing import List +from typing import MutableMapping +from typing import Optional +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +from sqlalchemy.types import NULLTYPE + +from . import schemaobj +from .base import BatchOperations +from .base import Operations +from .. import util +from ..util import sqla_compat + +if TYPE_CHECKING: + from typing import Literal + + from sqlalchemy.sql import Executable + from sqlalchemy.sql.elements import ColumnElement + from sqlalchemy.sql.elements import conv + from sqlalchemy.sql.elements import quoted_name + from sqlalchemy.sql.elements import TextClause + from sqlalchemy.sql.schema import CheckConstraint + from sqlalchemy.sql.schema import Column + from sqlalchemy.sql.schema import Computed + from sqlalchemy.sql.schema import Constraint + from sqlalchemy.sql.schema import ForeignKeyConstraint + from sqlalchemy.sql.schema import Identity + from sqlalchemy.sql.schema import Index + from sqlalchemy.sql.schema import MetaData + from sqlalchemy.sql.schema import PrimaryKeyConstraint + from sqlalchemy.sql.schema import SchemaItem + from sqlalchemy.sql.schema import Table + from sqlalchemy.sql.schema import UniqueConstraint + from sqlalchemy.sql.selectable import TableClause + from sqlalchemy.sql.type_api import TypeEngine + + from ..autogenerate.rewriter import Rewriter + from ..runtime.migration import MigrationContext + from ..script.revision import _RevIdType + +_T = TypeVar("_T", bound=Any) +_AC = TypeVar("_AC", bound="AddConstraintOp") + + +class MigrateOperation: + """base class for migration command and organization objects. + + This system is part of the operation extensibility API. + + .. seealso:: + + :ref:`operation_objects` + + :ref:`operation_plugins` + + :ref:`customizing_revision` + + """ + + @util.memoized_property + def info(self) -> Dict[Any, Any]: + """A dictionary that may be used to store arbitrary information + along with this :class:`.MigrateOperation` object. 
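+
+ For example (a hypothetical sketch; the key and value shown here are
+ arbitrary, not part of the API)::
+
+     op.info["requested_by"] = "data-team"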
+ + """ + return {} + + _mutations: FrozenSet[Rewriter] = frozenset() + + def reverse(self) -> MigrateOperation: + raise NotImplementedError + + def to_diff_tuple(self) -> Tuple[Any, ...]: + raise NotImplementedError + + +class AddConstraintOp(MigrateOperation): + """Represent an add constraint operation.""" + + add_constraint_ops = util.Dispatcher() + + @property + def constraint_type(self) -> str: + raise NotImplementedError() + + @classmethod + def register_add_constraint( + cls, type_: str + ) -> Callable[[Type[_AC]], Type[_AC]]: + def go(klass: Type[_AC]) -> Type[_AC]: + cls.add_constraint_ops.dispatch_for(type_)(klass.from_constraint) + return klass + + return go + + @classmethod + def from_constraint(cls, constraint: Constraint) -> AddConstraintOp: + return cls.add_constraint_ops.dispatch(constraint.__visit_name__)( # type: ignore[no-any-return] # noqa: E501 + constraint + ) + + @abstractmethod + def to_constraint( + self, migration_context: Optional[MigrationContext] = None + ) -> Constraint: + pass + + def reverse(self) -> DropConstraintOp: + return DropConstraintOp.from_constraint(self.to_constraint()) + + def to_diff_tuple(self) -> Tuple[str, Constraint]: + return ("add_constraint", self.to_constraint()) + + +@Operations.register_operation("drop_constraint") +@BatchOperations.register_operation("drop_constraint", "batch_drop_constraint") +class DropConstraintOp(MigrateOperation): + """Represent a drop constraint operation.""" + + def __init__( + self, + constraint_name: Optional[sqla_compat._ConstraintNameDefined], + table_name: str, + type_: Optional[str] = None, + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, + _reverse: Optional[AddConstraintOp] = None, + ) -> None: + self.constraint_name = constraint_name + self.table_name = table_name + self.constraint_type = type_ + self.schema = schema + self.if_exists = if_exists + self._reverse = _reverse + + def reverse(self) -> AddConstraintOp: + return AddConstraintOp.from_constraint(self.to_constraint()) + + def to_diff_tuple( + self, + ) -> Tuple[str, SchemaItem]: + if self.constraint_type == "foreignkey": + return ("remove_fk", self.to_constraint()) + else: + return ("remove_constraint", self.to_constraint()) + + @classmethod + def from_constraint(cls, constraint: Constraint) -> DropConstraintOp: + types = { + "unique_constraint": "unique", + "foreign_key_constraint": "foreignkey", + "primary_key_constraint": "primary", + "check_constraint": "check", + "column_check_constraint": "check", + "table_or_column_check_constraint": "check", + } + + constraint_table = sqla_compat._table_for_constraint(constraint) + return cls( + sqla_compat.constraint_name_or_none(constraint.name), + constraint_table.name, + schema=constraint_table.schema, + type_=types.get(constraint.__visit_name__), + _reverse=AddConstraintOp.from_constraint(constraint), + ) + + def to_constraint(self) -> Constraint: + if self._reverse is not None: + constraint = self._reverse.to_constraint() + constraint.name = self.constraint_name + constraint_table = sqla_compat._table_for_constraint(constraint) + constraint_table.name = self.table_name + constraint_table.schema = self.schema + + return constraint + else: + raise ValueError( + "constraint cannot be produced; " + "original constraint is not present" + ) + + @classmethod + def drop_constraint( + cls, + operations: Operations, + constraint_name: str, + table_name: str, + type_: Optional[str] = None, + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, + ) -> None: + r"""Drop 
a constraint of the given name, typically via DROP CONSTRAINT. + + :param constraint_name: name of the constraint. + :param table_name: table name. + :param type\_: optional, required on MySQL. can be + 'foreignkey', 'primary', 'unique', or 'check'. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_exists: If True, adds IF EXISTS operator when + dropping the constraint + + .. versionadded:: 1.16.0 + + """ + + op = cls( + constraint_name, + table_name, + type_=type_, + schema=schema, + if_exists=if_exists, + ) + return operations.invoke(op) + + @classmethod + def batch_drop_constraint( + cls, + operations: BatchOperations, + constraint_name: str, + type_: Optional[str] = None, + ) -> None: + """Issue a "drop constraint" instruction using the + current batch migration context. + + The batch form of this call omits the ``table_name`` and ``schema`` + arguments from the call. + + .. seealso:: + + :meth:`.Operations.drop_constraint` + + """ + op = cls( + constraint_name, + operations.impl.table_name, + type_=type_, + schema=operations.impl.schema, + ) + return operations.invoke(op) + + +@Operations.register_operation("create_primary_key") +@BatchOperations.register_operation( + "create_primary_key", "batch_create_primary_key" +) +@AddConstraintOp.register_add_constraint("primary_key_constraint") +class CreatePrimaryKeyOp(AddConstraintOp): + """Represent a create primary key operation.""" + + constraint_type = "primarykey" + + def __init__( + self, + constraint_name: Optional[sqla_compat._ConstraintNameDefined], + table_name: str, + columns: Sequence[str], + *, + schema: Optional[str] = None, + **kw: Any, + ) -> None: + self.constraint_name = constraint_name + self.table_name = table_name + self.columns = columns + self.schema = schema + self.kw = kw + + @classmethod + def from_constraint(cls, constraint: Constraint) -> CreatePrimaryKeyOp: + constraint_table = sqla_compat._table_for_constraint(constraint) + pk_constraint = cast("PrimaryKeyConstraint", constraint) + return cls( + sqla_compat.constraint_name_or_none(pk_constraint.name), + constraint_table.name, + pk_constraint.columns.keys(), + schema=constraint_table.schema, + **pk_constraint.dialect_kwargs, + ) + + def to_constraint( + self, migration_context: Optional[MigrationContext] = None + ) -> PrimaryKeyConstraint: + schema_obj = schemaobj.SchemaObjects(migration_context) + + return schema_obj.primary_key_constraint( + self.constraint_name, + self.table_name, + self.columns, + schema=self.schema, + **self.kw, + ) + + @classmethod + def create_primary_key( + cls, + operations: Operations, + constraint_name: Optional[str], + table_name: str, + columns: List[str], + *, + schema: Optional[str] = None, + ) -> None: + """Issue a "create primary key" instruction using the current + migration context. + + e.g.:: + + from alembic import op + + op.create_primary_key("pk_my_table", "my_table", ["id", "version"]) + + This internally generates a :class:`~sqlalchemy.schema.Table` object + containing the necessary columns, then generates a new + :class:`~sqlalchemy.schema.PrimaryKeyConstraint` + object which it then associates with the + :class:`~sqlalchemy.schema.Table`. + Any event listeners associated with this action will be fired + off normally. The :class:`~sqlalchemy.schema.AddConstraint` + construct is ultimately used to generate the ALTER statement. 
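+
+ Where a naming convention is in place, ``constraint_name`` may be
+ ``None``, as noted below; a hypothetical sketch::
+
+     op.create_primary_key(None, "my_table", ["id"])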
+ + :param constraint_name: Name of the primary key constraint. The name + is necessary so that an ALTER statement can be emitted. For setups + that use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions` + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param table_name: String name of the target table. + :param columns: a list of string column names to be applied to the + primary key constraint. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ + op = cls(constraint_name, table_name, columns, schema=schema) + return operations.invoke(op) + + @classmethod + def batch_create_primary_key( + cls, + operations: BatchOperations, + constraint_name: Optional[str], + columns: List[str], + ) -> None: + """Issue a "create primary key" instruction using the + current batch migration context. + + The batch form of this call omits the ``table_name`` and ``schema`` + arguments from the call. + + .. seealso:: + + :meth:`.Operations.create_primary_key` + + """ + op = cls( + constraint_name, + operations.impl.table_name, + columns, + schema=operations.impl.schema, + ) + return operations.invoke(op) + + +@Operations.register_operation("create_unique_constraint") +@BatchOperations.register_operation( + "create_unique_constraint", "batch_create_unique_constraint" +) +@AddConstraintOp.register_add_constraint("unique_constraint") +class CreateUniqueConstraintOp(AddConstraintOp): + """Represent a create unique constraint operation.""" + + constraint_type = "unique" + + def __init__( + self, + constraint_name: Optional[sqla_compat._ConstraintNameDefined], + table_name: str, + columns: Sequence[str], + *, + schema: Optional[str] = None, + **kw: Any, + ) -> None: + self.constraint_name = constraint_name + self.table_name = table_name + self.columns = columns + self.schema = schema + self.kw = kw + + @classmethod + def from_constraint( + cls, constraint: Constraint + ) -> CreateUniqueConstraintOp: + constraint_table = sqla_compat._table_for_constraint(constraint) + + uq_constraint = cast("UniqueConstraint", constraint) + + kw: Dict[str, Any] = {} + if uq_constraint.deferrable: + kw["deferrable"] = uq_constraint.deferrable + if uq_constraint.initially: + kw["initially"] = uq_constraint.initially + kw.update(uq_constraint.dialect_kwargs) + return cls( + sqla_compat.constraint_name_or_none(uq_constraint.name), + constraint_table.name, + [c.name for c in uq_constraint.columns], + schema=constraint_table.schema, + **kw, + ) + + def to_constraint( + self, migration_context: Optional[MigrationContext] = None + ) -> UniqueConstraint: + schema_obj = schemaobj.SchemaObjects(migration_context) + return schema_obj.unique_constraint( + self.constraint_name, + self.table_name, + self.columns, + schema=self.schema, + **self.kw, + ) + + @classmethod + def create_unique_constraint( + cls, + operations: Operations, + constraint_name: Optional[str], + table_name: str, + columns: Sequence[str], + *, + schema: Optional[str] = None, + **kw: Any, + ) -> Any: + """Issue a "create unique constraint" instruction using the + current migration context. 
+ + e.g.:: + + from alembic import op + op.create_unique_constraint("uq_user_name", "user", ["name"]) + + This internally generates a :class:`~sqlalchemy.schema.Table` object + containing the necessary columns, then generates a new + :class:`~sqlalchemy.schema.UniqueConstraint` + object which it then associates with the + :class:`~sqlalchemy.schema.Table`. + Any event listeners associated with this action will be fired + off normally. The :class:`~sqlalchemy.schema.AddConstraint` + construct is ultimately used to generate the ALTER statement. + + :param name: Name of the unique constraint. The name is necessary + so that an ALTER statement can be emitted. For setups that + use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions`, + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param table_name: String name of the source table. + :param columns: a list of string column names in the + source table. + :param deferrable: optional bool. If set, emit DEFERRABLE or + NOT DEFERRABLE when issuing DDL for this constraint. + :param initially: optional string. If set, emit INITIALLY + when issuing DDL for this constraint. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ + + op = cls(constraint_name, table_name, columns, schema=schema, **kw) + return operations.invoke(op) + + @classmethod + def batch_create_unique_constraint( + cls, + operations: BatchOperations, + constraint_name: str, + columns: Sequence[str], + **kw: Any, + ) -> Any: + """Issue a "create unique constraint" instruction using the + current batch migration context. + + The batch form of this call omits the ``source`` and ``schema`` + arguments from the call. + + .. 
seealso:: + + :meth:`.Operations.create_unique_constraint` + + """ + kw["schema"] = operations.impl.schema + op = cls(constraint_name, operations.impl.table_name, columns, **kw) + return operations.invoke(op) + + +@Operations.register_operation("create_foreign_key") +@BatchOperations.register_operation( + "create_foreign_key", "batch_create_foreign_key" +) +@AddConstraintOp.register_add_constraint("foreign_key_constraint") +class CreateForeignKeyOp(AddConstraintOp): + """Represent a create foreign key constraint operation.""" + + constraint_type = "foreignkey" + + def __init__( + self, + constraint_name: Optional[sqla_compat._ConstraintNameDefined], + source_table: str, + referent_table: str, + local_cols: List[str], + remote_cols: List[str], + **kw: Any, + ) -> None: + self.constraint_name = constraint_name + self.source_table = source_table + self.referent_table = referent_table + self.local_cols = local_cols + self.remote_cols = remote_cols + self.kw = kw + + def to_diff_tuple(self) -> Tuple[str, ForeignKeyConstraint]: + return ("add_fk", self.to_constraint()) + + @classmethod + def from_constraint(cls, constraint: Constraint) -> CreateForeignKeyOp: + fk_constraint = cast("ForeignKeyConstraint", constraint) + kw: Dict[str, Any] = {} + if fk_constraint.onupdate: + kw["onupdate"] = fk_constraint.onupdate + if fk_constraint.ondelete: + kw["ondelete"] = fk_constraint.ondelete + if fk_constraint.initially: + kw["initially"] = fk_constraint.initially + if fk_constraint.deferrable: + kw["deferrable"] = fk_constraint.deferrable + if fk_constraint.use_alter: + kw["use_alter"] = fk_constraint.use_alter + if fk_constraint.match: + kw["match"] = fk_constraint.match + + ( + source_schema, + source_table, + source_columns, + target_schema, + target_table, + target_columns, + onupdate, + ondelete, + deferrable, + initially, + ) = sqla_compat._fk_spec(fk_constraint) + + kw["source_schema"] = source_schema + kw["referent_schema"] = target_schema + kw.update(fk_constraint.dialect_kwargs) + return cls( + sqla_compat.constraint_name_or_none(fk_constraint.name), + source_table, + target_table, + source_columns, + target_columns, + **kw, + ) + + def to_constraint( + self, migration_context: Optional[MigrationContext] = None + ) -> ForeignKeyConstraint: + schema_obj = schemaobj.SchemaObjects(migration_context) + return schema_obj.foreign_key_constraint( + self.constraint_name, + self.source_table, + self.referent_table, + self.local_cols, + self.remote_cols, + **self.kw, + ) + + @classmethod + def create_foreign_key( + cls, + operations: Operations, + constraint_name: Optional[str], + source_table: str, + referent_table: str, + local_cols: List[str], + remote_cols: List[str], + *, + onupdate: Optional[str] = None, + ondelete: Optional[str] = None, + deferrable: Optional[bool] = None, + initially: Optional[str] = None, + match: Optional[str] = None, + source_schema: Optional[str] = None, + referent_schema: Optional[str] = None, + **dialect_kw: Any, + ) -> None: + """Issue a "create foreign key" instruction using the + current migration context. + + e.g.:: + + from alembic import op + + op.create_foreign_key( + "fk_user_address", + "address", + "user", + ["user_id"], + ["id"], + ) + + This internally generates a :class:`~sqlalchemy.schema.Table` object + containing the necessary columns, then generates a new + :class:`~sqlalchemy.schema.ForeignKeyConstraint` + object which it then associates with the + :class:`~sqlalchemy.schema.Table`. 
+ Any event listeners associated with this action will be fired + off normally. The :class:`~sqlalchemy.schema.AddConstraint` + construct is ultimately used to generate the ALTER statement. + + :param constraint_name: Name of the foreign key constraint. The name + is necessary so that an ALTER statement can be emitted. For setups + that use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions`, + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param source_table: String name of the source table. + :param referent_table: String name of the destination table. + :param local_cols: a list of string column names in the + source table. + :param remote_cols: a list of string column names in the + remote table. + :param onupdate: Optional string. If set, emit ON UPDATE when + issuing DDL for this constraint. Typical values include CASCADE, + DELETE and RESTRICT. + :param ondelete: Optional string. If set, emit ON DELETE when + issuing DDL for this constraint. Typical values include CASCADE, + DELETE and RESTRICT. + :param deferrable: optional bool. If set, emit DEFERRABLE or NOT + DEFERRABLE when issuing DDL for this constraint. + :param source_schema: Optional schema name of the source table. + :param referent_schema: Optional schema name of the destination table. + + """ + + op = cls( + constraint_name, + source_table, + referent_table, + local_cols, + remote_cols, + onupdate=onupdate, + ondelete=ondelete, + deferrable=deferrable, + source_schema=source_schema, + referent_schema=referent_schema, + initially=initially, + match=match, + **dialect_kw, + ) + return operations.invoke(op) + + @classmethod + def batch_create_foreign_key( + cls, + operations: BatchOperations, + constraint_name: Optional[str], + referent_table: str, + local_cols: List[str], + remote_cols: List[str], + *, + referent_schema: Optional[str] = None, + onupdate: Optional[str] = None, + ondelete: Optional[str] = None, + deferrable: Optional[bool] = None, + initially: Optional[str] = None, + match: Optional[str] = None, + **dialect_kw: Any, + ) -> None: + """Issue a "create foreign key" instruction using the + current batch migration context. + + The batch form of this call omits the ``source`` and ``source_schema`` + arguments from the call. + + e.g.:: + + with batch_alter_table("address") as batch_op: + batch_op.create_foreign_key( + "fk_user_address", + "user", + ["user_id"], + ["id"], + ) + + .. 
seealso:: + + :meth:`.Operations.create_foreign_key` + + """ + op = cls( + constraint_name, + operations.impl.table_name, + referent_table, + local_cols, + remote_cols, + onupdate=onupdate, + ondelete=ondelete, + deferrable=deferrable, + source_schema=operations.impl.schema, + referent_schema=referent_schema, + initially=initially, + match=match, + **dialect_kw, + ) + return operations.invoke(op) + + +@Operations.register_operation("create_check_constraint") +@BatchOperations.register_operation( + "create_check_constraint", "batch_create_check_constraint" +) +@AddConstraintOp.register_add_constraint("check_constraint") +@AddConstraintOp.register_add_constraint("table_or_column_check_constraint") +@AddConstraintOp.register_add_constraint("column_check_constraint") +class CreateCheckConstraintOp(AddConstraintOp): + """Represent a create check constraint operation.""" + + constraint_type = "check" + + def __init__( + self, + constraint_name: Optional[sqla_compat._ConstraintNameDefined], + table_name: str, + condition: Union[str, TextClause, ColumnElement[Any]], + *, + schema: Optional[str] = None, + **kw: Any, + ) -> None: + self.constraint_name = constraint_name + self.table_name = table_name + self.condition = condition + self.schema = schema + self.kw = kw + + @classmethod + def from_constraint( + cls, constraint: Constraint + ) -> CreateCheckConstraintOp: + constraint_table = sqla_compat._table_for_constraint(constraint) + + ck_constraint = cast("CheckConstraint", constraint) + return cls( + sqla_compat.constraint_name_or_none(ck_constraint.name), + constraint_table.name, + cast("ColumnElement[Any]", ck_constraint.sqltext), + schema=constraint_table.schema, + **ck_constraint.dialect_kwargs, + ) + + def to_constraint( + self, migration_context: Optional[MigrationContext] = None + ) -> CheckConstraint: + schema_obj = schemaobj.SchemaObjects(migration_context) + return schema_obj.check_constraint( + self.constraint_name, + self.table_name, + self.condition, + schema=self.schema, + **self.kw, + ) + + @classmethod + def create_check_constraint( + cls, + operations: Operations, + constraint_name: Optional[str], + table_name: str, + condition: Union[str, ColumnElement[bool], TextClause], + *, + schema: Optional[str] = None, + **kw: Any, + ) -> None: + """Issue a "create check constraint" instruction using the + current migration context. + + e.g.:: + + from alembic import op + from sqlalchemy.sql import column, func + + op.create_check_constraint( + "ck_user_name_len", + "user", + func.len(column("name")) > 5, + ) + + CHECK constraints are usually against a SQL expression, so ad-hoc + table metadata is usually needed. The function will convert the given + arguments into a :class:`sqlalchemy.schema.CheckConstraint` bound + to an anonymous table in order to emit the CREATE statement. + + :param name: Name of the check constraint. The name is necessary + so that an ALTER statement can be emitted. For setups that + use an automated naming scheme such as that described at + :ref:`sqla:constraint_naming_conventions`, + ``name`` here can be ``None``, as the event listener will + apply the name to the constraint object when it is associated + with the table. + :param table_name: String name of the source table. + :param condition: SQL expression that's the condition of the + constraint. Can be a string or SQLAlchemy expression language + structure. + :param deferrable: optional bool. If set, emit DEFERRABLE or + NOT DEFERRABLE when issuing DDL for this constraint. + :param initially: optional string. 
If set, emit INITIALLY + when issuing DDL for this constraint. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ + op = cls(constraint_name, table_name, condition, schema=schema, **kw) + return operations.invoke(op) + + @classmethod + def batch_create_check_constraint( + cls, + operations: BatchOperations, + constraint_name: str, + condition: Union[str, ColumnElement[bool], TextClause], + **kw: Any, + ) -> None: + """Issue a "create check constraint" instruction using the + current batch migration context. + + The batch form of this call omits the ``source`` and ``schema`` + arguments from the call. + + .. seealso:: + + :meth:`.Operations.create_check_constraint` + + """ + op = cls( + constraint_name, + operations.impl.table_name, + condition, + schema=operations.impl.schema, + **kw, + ) + return operations.invoke(op) + + +@Operations.register_operation("create_index") +@BatchOperations.register_operation("create_index", "batch_create_index") +class CreateIndexOp(MigrateOperation): + """Represent a create index operation.""" + + def __init__( + self, + index_name: Optional[str], + table_name: str, + columns: Sequence[Union[str, TextClause, ColumnElement[Any]]], + *, + schema: Optional[str] = None, + unique: bool = False, + if_not_exists: Optional[bool] = None, + **kw: Any, + ) -> None: + self.index_name = index_name + self.table_name = table_name + self.columns = columns + self.schema = schema + self.unique = unique + self.if_not_exists = if_not_exists + self.kw = kw + + def reverse(self) -> DropIndexOp: + return DropIndexOp.from_index(self.to_index()) + + def to_diff_tuple(self) -> Tuple[str, Index]: + return ("add_index", self.to_index()) + + @classmethod + def from_index(cls, index: Index) -> CreateIndexOp: + assert index.table is not None + return cls( + index.name, + index.table.name, + index.expressions, + schema=index.table.schema, + unique=index.unique, + **index.kwargs, + ) + + def to_index( + self, migration_context: Optional[MigrationContext] = None + ) -> Index: + schema_obj = schemaobj.SchemaObjects(migration_context) + + idx = schema_obj.index( + self.index_name, + self.table_name, + self.columns, + schema=self.schema, + unique=self.unique, + **self.kw, + ) + return idx + + @classmethod + def create_index( + cls, + operations: Operations, + index_name: Optional[str], + table_name: str, + columns: Sequence[Union[str, TextClause, ColumnElement[Any]]], + *, + schema: Optional[str] = None, + unique: bool = False, + if_not_exists: Optional[bool] = None, + **kw: Any, + ) -> None: + r"""Issue a "create index" instruction using the current + migration context. + + e.g.:: + + from alembic import op + + op.create_index("ik_test", "t1", ["foo", "bar"]) + + Functional indexes can be produced by using the + :func:`sqlalchemy.sql.expression.text` construct:: + + from alembic import op + from sqlalchemy import text + + op.create_index("ik_test", "t1", [text("lower(foo)")]) + + :param index_name: name of the index. + :param table_name: name of the owning table. + :param columns: a list consisting of string column names and/or + :func:`~sqlalchemy.sql.expression.text` constructs. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param unique: If True, create a unique index. 
+
+ :param quote: Force quoting of this column's name on or off,
+ corresponding to ``True`` or ``False``. When left at its default
+ of ``None``, the column identifier will be quoted according to
+ whether the name is case sensitive (identifiers with at least one
+ upper case character are treated as case sensitive), or if it's a
+ reserved word. This flag is only needed to force quoting of a
+ reserved word which is not known by the SQLAlchemy dialect.
+
+ :param if_not_exists: If True, adds IF NOT EXISTS operator when
+ creating the new index.
+
+ .. versionadded:: 1.12.0
+
+ :param \**kw: Additional keyword arguments not mentioned above are
+ dialect specific, and passed in the form
+ ``<dialectname>_<argname>``.
+ See the documentation regarding an individual dialect at
+ :ref:`dialect_toplevel` for detail on documented arguments.
+
+ """
+ op = cls(
+ index_name,
+ table_name,
+ columns,
+ schema=schema,
+ unique=unique,
+ if_not_exists=if_not_exists,
+ **kw,
+ )
+ return operations.invoke(op)
+
+ @classmethod
+ def batch_create_index(
+ cls,
+ operations: BatchOperations,
+ index_name: str,
+ columns: List[str],
+ **kw: Any,
+ ) -> None:
+ """Issue a "create index" instruction using the
+ current batch migration context.
+
+ .. seealso::
+
+ :meth:`.Operations.create_index`
+
+ """
+
+ op = cls(
+ index_name,
+ operations.impl.table_name,
+ columns,
+ schema=operations.impl.schema,
+ **kw,
+ )
+ return operations.invoke(op)
+
+
+@Operations.register_operation("drop_index")
+@BatchOperations.register_operation("drop_index", "batch_drop_index")
+class DropIndexOp(MigrateOperation):
+ """Represent a drop index operation."""
+
+ def __init__(
+ self,
+ index_name: Union[quoted_name, str, conv],
+ table_name: Optional[str] = None,
+ *,
+ schema: Optional[str] = None,
+ if_exists: Optional[bool] = None,
+ _reverse: Optional[CreateIndexOp] = None,
+ **kw: Any,
+ ) -> None:
+ self.index_name = index_name
+ self.table_name = table_name
+ self.schema = schema
+ self.if_exists = if_exists
+ self._reverse = _reverse
+ self.kw = kw
+
+ def to_diff_tuple(self) -> Tuple[str, Index]:
+ return ("remove_index", self.to_index())
+
+ def reverse(self) -> CreateIndexOp:
+ return CreateIndexOp.from_index(self.to_index())
+
+ @classmethod
+ def from_index(cls, index: Index) -> DropIndexOp:
+ assert index.table is not None
+ return cls(
+ index.name, # type: ignore[arg-type]
+ table_name=index.table.name,
+ schema=index.table.schema,
+ _reverse=CreateIndexOp.from_index(index),
+ unique=index.unique,
+ **index.kwargs,
+ )
+
+ def to_index(
+ self, migration_context: Optional[MigrationContext] = None
+ ) -> Index:
+ schema_obj = schemaobj.SchemaObjects(migration_context)
+
+ # need a dummy column name here since SQLAlchemy
+ # 0.7.6 and further raises on Index with no columns
+ return schema_obj.index(
+ self.index_name,
+ self.table_name,
+ self._reverse.columns if self._reverse else ["x"],
+ schema=self.schema,
+ **self.kw,
+ )
+
+ @classmethod
+ def drop_index(
+ cls,
+ operations: Operations,
+ index_name: str,
+ table_name: Optional[str] = None,
+ *,
+ schema: Optional[str] = None,
+ if_exists: Optional[bool] = None,
+ **kw: Any,
+ ) -> None:
+ r"""Issue a "drop index" instruction using the current
+ migration context.
+
+ e.g.::
+
+ drop_index("accounts")
+
+ :param index_name: name of the index.
+ :param table_name: name of the owning table. Some
+ backends such as Microsoft SQL Server require this.
+ :param schema: Optional schema name to operate within. To control
+ quoting of the schema outside of the default behavior, use
+ the SQLAlchemy construct
+ :class:`~sqlalchemy.sql.elements.quoted_name`.
+
+ :param if_exists: If True, adds IF EXISTS operator when
+ dropping the index.
+
+ .. versionadded:: 1.12.0
+
+ :param \**kw: Additional keyword arguments not mentioned above are
+ dialect specific, and passed in the form
+ ``<dialectname>_<argname>``.
+ See the documentation regarding an individual dialect at
+ :ref:`dialect_toplevel` for detail on documented arguments.
+
+ """
+ op = cls(
+ index_name,
+ table_name=table_name,
+ schema=schema,
+ if_exists=if_exists,
+ **kw,
+ )
+ return operations.invoke(op)
+
+ @classmethod
+ def batch_drop_index(
+ cls, operations: BatchOperations, index_name: str, **kw: Any
+ ) -> None:
+ """Issue a "drop index" instruction using the
+ current batch migration context.
+
+ .. seealso::
+
+ :meth:`.Operations.drop_index`
+
+ """
+
+ op = cls(
+ index_name,
+ table_name=operations.impl.table_name,
+ schema=operations.impl.schema,
+ **kw,
+ )
+ return operations.invoke(op)
+
+
+@Operations.register_operation("create_table")
+class CreateTableOp(MigrateOperation):
+ """Represent a create table operation."""
+
+ def __init__(
+ self,
+ table_name: str,
+ columns: Sequence[SchemaItem],
+ *,
+ schema: Optional[str] = None,
+ if_not_exists: Optional[bool] = None,
+ _namespace_metadata: Optional[MetaData] = None,
+ _constraints_included: bool = False,
+ **kw: Any,
+ ) -> None:
+ self.table_name = table_name
+ self.columns = columns
+ self.schema = schema
+ self.if_not_exists = if_not_exists
+ self.info = kw.pop("info", {})
+ self.comment = kw.pop("comment", None)
+ self.prefixes = kw.pop("prefixes", None)
+ self.kw = kw
+ self._namespace_metadata = _namespace_metadata
+ self._constraints_included = _constraints_included
+
+ def reverse(self) -> DropTableOp:
+ return DropTableOp.from_table(
+ self.to_table(), _namespace_metadata=self._namespace_metadata
+ )
+
+ def to_diff_tuple(self) -> Tuple[str, Table]:
+ return ("add_table", self.to_table())
+
+ @classmethod
+ def from_table(
+ cls, table: Table, *, _namespace_metadata: Optional[MetaData] = None
+ ) -> CreateTableOp:
+ if _namespace_metadata is None:
+ _namespace_metadata = table.metadata
+
+ return cls(
+ table.name,
+ list(table.c) + list(table.constraints),
+ schema=table.schema,
+ _namespace_metadata=_namespace_metadata,
+ # given a Table() object, this Table will contain full Index()
+ # and UniqueConstraint objects already constructed in response to
+ # each unique=True / index=True flag on a Column. Carry this
+ # state along so that when we re-convert back into a Table, we
+ # skip unique=True/index=True so that these constraints are
+ # not doubled up.
see #844 #848 + _constraints_included=True, + comment=table.comment, + info=dict(table.info), + prefixes=list(table._prefixes), + **table.kwargs, + ) + + def to_table( + self, migration_context: Optional[MigrationContext] = None + ) -> Table: + schema_obj = schemaobj.SchemaObjects(migration_context) + + return schema_obj.table( + self.table_name, + *self.columns, + schema=self.schema, + prefixes=list(self.prefixes) if self.prefixes else [], + comment=self.comment, + info=self.info.copy() if self.info else {}, + _constraints_included=self._constraints_included, + **self.kw, + ) + + @classmethod + def create_table( + cls, + operations: Operations, + table_name: str, + *columns: SchemaItem, + if_not_exists: Optional[bool] = None, + **kw: Any, + ) -> Table: + r"""Issue a "create table" instruction using the current migration + context. + + This directive receives an argument list similar to that of the + traditional :class:`sqlalchemy.schema.Table` construct, but without the + metadata:: + + from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, Column + from alembic import op + + op.create_table( + "account", + Column("id", INTEGER, primary_key=True), + Column("name", VARCHAR(50), nullable=False), + Column("description", NVARCHAR(200)), + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + Note that :meth:`.create_table` accepts + :class:`~sqlalchemy.schema.Column` + constructs directly from the SQLAlchemy library. In particular, + default values to be created on the database side are + specified using the ``server_default`` parameter, and not + ``default`` which only specifies Python-side defaults:: + + from alembic import op + from sqlalchemy import Column, TIMESTAMP, func + + # specify "DEFAULT NOW" along with the "timestamp" column + op.create_table( + "account", + Column("id", INTEGER, primary_key=True), + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + The function also returns a newly created + :class:`~sqlalchemy.schema.Table` object, corresponding to the table + specification given, which is suitable for + immediate SQL operations, in particular + :meth:`.Operations.bulk_insert`:: + + from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, Column + from alembic import op + + account_table = op.create_table( + "account", + Column("id", INTEGER, primary_key=True), + Column("name", VARCHAR(50), nullable=False), + Column("description", NVARCHAR(200)), + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + op.bulk_insert( + account_table, + [ + {"name": "A1", "description": "account 1"}, + {"name": "A2", "description": "account 2"}, + ], + ) + + :param table_name: Name of the table + :param \*columns: collection of :class:`~sqlalchemy.schema.Column` + objects within + the table, as well as optional :class:`~sqlalchemy.schema.Constraint` + objects + and :class:`~.sqlalchemy.schema.Index` objects. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_not_exists: If True, adds IF NOT EXISTS operator when + creating the new table. + + .. versionadded:: 1.13.3 + :param \**kw: Other keyword arguments are passed to the underlying + :class:`sqlalchemy.schema.Table` object created for the command. + + :return: the :class:`~sqlalchemy.schema.Table` object corresponding + to the parameters given. 
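+
+ A hypothetical sketch of the ``if_not_exists`` flag described above::
+
+     op.create_table(
+         "account",
+         Column("id", INTEGER, primary_key=True),
+         if_not_exists=True,
+     )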
+ + """ + op = cls(table_name, columns, if_not_exists=if_not_exists, **kw) + return operations.invoke(op) + + +@Operations.register_operation("drop_table") +class DropTableOp(MigrateOperation): + """Represent a drop table operation.""" + + def __init__( + self, + table_name: str, + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, + table_kw: Optional[MutableMapping[Any, Any]] = None, + _reverse: Optional[CreateTableOp] = None, + ) -> None: + self.table_name = table_name + self.schema = schema + self.if_exists = if_exists + self.table_kw = table_kw or {} + self.comment = self.table_kw.pop("comment", None) + self.info = self.table_kw.pop("info", None) + self.prefixes = self.table_kw.pop("prefixes", None) + self._reverse = _reverse + + def to_diff_tuple(self) -> Tuple[str, Table]: + return ("remove_table", self.to_table()) + + def reverse(self) -> CreateTableOp: + return CreateTableOp.from_table(self.to_table()) + + @classmethod + def from_table( + cls, table: Table, *, _namespace_metadata: Optional[MetaData] = None + ) -> DropTableOp: + return cls( + table.name, + schema=table.schema, + table_kw={ + "comment": table.comment, + "info": dict(table.info), + "prefixes": list(table._prefixes), + **table.kwargs, + }, + _reverse=CreateTableOp.from_table( + table, _namespace_metadata=_namespace_metadata + ), + ) + + def to_table( + self, migration_context: Optional[MigrationContext] = None + ) -> Table: + if self._reverse: + cols_and_constraints = self._reverse.columns + else: + cols_and_constraints = [] + + schema_obj = schemaobj.SchemaObjects(migration_context) + t = schema_obj.table( + self.table_name, + *cols_and_constraints, + comment=self.comment, + info=self.info.copy() if self.info else {}, + prefixes=list(self.prefixes) if self.prefixes else [], + schema=self.schema, + _constraints_included=( + self._reverse._constraints_included if self._reverse else False + ), + **self.table_kw, + ) + return t + + @classmethod + def drop_table( + cls, + operations: Operations, + table_name: str, + *, + schema: Optional[str] = None, + if_exists: Optional[bool] = None, + **kw: Any, + ) -> None: + r"""Issue a "drop table" instruction using the current + migration context. + + + e.g.:: + + drop_table("accounts") + + :param table_name: Name of the table + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_exists: If True, adds IF EXISTS operator when + dropping the table. + + .. versionadded:: 1.13.3 + :param \**kw: Other keyword arguments are passed to the underlying + :class:`sqlalchemy.schema.Table` object created for the command. 
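+
+ A hypothetical sketch of the ``if_exists`` flag described above::
+
+     op.drop_table("account", if_exists=True)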
+ + """ + op = cls(table_name, schema=schema, if_exists=if_exists, table_kw=kw) + operations.invoke(op) + + +class AlterTableOp(MigrateOperation): + """Represent an alter table operation.""" + + def __init__( + self, + table_name: str, + *, + schema: Optional[str] = None, + ) -> None: + self.table_name = table_name + self.schema = schema + + +@Operations.register_operation("rename_table") +class RenameTableOp(AlterTableOp): + """Represent a rename table operation.""" + + def __init__( + self, + old_table_name: str, + new_table_name: str, + *, + schema: Optional[str] = None, + ) -> None: + super().__init__(old_table_name, schema=schema) + self.new_table_name = new_table_name + + @classmethod + def rename_table( + cls, + operations: Operations, + old_table_name: str, + new_table_name: str, + *, + schema: Optional[str] = None, + ) -> None: + """Emit an ALTER TABLE to rename a table. + + :param old_table_name: old name. + :param new_table_name: new name. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + + """ + op = cls(old_table_name, new_table_name, schema=schema) + return operations.invoke(op) + + +@Operations.register_operation("create_table_comment") +@BatchOperations.register_operation( + "create_table_comment", "batch_create_table_comment" +) +class CreateTableCommentOp(AlterTableOp): + """Represent a COMMENT ON `table` operation.""" + + def __init__( + self, + table_name: str, + comment: Optional[str], + *, + schema: Optional[str] = None, + existing_comment: Optional[str] = None, + ) -> None: + self.table_name = table_name + self.comment = comment + self.existing_comment = existing_comment + self.schema = schema + + @classmethod + def create_table_comment( + cls, + operations: Operations, + table_name: str, + comment: Optional[str], + *, + existing_comment: Optional[str] = None, + schema: Optional[str] = None, + ) -> None: + """Emit a COMMENT ON operation to set the comment for a table. + + :param table_name: string name of the target table. + :param comment: string value of the comment being registered against + the specified table. + :param existing_comment: String value of a comment + already registered on the specified table, used within autogenerate + so that the operation is reversible, but not required for direct + use. + + .. seealso:: + + :meth:`.Operations.drop_table_comment` + + :paramref:`.Operations.alter_column.comment` + + """ + + op = cls( + table_name, + comment, + existing_comment=existing_comment, + schema=schema, + ) + return operations.invoke(op) + + @classmethod + def batch_create_table_comment( + cls, + operations: BatchOperations, + comment: Optional[str], + *, + existing_comment: Optional[str] = None, + ) -> None: + """Emit a COMMENT ON operation to set the comment for a table + using the current batch migration context. + + :param comment: string value of the comment being registered against + the specified table. + :param existing_comment: String value of a comment + already registered on the specified table, used within autogenerate + so that the operation is reversible, but not required for direct + use. 
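+
+ A hypothetical sketch using the standard ``batch_alter_table()``
+ context manager (the table name and comment are illustrative)::
+
+     with op.batch_alter_table("account") as batch_op:
+         batch_op.create_table_comment("user accounts")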
+ + """ + + op = cls( + operations.impl.table_name, + comment, + existing_comment=existing_comment, + schema=operations.impl.schema, + ) + return operations.invoke(op) + + def reverse(self) -> Union[CreateTableCommentOp, DropTableCommentOp]: + """Reverses the COMMENT ON operation against a table.""" + if self.existing_comment is None: + return DropTableCommentOp( + self.table_name, + existing_comment=self.comment, + schema=self.schema, + ) + else: + return CreateTableCommentOp( + self.table_name, + self.existing_comment, + existing_comment=self.comment, + schema=self.schema, + ) + + def to_table( + self, migration_context: Optional[MigrationContext] = None + ) -> Table: + schema_obj = schemaobj.SchemaObjects(migration_context) + + return schema_obj.table( + self.table_name, schema=self.schema, comment=self.comment + ) + + def to_diff_tuple(self) -> Tuple[Any, ...]: + return ("add_table_comment", self.to_table(), self.existing_comment) + + +@Operations.register_operation("drop_table_comment") +@BatchOperations.register_operation( + "drop_table_comment", "batch_drop_table_comment" +) +class DropTableCommentOp(AlterTableOp): + """Represent an operation to remove the comment from a table.""" + + def __init__( + self, + table_name: str, + *, + schema: Optional[str] = None, + existing_comment: Optional[str] = None, + ) -> None: + self.table_name = table_name + self.existing_comment = existing_comment + self.schema = schema + + @classmethod + def drop_table_comment( + cls, + operations: Operations, + table_name: str, + *, + existing_comment: Optional[str] = None, + schema: Optional[str] = None, + ) -> None: + """Issue a "drop table comment" operation to + remove an existing comment set on a table. + + :param table_name: string name of the target table. + :param existing_comment: An optional string value of a comment already + registered on the specified table. + + .. seealso:: + + :meth:`.Operations.create_table_comment` + + :paramref:`.Operations.alter_column.comment` + + """ + + op = cls(table_name, existing_comment=existing_comment, schema=schema) + return operations.invoke(op) + + @classmethod + def batch_drop_table_comment( + cls, + operations: BatchOperations, + *, + existing_comment: Optional[str] = None, + ) -> None: + """Issue a "drop table comment" operation to + remove an existing comment set on a table using the current + batch operations context. + + :param existing_comment: An optional string value of a comment already + registered on the specified table. 
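+
+ A hypothetical sketch (the table name is illustrative)::
+
+     with op.batch_alter_table("account") as batch_op:
+         batch_op.drop_table_comment()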
+ + """ + + op = cls( + operations.impl.table_name, + existing_comment=existing_comment, + schema=operations.impl.schema, + ) + return operations.invoke(op) + + def reverse(self) -> CreateTableCommentOp: + """Reverses the COMMENT ON operation against a table.""" + return CreateTableCommentOp( + self.table_name, self.existing_comment, schema=self.schema + ) + + def to_table( + self, migration_context: Optional[MigrationContext] = None + ) -> Table: + schema_obj = schemaobj.SchemaObjects(migration_context) + + return schema_obj.table(self.table_name, schema=self.schema) + + def to_diff_tuple(self) -> Tuple[Any, ...]: + return ("remove_table_comment", self.to_table()) + + +@Operations.register_operation("alter_column") +@BatchOperations.register_operation("alter_column", "batch_alter_column") +class AlterColumnOp(AlterTableOp): + """Represent an alter column operation.""" + + def __init__( + self, + table_name: str, + column_name: str, + *, + schema: Optional[str] = None, + existing_type: Optional[Any] = None, + existing_server_default: Any = False, + existing_nullable: Optional[bool] = None, + existing_comment: Optional[str] = None, + modify_nullable: Optional[bool] = None, + modify_comment: Optional[Union[str, Literal[False]]] = False, + modify_server_default: Any = False, + modify_name: Optional[str] = None, + modify_type: Optional[Any] = None, + **kw: Any, + ) -> None: + super().__init__(table_name, schema=schema) + self.column_name = column_name + self.existing_type = existing_type + self.existing_server_default = existing_server_default + self.existing_nullable = existing_nullable + self.existing_comment = existing_comment + self.modify_nullable = modify_nullable + self.modify_comment = modify_comment + self.modify_server_default = modify_server_default + self.modify_name = modify_name + self.modify_type = modify_type + self.kw = kw + + def to_diff_tuple(self) -> Any: + col_diff = [] + schema, tname, cname = self.schema, self.table_name, self.column_name + + if self.modify_type is not None: + col_diff.append( + ( + "modify_type", + schema, + tname, + cname, + { + "existing_nullable": self.existing_nullable, + "existing_server_default": ( + self.existing_server_default + ), + "existing_comment": self.existing_comment, + }, + self.existing_type, + self.modify_type, + ) + ) + + if self.modify_nullable is not None: + col_diff.append( + ( + "modify_nullable", + schema, + tname, + cname, + { + "existing_type": self.existing_type, + "existing_server_default": ( + self.existing_server_default + ), + "existing_comment": self.existing_comment, + }, + self.existing_nullable, + self.modify_nullable, + ) + ) + + if self.modify_server_default is not False: + col_diff.append( + ( + "modify_default", + schema, + tname, + cname, + { + "existing_nullable": self.existing_nullable, + "existing_type": self.existing_type, + "existing_comment": self.existing_comment, + }, + self.existing_server_default, + self.modify_server_default, + ) + ) + + if self.modify_comment is not False: + col_diff.append( + ( + "modify_comment", + schema, + tname, + cname, + { + "existing_nullable": self.existing_nullable, + "existing_type": self.existing_type, + "existing_server_default": ( + self.existing_server_default + ), + }, + self.existing_comment, + self.modify_comment, + ) + ) + + return col_diff + + def has_changes(self) -> bool: + hc1 = ( + self.modify_nullable is not None + or self.modify_server_default is not False + or self.modify_type is not None + or self.modify_comment is not False + ) + if hc1: + return True + 
for kw in self.kw: + if kw.startswith("modify_"): + return True + else: + return False + + def reverse(self) -> AlterColumnOp: + kw = self.kw.copy() + kw["existing_type"] = self.existing_type + kw["existing_nullable"] = self.existing_nullable + kw["existing_server_default"] = self.existing_server_default + kw["existing_comment"] = self.existing_comment + if self.modify_type is not None: + kw["modify_type"] = self.modify_type + if self.modify_nullable is not None: + kw["modify_nullable"] = self.modify_nullable + if self.modify_server_default is not False: + kw["modify_server_default"] = self.modify_server_default + if self.modify_comment is not False: + kw["modify_comment"] = self.modify_comment + + # TODO: make this a little simpler + all_keys = { + m.group(1) + for m in [re.match(r"^(?:existing_|modify_)(.+)$", k) for k in kw] + if m + } + + for k in all_keys: + if "modify_%s" % k in kw: + swap = kw["existing_%s" % k] + kw["existing_%s" % k] = kw["modify_%s" % k] + kw["modify_%s" % k] = swap + + return self.__class__( + self.table_name, self.column_name, schema=self.schema, **kw + ) + + @classmethod + def alter_column( + cls, + operations: Operations, + table_name: str, + column_name: str, + *, + nullable: Optional[bool] = None, + comment: Optional[Union[str, Literal[False]]] = False, + server_default: Union[ + str, bool, Identity, Computed, TextClause, None + ] = False, + new_column_name: Optional[str] = None, + type_: Optional[Union[TypeEngine[Any], Type[TypeEngine[Any]]]] = None, + existing_type: Optional[ + Union[TypeEngine[Any], Type[TypeEngine[Any]]] + ] = None, + existing_server_default: Union[ + str, bool, Identity, Computed, TextClause, None + ] = False, + existing_nullable: Optional[bool] = None, + existing_comment: Optional[str] = None, + schema: Optional[str] = None, + **kw: Any, + ) -> None: + r"""Issue an "alter column" instruction using the + current migration context. + + Generally, only that aspect of the column which + is being changed, i.e. name, type, nullability, + default, needs to be specified. Multiple changes + can also be specified at once and the backend should + "do the right thing", emitting each change either + separately or together as the backend allows. + + MySQL has special requirements here, since MySQL + cannot ALTER a column without a full specification. + When producing MySQL-compatible migration files, + it is recommended that the ``existing_type``, + ``existing_server_default``, and ``existing_nullable`` + parameters be present, if not being altered. + + Type changes which are against the SQLAlchemy + "schema" types :class:`~sqlalchemy.types.Boolean` + and :class:`~sqlalchemy.types.Enum` may also + add or drop constraints which accompany those + types on backends that don't support them natively. + The ``existing_type`` argument is + used in this case to identify and remove a previous + constraint that was bound to the type object. + + :param table_name: string name of the target table. + :param column_name: string name of the target column, + as it exists before the operation begins. + :param nullable: Optional; specify ``True`` or ``False`` + to alter the column's nullability. + :param server_default: Optional; specify a string + SQL expression, :func:`~sqlalchemy.sql.expression.text`, + or :class:`~sqlalchemy.schema.DefaultClause` to indicate + an alteration to the column's default value. + Set to ``None`` to have the default removed. + :param comment: optional string text of a new comment to add to the + column. 
+ :param new_column_name: Optional; specify a string name here to + indicate the new name within a column rename operation. + :param type\_: Optional; a :class:`~sqlalchemy.types.TypeEngine` + type object to specify a change to the column's type. + For SQLAlchemy types that also indicate a constraint (i.e. + :class:`~sqlalchemy.types.Boolean`, :class:`~sqlalchemy.types.Enum`), + the constraint is also generated. + :param autoincrement: set the ``AUTO_INCREMENT`` flag of the column; + currently understood by the MySQL dialect. + :param existing_type: Optional; a + :class:`~sqlalchemy.types.TypeEngine` + type object to specify the previous type. This + is required for all MySQL column alter operations that + don't otherwise specify a new type, as well as for + when nullability is being changed on a SQL Server + column. It is also used if the type is a so-called + SQLAlchemy "schema" type which may define a constraint (i.e. + :class:`~sqlalchemy.types.Boolean`, + :class:`~sqlalchemy.types.Enum`), + so that the constraint can be dropped. + :param existing_server_default: Optional; The existing + default value of the column. Required on MySQL if + an existing default is not being changed; else MySQL + removes the default. + :param existing_nullable: Optional; the existing nullability + of the column. Required on MySQL if the existing nullability + is not being changed; else MySQL sets this to NULL. + :param existing_autoincrement: Optional; the existing autoincrement + of the column. Used for MySQL's system of altering a column + that specifies ``AUTO_INCREMENT``. + :param existing_comment: string text of the existing comment on the + column to be maintained. Required on MySQL if the existing comment + on the column is not being changed. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param postgresql_using: String argument which will indicate a + SQL expression to render within the Postgresql-specific USING clause + within ALTER COLUMN. This string is taken directly as raw SQL which + must explicitly include any necessary quoting or escaping of tokens + within the expression. + + """ + + alt = cls( + table_name, + column_name, + schema=schema, + existing_type=existing_type, + existing_server_default=existing_server_default, + existing_nullable=existing_nullable, + existing_comment=existing_comment, + modify_name=new_column_name, + modify_type=type_, + modify_server_default=server_default, + modify_nullable=nullable, + modify_comment=comment, + **kw, + ) + + return operations.invoke(alt) + + @classmethod + def batch_alter_column( + cls, + operations: BatchOperations, + column_name: str, + *, + nullable: Optional[bool] = None, + comment: Optional[Union[str, Literal[False]]] = False, + server_default: Any = False, + new_column_name: Optional[str] = None, + type_: Optional[Union[TypeEngine[Any], Type[TypeEngine[Any]]]] = None, + existing_type: Optional[ + Union[TypeEngine[Any], Type[TypeEngine[Any]]] + ] = None, + existing_server_default: Optional[ + Union[str, bool, Identity, Computed] + ] = False, + existing_nullable: Optional[bool] = None, + existing_comment: Optional[str] = None, + insert_before: Optional[str] = None, + insert_after: Optional[str] = None, + **kw: Any, + ) -> None: + """Issue an "alter column" instruction using the current + batch migration context. 
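+
+        E.g., a rough sketch (the table and column names are assumed for
+        illustration)::
+
+            import sqlalchemy as sa
+
+            with op.batch_alter_table("account") as batch_op:
+                batch_op.alter_column(
+                    "name",
+                    existing_type=sa.String(length=50),
+                    nullable=False,
+                )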
+ + Parameters are the same as that of :meth:`.Operations.alter_column`, + as well as the following option(s): + + :param insert_before: String name of an existing column which this + column should be placed before, when creating the new table. + + :param insert_after: String name of an existing column which this + column should be placed after, when creating the new table. If + both :paramref:`.BatchOperations.alter_column.insert_before` + and :paramref:`.BatchOperations.alter_column.insert_after` are + omitted, the column is inserted after the last existing column + in the table. + + .. seealso:: + + :meth:`.Operations.alter_column` + + + """ + alt = cls( + operations.impl.table_name, + column_name, + schema=operations.impl.schema, + existing_type=existing_type, + existing_server_default=existing_server_default, + existing_nullable=existing_nullable, + existing_comment=existing_comment, + modify_name=new_column_name, + modify_type=type_, + modify_server_default=server_default, + modify_nullable=nullable, + modify_comment=comment, + insert_before=insert_before, + insert_after=insert_after, + **kw, + ) + + return operations.invoke(alt) + + +@Operations.register_operation("add_column") +@BatchOperations.register_operation("add_column", "batch_add_column") +class AddColumnOp(AlterTableOp): + """Represent an add column operation.""" + + def __init__( + self, + table_name: str, + column: Column[Any], + *, + schema: Optional[str] = None, + if_not_exists: Optional[bool] = None, + **kw: Any, + ) -> None: + super().__init__(table_name, schema=schema) + self.column = column + self.if_not_exists = if_not_exists + self.kw = kw + + def reverse(self) -> DropColumnOp: + op = DropColumnOp.from_column_and_tablename( + self.schema, self.table_name, self.column + ) + op.if_exists = self.if_not_exists + return op + + def to_diff_tuple( + self, + ) -> Tuple[str, Optional[str], str, Column[Any]]: + return ("add_column", self.schema, self.table_name, self.column) + + def to_column(self) -> Column[Any]: + return self.column + + @classmethod + def from_column(cls, col: Column[Any]) -> AddColumnOp: + return cls(col.table.name, col, schema=col.table.schema) + + @classmethod + def from_column_and_tablename( + cls, + schema: Optional[str], + tname: str, + col: Column[Any], + ) -> AddColumnOp: + return cls(tname, col, schema=schema) + + @classmethod + def add_column( + cls, + operations: Operations, + table_name: str, + column: Column[Any], + *, + schema: Optional[str] = None, + if_not_exists: Optional[bool] = None, + ) -> None: + """Issue an "add column" instruction using the current + migration context. + + e.g.:: + + from alembic import op + from sqlalchemy import Column, String + + op.add_column("organization", Column("name", String())) + + The :meth:`.Operations.add_column` method typically corresponds + to the SQL command "ALTER TABLE... ADD COLUMN". Within the scope + of this command, the column's name, datatype, nullability, + and optional server-generated defaults may be indicated. + + .. note:: + + With the exception of NOT NULL constraints or single-column FOREIGN + KEY constraints, other kinds of constraints such as PRIMARY KEY, + UNIQUE or CHECK constraints **cannot** be generated using this + method; for these constraints, refer to operations such as + :meth:`.Operations.create_primary_key` and + :meth:`.Operations.create_check_constraint`. 
In particular, the + following :class:`~sqlalchemy.schema.Column` parameters are + **ignored**: + + * :paramref:`~sqlalchemy.schema.Column.primary_key` - SQL databases + typically do not support an ALTER operation that can add + individual columns one at a time to an existing primary key + constraint, therefore it's less ambiguous to use the + :meth:`.Operations.create_primary_key` method, which assumes no + existing primary key constraint is present. + * :paramref:`~sqlalchemy.schema.Column.unique` - use the + :meth:`.Operations.create_unique_constraint` method + * :paramref:`~sqlalchemy.schema.Column.index` - use the + :meth:`.Operations.create_index` method + + + The provided :class:`~sqlalchemy.schema.Column` object may include a + :class:`~sqlalchemy.schema.ForeignKey` constraint directive, + referencing a remote table name. For this specific type of constraint, + Alembic will automatically emit a second ALTER statement in order to + add the single-column FOREIGN KEY constraint separately:: + + from alembic import op + from sqlalchemy import Column, INTEGER, ForeignKey + + op.add_column( + "organization", + Column("account_id", INTEGER, ForeignKey("accounts.id")), + ) + + The column argument passed to :meth:`.Operations.add_column` is a + :class:`~sqlalchemy.schema.Column` construct, used in the same way it's + used in SQLAlchemy. In particular, values or functions to be indicated + as producing the column's default value on the database side are + specified using the ``server_default`` parameter, and not ``default`` + which only specifies Python-side defaults:: + + from alembic import op + from sqlalchemy import Column, TIMESTAMP, func + + # specify "DEFAULT NOW" along with the column add + op.add_column( + "account", + Column("timestamp", TIMESTAMP, server_default=func.now()), + ) + + :param table_name: String name of the parent table. + :param column: a :class:`sqlalchemy.schema.Column` object + representing the new column. + :param schema: Optional schema name to operate within. To control + quoting of the schema outside of the default behavior, use + the SQLAlchemy construct + :class:`~sqlalchemy.sql.elements.quoted_name`. + :param if_not_exists: If True, adds IF NOT EXISTS operator + when creating the new column for compatible dialects + + .. versionadded:: 1.16.0 + + """ + + op = cls( + table_name, + column, + schema=schema, + if_not_exists=if_not_exists, + ) + return operations.invoke(op) + + @classmethod + def batch_add_column( + cls, + operations: BatchOperations, + column: Column[Any], + *, + insert_before: Optional[str] = None, + insert_after: Optional[str] = None, + if_not_exists: Optional[bool] = None, + ) -> None: + """Issue an "add column" instruction using the current + batch migration context. + + .. 
seealso::
+
+            :meth:`.Operations.add_column`
+
+        """
+
+        kw = {}
+        if insert_before:
+            kw["insert_before"] = insert_before
+        if insert_after:
+            kw["insert_after"] = insert_after
+
+        op = cls(
+            operations.impl.table_name,
+            column,
+            schema=operations.impl.schema,
+            if_not_exists=if_not_exists,
+            **kw,
+        )
+        return operations.invoke(op)
+
+
+@Operations.register_operation("drop_column")
+@BatchOperations.register_operation("drop_column", "batch_drop_column")
+class DropColumnOp(AlterTableOp):
+    """Represent a drop column operation."""
+
+    def __init__(
+        self,
+        table_name: str,
+        column_name: str,
+        *,
+        schema: Optional[str] = None,
+        if_exists: Optional[bool] = None,
+        _reverse: Optional[AddColumnOp] = None,
+        **kw: Any,
+    ) -> None:
+        super().__init__(table_name, schema=schema)
+        self.column_name = column_name
+        self.kw = kw
+        self.if_exists = if_exists
+        self._reverse = _reverse
+
+    def to_diff_tuple(
+        self,
+    ) -> Tuple[str, Optional[str], str, Column[Any]]:
+        return (
+            "remove_column",
+            self.schema,
+            self.table_name,
+            self.to_column(),
+        )
+
+    def reverse(self) -> AddColumnOp:
+        if self._reverse is None:
+            raise ValueError(
+                "operation is not reversible; "
+                "original column is not present"
+            )
+
+        op = AddColumnOp.from_column_and_tablename(
+            self.schema, self.table_name, self._reverse.column
+        )
+        op.if_not_exists = self.if_exists
+        return op
+
+    @classmethod
+    def from_column_and_tablename(
+        cls,
+        schema: Optional[str],
+        tname: str,
+        col: Column[Any],
+    ) -> DropColumnOp:
+        return cls(
+            tname,
+            col.name,
+            schema=schema,
+            _reverse=AddColumnOp.from_column_and_tablename(schema, tname, col),
+        )
+
+    def to_column(
+        self, migration_context: Optional[MigrationContext] = None
+    ) -> Column[Any]:
+        if self._reverse is not None:
+            return self._reverse.column
+        schema_obj = schemaobj.SchemaObjects(migration_context)
+        return schema_obj.column(self.column_name, NULLTYPE)
+
+    @classmethod
+    def drop_column(
+        cls,
+        operations: Operations,
+        table_name: str,
+        column_name: str,
+        *,
+        schema: Optional[str] = None,
+        **kw: Any,
+    ) -> None:
+        """Issue a "drop column" instruction using the current
+        migration context.
+
+        e.g.::
+
+            drop_column("organization", "account_id")
+
+        :param table_name: name of table
+        :param column_name: name of column
+        :param schema: Optional schema name to operate within.  To control
+         quoting of the schema outside of the default behavior, use
+         the SQLAlchemy construct
+         :class:`~sqlalchemy.sql.elements.quoted_name`.
+        :param if_exists: If True, adds IF EXISTS operator when
+         dropping the column, for compatible dialects
+
+         .. versionadded:: 1.16.0
+
+        :param mssql_drop_check: Optional boolean.  When ``True``, on
+         Microsoft SQL Server only, first
+         drop the CHECK constraint on the column using a
+         SQL-script-compatible
+         block that selects into a @variable from sys.check_constraints,
+         then exec's a separate DROP CONSTRAINT for that constraint.
+        :param mssql_drop_default: Optional boolean.  When ``True``, on
+         Microsoft SQL Server only, first
+         drop the DEFAULT constraint on the column using a
+         SQL-script-compatible
+         block that selects into a @variable from sys.default_constraints,
+         then exec's a separate DROP CONSTRAINT for that default.
+        :param mssql_drop_foreign_key: Optional boolean.
+         When ``True``, on
+         Microsoft SQL Server only, first
+         drop a single FOREIGN KEY constraint on the column using a
+         SQL-script-compatible
+         block that selects into a @variable from
+         sys.foreign_keys/sys.foreign_key_columns,
+         then exec's a separate DROP CONSTRAINT for that foreign key.  At
+         the moment, this only works if the column has exactly one FK
+         constraint which refers to it.
+        """
+
+        op = cls(table_name, column_name, schema=schema, **kw)
+        return operations.invoke(op)
+
+    @classmethod
+    def batch_drop_column(
+        cls, operations: BatchOperations, column_name: str, **kw: Any
+    ) -> None:
+        """Issue a "drop column" instruction using the current
+        batch migration context.
+
+        .. seealso::
+
+            :meth:`.Operations.drop_column`
+
+        """
+        op = cls(
+            operations.impl.table_name,
+            column_name,
+            schema=operations.impl.schema,
+            **kw,
+        )
+        return operations.invoke(op)
+
+
+@Operations.register_operation("bulk_insert")
+class BulkInsertOp(MigrateOperation):
+    """Represent a bulk insert operation."""
+
+    def __init__(
+        self,
+        table: Union[Table, TableClause],
+        rows: List[Dict[str, Any]],
+        *,
+        multiinsert: bool = True,
+    ) -> None:
+        self.table = table
+        self.rows = rows
+        self.multiinsert = multiinsert
+
+    @classmethod
+    def bulk_insert(
+        cls,
+        operations: Operations,
+        table: Union[Table, TableClause],
+        rows: List[Dict[str, Any]],
+        *,
+        multiinsert: bool = True,
+    ) -> None:
+        """Issue a "bulk insert" operation using the current
+        migration context.
+
+        This provides a means of representing an INSERT of multiple rows
+        which works equally well in the context of executing on a live
+        connection as well as that of generating a SQL script.  In the
+        case of a SQL script, the values are rendered inline into the
+        statement.
+
+        e.g.::
+
+            from alembic import op
+            from datetime import date
+            from sqlalchemy.sql import table, column
+            from sqlalchemy import String, Integer, Date
+
+            # Create an ad-hoc table to use for the insert statement.
+            accounts_table = table(
+                "account",
+                column("id", Integer),
+                column("name", String),
+                column("create_date", Date),
+            )
+
+            op.bulk_insert(
+                accounts_table,
+                [
+                    {
+                        "id": 1,
+                        "name": "John Smith",
+                        "create_date": date(2010, 10, 5),
+                    },
+                    {
+                        "id": 2,
+                        "name": "Ed Williams",
+                        "create_date": date(2007, 5, 27),
+                    },
+                    {
+                        "id": 3,
+                        "name": "Wendy Jones",
+                        "create_date": date(2008, 8, 15),
+                    },
+                ],
+            )
+
+        When using --sql mode, some datatypes may not render inline
+        automatically, such as dates and other special types.  When this
+        issue is present, :meth:`.Operations.inline_literal` may be used::
+
+            op.bulk_insert(
+                accounts_table,
+                [
+                    {
+                        "id": 1,
+                        "name": "John Smith",
+                        "create_date": op.inline_literal("2010-10-05"),
+                    },
+                    {
+                        "id": 2,
+                        "name": "Ed Williams",
+                        "create_date": op.inline_literal("2007-05-27"),
+                    },
+                    {
+                        "id": 3,
+                        "name": "Wendy Jones",
+                        "create_date": op.inline_literal("2008-08-15"),
+                    },
+                ],
+                multiinsert=False,
+            )
+
+        When using :meth:`.Operations.inline_literal` in conjunction with
+        :meth:`.Operations.bulk_insert`, in order for the statement to work
+        in "online" (e.g. non --sql) mode, the
+        :paramref:`~.Operations.bulk_insert.multiinsert`
+        flag should be set to ``False``, which will have the effect of
+        individual INSERT statements being emitted to the database, each
+        with a distinct VALUES clause, so that the "inline" values can
+        still be rendered, rather than attempting to pass the values
+        as bound parameters.
+
+        :param table: a table object which represents the target of the INSERT.
+ + :param rows: a list of dictionaries indicating rows. + + :param multiinsert: when at its default of True and --sql mode is not + enabled, the INSERT statement will be executed using + "executemany()" style, where all elements in the list of + dictionaries are passed as bound parameters in a single + list. Setting this to False results in individual INSERT + statements being emitted per parameter set, and is needed + in those cases where non-literal values are present in the + parameter sets. + + """ + + op = cls(table, rows, multiinsert=multiinsert) + operations.invoke(op) + + +@Operations.register_operation("execute") +@BatchOperations.register_operation("execute", "batch_execute") +class ExecuteSQLOp(MigrateOperation): + """Represent an execute SQL operation.""" + + def __init__( + self, + sqltext: Union[Executable, str], + *, + execution_options: Optional[dict[str, Any]] = None, + ) -> None: + self.sqltext = sqltext + self.execution_options = execution_options + + @classmethod + def execute( + cls, + operations: Operations, + sqltext: Union[Executable, str], + *, + execution_options: Optional[dict[str, Any]] = None, + ) -> None: + r"""Execute the given SQL using the current migration context. + + The given SQL can be a plain string, e.g.:: + + op.execute("INSERT INTO table (foo) VALUES ('some value')") + + Or it can be any kind of Core SQL Expression construct, such as + below where we use an update construct:: + + from sqlalchemy.sql import table, column + from sqlalchemy import String + from alembic import op + + account = table("account", column("name", String)) + op.execute( + account.update() + .where(account.c.name == op.inline_literal("account 1")) + .values({"name": op.inline_literal("account 2")}) + ) + + Above, we made use of the SQLAlchemy + :func:`sqlalchemy.sql.expression.table` and + :func:`sqlalchemy.sql.expression.column` constructs to make a brief, + ad-hoc table construct just for our UPDATE statement. A full + :class:`~sqlalchemy.schema.Table` construct of course works perfectly + fine as well, though note it's a recommended practice to at least + ensure the definition of a table is self-contained within the migration + script, rather than imported from a module that may break compatibility + with older migrations. + + In a SQL script context, the statement is emitted directly to the + output stream. There is *no* return result, however, as this + function is oriented towards generating a change script + that can run in "offline" mode. Additionally, parameterized + statements are discouraged here, as they *will not work* in offline + mode. Above, we use :meth:`.inline_literal` where parameters are + to be used. + + For full interaction with a connected database where parameters can + also be used normally, use the "bind" available from the context:: + + from alembic import op + + connection = op.get_bind() + + connection.execute( + account.update() + .where(account.c.name == "account 1") + .values({"name": "account 2"}) + ) + + Additionally, when passing the statement as a plain string, it is first + coerced into a :func:`sqlalchemy.sql.expression.text` construct + before being passed along. In the less likely case that the + literal SQL string contains a colon, it must be escaped with a + backslash, as:: + + op.execute(r"INSERT INTO table (foo) VALUES ('\:colon_value')") + + + :param sqltext: Any legal SQLAlchemy expression, including: + + * a string + * a :func:`sqlalchemy.sql.expression.text` construct. + * a :func:`sqlalchemy.sql.expression.insert` construct. 
+        * a :func:`sqlalchemy.sql.expression.update` construct.
+        * a :func:`sqlalchemy.sql.expression.delete` construct.
+        * Any "executable" described in SQLAlchemy Core documentation,
+          noting that no result set is returned.
+
+        .. note:: when passing a plain string, the statement is coerced into
+           a :func:`sqlalchemy.sql.expression.text` construct.  This construct
+           considers symbols with colons, e.g. ``:foo`` to be bound parameters.
+           To avoid this, ensure that colon symbols are escaped, e.g.
+           ``\:foo``.
+
+        :param execution_options: Optional dictionary of
+         execution options, will be passed to
+         :meth:`sqlalchemy.engine.Connection.execution_options`.
+        """
+        op = cls(sqltext, execution_options=execution_options)
+        return operations.invoke(op)
+
+    @classmethod
+    def batch_execute(
+        cls,
+        operations: Operations,
+        sqltext: Union[Executable, str],
+        *,
+        execution_options: Optional[dict[str, Any]] = None,
+    ) -> None:
+        """Execute the given SQL using the current migration context.
+
+        .. seealso::
+
+            :meth:`.Operations.execute`
+
+        """
+        return cls.execute(
+            operations, sqltext, execution_options=execution_options
+        )
+
+    def to_diff_tuple(self) -> Tuple[str, Union[Executable, str]]:
+        return ("execute", self.sqltext)
+
+
+class OpContainer(MigrateOperation):
+    """Represent a sequence of operations."""
+
+    def __init__(self, ops: Sequence[MigrateOperation] = ()) -> None:
+        self.ops = list(ops)
+
+    def is_empty(self) -> bool:
+        return not self.ops
+
+    def as_diffs(self) -> Any:
+        return list(OpContainer._ops_as_diffs(self))
+
+    @classmethod
+    def _ops_as_diffs(
+        cls, migrations: OpContainer
+    ) -> Iterator[Tuple[Any, ...]]:
+        for op in migrations.ops:
+            if hasattr(op, "ops"):
+                yield from cls._ops_as_diffs(cast("OpContainer", op))
+            else:
+                yield op.to_diff_tuple()
+
+
+class ModifyTableOps(OpContainer):
+    """Contains a sequence of operations that all apply to a single Table."""
+
+    def __init__(
+        self,
+        table_name: str,
+        ops: Sequence[MigrateOperation],
+        *,
+        schema: Optional[str] = None,
+    ) -> None:
+        super().__init__(ops)
+        self.table_name = table_name
+        self.schema = schema
+
+    def reverse(self) -> ModifyTableOps:
+        return ModifyTableOps(
+            self.table_name,
+            ops=list(reversed([op.reverse() for op in self.ops])),
+            schema=self.schema,
+        )
+
+
+class UpgradeOps(OpContainer):
+    """Contains a sequence of operations that would apply to the
+    'upgrade' stream of a script.
+
+    .. seealso::
+
+        :ref:`customizing_revision`
+
+    """
+
+    def __init__(
+        self,
+        ops: Sequence[MigrateOperation] = (),
+        upgrade_token: str = "upgrades",
+    ) -> None:
+        super().__init__(ops=ops)
+        self.upgrade_token = upgrade_token
+
+    def reverse_into(self, downgrade_ops: DowngradeOps) -> DowngradeOps:
+        downgrade_ops.ops[:] = list(
+            reversed([op.reverse() for op in self.ops])
+        )
+        return downgrade_ops
+
+    def reverse(self) -> DowngradeOps:
+        return self.reverse_into(DowngradeOps(ops=[]))
+
+
+class DowngradeOps(OpContainer):
+    """Contains a sequence of operations that would apply to the
+    'downgrade' stream of a script.
+
+    .. seealso::
+
+        :ref:`customizing_revision`
+
+    """
+
+    def __init__(
+        self,
+        ops: Sequence[MigrateOperation] = (),
+        downgrade_token: str = "downgrades",
+    ) -> None:
+        super().__init__(ops=ops)
+        self.downgrade_token = downgrade_token
+
+    def reverse(self) -> UpgradeOps:
+        return UpgradeOps(
+            ops=list(reversed([op.reverse() for op in self.ops]))
+        )
+
+
+class MigrationScript(MigrateOperation):
+    """Represents a migration script.
+
+    E.g.
when autogenerate encounters this object, this corresponds to the + production of an actual script file. + + A normal :class:`.MigrationScript` object would contain a single + :class:`.UpgradeOps` and a single :class:`.DowngradeOps` directive. + These are accessible via the ``.upgrade_ops`` and ``.downgrade_ops`` + attributes. + + In the case of an autogenerate operation that runs multiple times, + such as the multiple database example in the "multidb" template, + the ``.upgrade_ops`` and ``.downgrade_ops`` attributes are disabled, + and instead these objects should be accessed via the ``.upgrade_ops_list`` + and ``.downgrade_ops_list`` list-based attributes. These latter + attributes are always available at the very least as single-element lists. + + .. seealso:: + + :ref:`customizing_revision` + + """ + + _needs_render: Optional[bool] + _upgrade_ops: List[UpgradeOps] + _downgrade_ops: List[DowngradeOps] + + def __init__( + self, + rev_id: Optional[str], + upgrade_ops: UpgradeOps, + downgrade_ops: DowngradeOps, + *, + message: Optional[str] = None, + imports: Set[str] = set(), + head: Optional[str] = None, + splice: Optional[bool] = None, + branch_label: Optional[_RevIdType] = None, + version_path: Union[str, os.PathLike[str], None] = None, + depends_on: Optional[_RevIdType] = None, + ) -> None: + self.rev_id = rev_id + self.message = message + self.imports = imports + self.head = head + self.splice = splice + self.branch_label = branch_label + self.version_path = ( + pathlib.Path(version_path).as_posix() if version_path else None + ) + self.depends_on = depends_on + self.upgrade_ops = upgrade_ops + self.downgrade_ops = downgrade_ops + + @property + def upgrade_ops(self) -> Optional[UpgradeOps]: + """An instance of :class:`.UpgradeOps`. + + .. seealso:: + + :attr:`.MigrationScript.upgrade_ops_list` + """ + if len(self._upgrade_ops) > 1: + raise ValueError( + "This MigrationScript instance has a multiple-entry " + "list for UpgradeOps; please use the " + "upgrade_ops_list attribute." + ) + elif not self._upgrade_ops: + return None + else: + return self._upgrade_ops[0] + + @upgrade_ops.setter + def upgrade_ops( + self, upgrade_ops: Union[UpgradeOps, List[UpgradeOps]] + ) -> None: + self._upgrade_ops = util.to_list(upgrade_ops) + for elem in self._upgrade_ops: + assert isinstance(elem, UpgradeOps) + + @property + def downgrade_ops(self) -> Optional[DowngradeOps]: + """An instance of :class:`.DowngradeOps`. + + .. seealso:: + + :attr:`.MigrationScript.downgrade_ops_list` + """ + if len(self._downgrade_ops) > 1: + raise ValueError( + "This MigrationScript instance has a multiple-entry " + "list for DowngradeOps; please use the " + "downgrade_ops_list attribute." + ) + elif not self._downgrade_ops: + return None + else: + return self._downgrade_ops[0] + + @downgrade_ops.setter + def downgrade_ops( + self, downgrade_ops: Union[DowngradeOps, List[DowngradeOps]] + ) -> None: + self._downgrade_ops = util.to_list(downgrade_ops) + for elem in self._downgrade_ops: + assert isinstance(elem, DowngradeOps) + + @property + def upgrade_ops_list(self) -> List[UpgradeOps]: + """A list of :class:`.UpgradeOps` instances. + + This is used in place of the :attr:`.MigrationScript.upgrade_ops` + attribute when dealing with a revision operation that does + multiple autogenerate passes. + + """ + return self._upgrade_ops + + @property + def downgrade_ops_list(self) -> List[DowngradeOps]: + """A list of :class:`.DowngradeOps` instances. 
+ + This is used in place of the :attr:`.MigrationScript.downgrade_ops` + attribute when dealing with a revision operation that does + multiple autogenerate passes. + + """ + return self._downgrade_ops diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/schemaobj.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/schemaobj.py new file mode 100644 index 000000000..59c1002f1 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/schemaobj.py @@ -0,0 +1,290 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +from typing import Any +from typing import Dict +from typing import List +from typing import Optional +from typing import Sequence +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy import schema as sa_schema +from sqlalchemy.sql.schema import Column +from sqlalchemy.sql.schema import Constraint +from sqlalchemy.sql.schema import Index +from sqlalchemy.types import Integer +from sqlalchemy.types import NULLTYPE + +from .. import util +from ..util import sqla_compat + +if TYPE_CHECKING: + from sqlalchemy.sql.elements import ColumnElement + from sqlalchemy.sql.elements import TextClause + from sqlalchemy.sql.schema import CheckConstraint + from sqlalchemy.sql.schema import ForeignKey + from sqlalchemy.sql.schema import ForeignKeyConstraint + from sqlalchemy.sql.schema import MetaData + from sqlalchemy.sql.schema import PrimaryKeyConstraint + from sqlalchemy.sql.schema import Table + from sqlalchemy.sql.schema import UniqueConstraint + from sqlalchemy.sql.type_api import TypeEngine + + from ..runtime.migration import MigrationContext + + +class SchemaObjects: + def __init__( + self, migration_context: Optional[MigrationContext] = None + ) -> None: + self.migration_context = migration_context + + def primary_key_constraint( + self, + name: Optional[sqla_compat._ConstraintNameDefined], + table_name: str, + cols: Sequence[str], + schema: Optional[str] = None, + **dialect_kw, + ) -> PrimaryKeyConstraint: + m = self.metadata() + columns = [sa_schema.Column(n, NULLTYPE) for n in cols] + t = sa_schema.Table(table_name, m, *columns, schema=schema) + # SQLAlchemy primary key constraint name arg is wrongly typed on + # the SQLAlchemy side through 2.0.5 at least + p = sa_schema.PrimaryKeyConstraint( + *[t.c[n] for n in cols], name=name, **dialect_kw # type: ignore + ) + return p + + def foreign_key_constraint( + self, + name: Optional[sqla_compat._ConstraintNameDefined], + source: str, + referent: str, + local_cols: List[str], + remote_cols: List[str], + onupdate: Optional[str] = None, + ondelete: Optional[str] = None, + deferrable: Optional[bool] = None, + source_schema: Optional[str] = None, + referent_schema: Optional[str] = None, + initially: Optional[str] = None, + match: Optional[str] = None, + **dialect_kw, + ) -> ForeignKeyConstraint: + m = self.metadata() + if source == referent and source_schema == referent_schema: + t1_cols = local_cols + remote_cols + else: + t1_cols = local_cols + sa_schema.Table( + referent, + m, + *[sa_schema.Column(n, NULLTYPE) for n in remote_cols], + schema=referent_schema, + ) + + t1 = sa_schema.Table( + source, + m, + *[ + sa_schema.Column(n, NULLTYPE) + for n in util.unique_list(t1_cols) + ], + schema=source_schema, + ) + + tname = ( + "%s.%s" % (referent_schema, referent) + if referent_schema + else referent + ) 
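+        # the MATCH option rides along as a constraint kwarg below; when it
+        # is None, no MATCH clause is rendered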
+ + dialect_kw["match"] = match + + f = sa_schema.ForeignKeyConstraint( + local_cols, + ["%s.%s" % (tname, n) for n in remote_cols], + name=name, + onupdate=onupdate, + ondelete=ondelete, + deferrable=deferrable, + initially=initially, + **dialect_kw, + ) + t1.append_constraint(f) + + return f + + def unique_constraint( + self, + name: Optional[sqla_compat._ConstraintNameDefined], + source: str, + local_cols: Sequence[str], + schema: Optional[str] = None, + **kw, + ) -> UniqueConstraint: + t = sa_schema.Table( + source, + self.metadata(), + *[sa_schema.Column(n, NULLTYPE) for n in local_cols], + schema=schema, + ) + kw["name"] = name + uq = sa_schema.UniqueConstraint(*[t.c[n] for n in local_cols], **kw) + # TODO: need event tests to ensure the event + # is fired off here + t.append_constraint(uq) + return uq + + def check_constraint( + self, + name: Optional[sqla_compat._ConstraintNameDefined], + source: str, + condition: Union[str, TextClause, ColumnElement[Any]], + schema: Optional[str] = None, + **kw, + ) -> Union[CheckConstraint]: + t = sa_schema.Table( + source, + self.metadata(), + sa_schema.Column("x", Integer), + schema=schema, + ) + ck = sa_schema.CheckConstraint(condition, name=name, **kw) + t.append_constraint(ck) + return ck + + def generic_constraint( + self, + name: Optional[sqla_compat._ConstraintNameDefined], + table_name: str, + type_: Optional[str], + schema: Optional[str] = None, + **kw, + ) -> Any: + t = self.table(table_name, schema=schema) + types: Dict[Optional[str], Any] = { + "foreignkey": lambda name: sa_schema.ForeignKeyConstraint( + [], [], name=name + ), + "primary": sa_schema.PrimaryKeyConstraint, + "unique": sa_schema.UniqueConstraint, + "check": lambda name: sa_schema.CheckConstraint("", name=name), + None: sa_schema.Constraint, + } + try: + const = types[type_] + except KeyError as ke: + raise TypeError( + "'type' can be one of %s" + % ", ".join(sorted(repr(x) for x in types)) + ) from ke + else: + const = const(name=name) + t.append_constraint(const) + return const + + def metadata(self) -> MetaData: + kw = {} + if ( + self.migration_context is not None + and "target_metadata" in self.migration_context.opts + ): + mt = self.migration_context.opts["target_metadata"] + if hasattr(mt, "naming_convention"): + kw["naming_convention"] = mt.naming_convention + return sa_schema.MetaData(**kw) + + def table(self, name: str, *columns, **kw) -> Table: + m = self.metadata() + + cols = [ + sqla_compat._copy(c) if c.table is not None else c + for c in columns + if isinstance(c, Column) + ] + # these flags have already added their UniqueConstraint / + # Index objects to the table, so flip them off here. + # SQLAlchemy tometadata() avoids this instead by preserving the + # flags and skipping the constraints that have _type_bound on them, + # but for a migration we'd rather list out the constraints + # explicitly. 
+ _constraints_included = kw.pop("_constraints_included", False) + if _constraints_included: + for c in cols: + c.unique = c.index = False + + t = sa_schema.Table(name, m, *cols, **kw) + + constraints = [ + ( + sqla_compat._copy(elem, target_table=t) + if getattr(elem, "parent", None) is not t + and getattr(elem, "parent", None) is not None + else elem + ) + for elem in columns + if isinstance(elem, (Constraint, Index)) + ] + + for const in constraints: + t.append_constraint(const) + + for f in t.foreign_keys: + self._ensure_table_for_fk(m, f) + return t + + def column(self, name: str, type_: TypeEngine, **kw) -> Column: + return sa_schema.Column(name, type_, **kw) + + def index( + self, + name: Optional[str], + tablename: Optional[str], + columns: Sequence[Union[str, TextClause, ColumnElement[Any]]], + schema: Optional[str] = None, + **kw, + ) -> Index: + t = sa_schema.Table( + tablename or "no_table", + self.metadata(), + schema=schema, + ) + kw["_table"] = t + idx = sa_schema.Index( + name, + *[util.sqla_compat._textual_index_column(t, n) for n in columns], + **kw, + ) + return idx + + def _parse_table_key(self, table_key: str) -> Tuple[Optional[str], str]: + if "." in table_key: + tokens = table_key.split(".") + sname: Optional[str] = ".".join(tokens[0:-1]) + tname = tokens[-1] + else: + tname = table_key + sname = None + return (sname, tname) + + def _ensure_table_for_fk(self, metadata: MetaData, fk: ForeignKey) -> None: + """create a placeholder Table object for the referent of a + ForeignKey. + + """ + if isinstance(fk._colspec, str): + table_key, cname = fk._colspec.rsplit(".", 1) + sname, tname = self._parse_table_key(table_key) + if table_key not in metadata.tables: + rel_t = sa_schema.Table(tname, metadata, schema=sname) + else: + rel_t = metadata.tables[table_key] + if cname not in rel_t.c: + rel_t.append_column(sa_schema.Column(cname, NULLTYPE)) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/toimpl.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/toimpl.py new file mode 100644 index 000000000..c18ec7901 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/operations/toimpl.py @@ -0,0 +1,242 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from typing import TYPE_CHECKING + +from sqlalchemy import schema as sa_schema + +from . 
import ops +from .base import Operations +from ..util.sqla_compat import _copy +from ..util.sqla_compat import sqla_2 + +if TYPE_CHECKING: + from sqlalchemy.sql.schema import Table + + +@Operations.implementation_for(ops.AlterColumnOp) +def alter_column( + operations: "Operations", operation: "ops.AlterColumnOp" +) -> None: + compiler = operations.impl.dialect.statement_compiler( + operations.impl.dialect, None + ) + + existing_type = operation.existing_type + existing_nullable = operation.existing_nullable + existing_server_default = operation.existing_server_default + type_ = operation.modify_type + column_name = operation.column_name + table_name = operation.table_name + schema = operation.schema + server_default = operation.modify_server_default + new_column_name = operation.modify_name + nullable = operation.modify_nullable + comment = operation.modify_comment + existing_comment = operation.existing_comment + + def _count_constraint(constraint): + return not isinstance(constraint, sa_schema.PrimaryKeyConstraint) and ( + not constraint._create_rule or constraint._create_rule(compiler) + ) + + if existing_type and type_: + t = operations.schema_obj.table( + table_name, + sa_schema.Column(column_name, existing_type), + schema=schema, + ) + for constraint in t.constraints: + if _count_constraint(constraint): + operations.impl.drop_constraint(constraint) + + operations.impl.alter_column( + table_name, + column_name, + nullable=nullable, + server_default=server_default, + name=new_column_name, + type_=type_, + schema=schema, + existing_type=existing_type, + existing_server_default=existing_server_default, + existing_nullable=existing_nullable, + comment=comment, + existing_comment=existing_comment, + **operation.kw, + ) + + if type_: + t = operations.schema_obj.table( + table_name, + operations.schema_obj.column(column_name, type_), + schema=schema, + ) + for constraint in t.constraints: + if _count_constraint(constraint): + operations.impl.add_constraint(constraint) + + +@Operations.implementation_for(ops.DropTableOp) +def drop_table(operations: "Operations", operation: "ops.DropTableOp") -> None: + kw = {} + if operation.if_exists is not None: + kw["if_exists"] = operation.if_exists + operations.impl.drop_table( + operation.to_table(operations.migration_context), **kw + ) + + +@Operations.implementation_for(ops.DropColumnOp) +def drop_column( + operations: "Operations", operation: "ops.DropColumnOp" +) -> None: + column = operation.to_column(operations.migration_context) + operations.impl.drop_column( + operation.table_name, + column, + schema=operation.schema, + if_exists=operation.if_exists, + **operation.kw, + ) + + +@Operations.implementation_for(ops.CreateIndexOp) +def create_index( + operations: "Operations", operation: "ops.CreateIndexOp" +) -> None: + idx = operation.to_index(operations.migration_context) + kw = {} + if operation.if_not_exists is not None: + kw["if_not_exists"] = operation.if_not_exists + operations.impl.create_index(idx, **kw) + + +@Operations.implementation_for(ops.DropIndexOp) +def drop_index(operations: "Operations", operation: "ops.DropIndexOp") -> None: + kw = {} + if operation.if_exists is not None: + kw["if_exists"] = operation.if_exists + + operations.impl.drop_index( + operation.to_index(operations.migration_context), + **kw, + ) + + +@Operations.implementation_for(ops.CreateTableOp) +def create_table( + operations: "Operations", operation: "ops.CreateTableOp" +) -> "Table": + kw = {} + if operation.if_not_exists is not None: + kw["if_not_exists"] = 
operation.if_not_exists + table = operation.to_table(operations.migration_context) + operations.impl.create_table(table, **kw) + return table + + +@Operations.implementation_for(ops.RenameTableOp) +def rename_table( + operations: "Operations", operation: "ops.RenameTableOp" +) -> None: + operations.impl.rename_table( + operation.table_name, operation.new_table_name, schema=operation.schema + ) + + +@Operations.implementation_for(ops.CreateTableCommentOp) +def create_table_comment( + operations: "Operations", operation: "ops.CreateTableCommentOp" +) -> None: + table = operation.to_table(operations.migration_context) + operations.impl.create_table_comment(table) + + +@Operations.implementation_for(ops.DropTableCommentOp) +def drop_table_comment( + operations: "Operations", operation: "ops.DropTableCommentOp" +) -> None: + table = operation.to_table(operations.migration_context) + operations.impl.drop_table_comment(table) + + +@Operations.implementation_for(ops.AddColumnOp) +def add_column(operations: "Operations", operation: "ops.AddColumnOp") -> None: + table_name = operation.table_name + column = operation.column + schema = operation.schema + kw = operation.kw + + if column.table is not None: + column = _copy(column) + + t = operations.schema_obj.table(table_name, column, schema=schema) + operations.impl.add_column( + table_name, + column, + schema=schema, + if_not_exists=operation.if_not_exists, + **kw, + ) + + for constraint in t.constraints: + if not isinstance(constraint, sa_schema.PrimaryKeyConstraint): + operations.impl.add_constraint(constraint) + for index in t.indexes: + operations.impl.create_index(index) + + with_comment = ( + operations.impl.dialect.supports_comments + and not operations.impl.dialect.inline_comments + ) + comment = column.comment + if comment and with_comment: + operations.impl.create_column_comment(column) + + +@Operations.implementation_for(ops.AddConstraintOp) +def create_constraint( + operations: "Operations", operation: "ops.AddConstraintOp" +) -> None: + operations.impl.add_constraint( + operation.to_constraint(operations.migration_context) + ) + + +@Operations.implementation_for(ops.DropConstraintOp) +def drop_constraint( + operations: "Operations", operation: "ops.DropConstraintOp" +) -> None: + kw = {} + if operation.if_exists is not None: + if not sqla_2: + raise NotImplementedError("SQLAlchemy 2.0 required") + kw["if_exists"] = operation.if_exists + operations.impl.drop_constraint( + operations.schema_obj.generic_constraint( + operation.constraint_name, + operation.table_name, + operation.constraint_type, + schema=operation.schema, + ), + **kw, + ) + + +@Operations.implementation_for(ops.BulkInsertOp) +def bulk_insert( + operations: "Operations", operation: "ops.BulkInsertOp" +) -> None: + operations.impl.bulk_insert( # type: ignore[union-attr] + operation.table, operation.rows, multiinsert=operation.multiinsert + ) + + +@Operations.implementation_for(ops.ExecuteSQLOp) +def execute_sql( + operations: "Operations", operation: "ops.ExecuteSQLOp" +) -> None: + operations.migration_context.impl.execute( + operation.sqltext, execution_options=operation.execution_options + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/py.typed b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/py.typed new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/runtime/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/runtime/__init__.py new file mode 
100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/runtime/environment.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/runtime/environment.py new file mode 100644 index 000000000..80ca2b6ca --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/runtime/environment.py @@ -0,0 +1,1051 @@ +from __future__ import annotations + +from typing import Any +from typing import Callable +from typing import Collection +from typing import Dict +from typing import List +from typing import Mapping +from typing import MutableMapping +from typing import Optional +from typing import overload +from typing import Sequence +from typing import TextIO +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy.sql.schema import Column +from sqlalchemy.sql.schema import FetchedValue +from typing_extensions import ContextManager +from typing_extensions import Literal + +from .migration import _ProxyTransaction +from .migration import MigrationContext +from .. import util +from ..operations import Operations +from ..script.revision import _GetRevArg + +if TYPE_CHECKING: + from sqlalchemy.engine import URL + from sqlalchemy.engine.base import Connection + from sqlalchemy.sql import Executable + from sqlalchemy.sql.schema import MetaData + from sqlalchemy.sql.schema import SchemaItem + from sqlalchemy.sql.type_api import TypeEngine + + from .migration import MigrationInfo + from ..autogenerate.api import AutogenContext + from ..config import Config + from ..ddl import DefaultImpl + from ..operations.ops import MigrationScript + from ..script.base import ScriptDirectory + +_RevNumber = Optional[Union[str, Tuple[str, ...]]] + +ProcessRevisionDirectiveFn = Callable[ + [MigrationContext, _GetRevArg, List["MigrationScript"]], None +] + +RenderItemFn = Callable[ + [str, Any, "AutogenContext"], Union[str, Literal[False]] +] + +NameFilterType = Literal[ + "schema", + "table", + "column", + "index", + "unique_constraint", + "foreign_key_constraint", +] +NameFilterParentNames = MutableMapping[ + Literal["schema_name", "table_name", "schema_qualified_table_name"], + Optional[str], +] +IncludeNameFn = Callable[ + [Optional[str], NameFilterType, NameFilterParentNames], bool +] + +IncludeObjectFn = Callable[ + [ + "SchemaItem", + Optional[str], + NameFilterType, + bool, + Optional["SchemaItem"], + ], + bool, +] + +OnVersionApplyFn = Callable[ + [MigrationContext, "MigrationInfo", Collection[Any], Mapping[str, Any]], + None, +] + +CompareServerDefault = Callable[ + [ + MigrationContext, + "Column[Any]", + "Column[Any]", + Optional[str], + Optional[FetchedValue], + Optional[str], + ], + Optional[bool], +] + +CompareType = Callable[ + [ + MigrationContext, + "Column[Any]", + "Column[Any]", + "TypeEngine[Any]", + "TypeEngine[Any]", + ], + Optional[bool], +] + + +class EnvironmentContext(util.ModuleClsProxy): + """A configurational facade made available in an ``env.py`` script. + + The :class:`.EnvironmentContext` acts as a *facade* to the more + nuts-and-bolts objects of :class:`.MigrationContext` as well as certain + aspects of :class:`.Config`, + within the context of the ``env.py`` script that is invoked by + most Alembic commands. + + :class:`.EnvironmentContext` is normally instantiated + when a command in :mod:`alembic.command` is run. It then makes + itself available in the ``alembic.context`` module for the scope + of the command. 
From within an ``env.py`` script, the current + :class:`.EnvironmentContext` is available by importing this module. + + :class:`.EnvironmentContext` also supports programmatic usage. + At this level, it acts as a Python context manager, that is, is + intended to be used using the + ``with:`` statement. A typical use of :class:`.EnvironmentContext`:: + + from alembic.config import Config + from alembic.script import ScriptDirectory + + config = Config() + config.set_main_option("script_location", "myapp:migrations") + script = ScriptDirectory.from_config(config) + + + def my_function(rev, context): + '''do something with revision "rev", which + will be the current database revision, + and "context", which is the MigrationContext + that the env.py will create''' + + + with EnvironmentContext( + config, + script, + fn=my_function, + as_sql=False, + starting_rev="base", + destination_rev="head", + tag="sometag", + ): + script.run_env() + + The above script will invoke the ``env.py`` script + within the migration environment. If and when ``env.py`` + calls :meth:`.MigrationContext.run_migrations`, the + ``my_function()`` function above will be called + by the :class:`.MigrationContext`, given the context + itself as well as the current revision in the database. + + .. note:: + + For most API usages other than full blown + invocation of migration scripts, the :class:`.MigrationContext` + and :class:`.ScriptDirectory` objects can be created and + used directly. The :class:`.EnvironmentContext` object + is *only* needed when you need to actually invoke the + ``env.py`` module present in the migration environment. + + """ + + _migration_context: Optional[MigrationContext] = None + + config: Config = None # type:ignore[assignment] + """An instance of :class:`.Config` representing the + configuration file contents as well as other variables + set programmatically within it.""" + + script: ScriptDirectory = None # type:ignore[assignment] + """An instance of :class:`.ScriptDirectory` which provides + programmatic access to version files within the ``versions/`` + directory. + + """ + + def __init__( + self, config: Config, script: ScriptDirectory, **kw: Any + ) -> None: + r"""Construct a new :class:`.EnvironmentContext`. + + :param config: a :class:`.Config` instance. + :param script: a :class:`.ScriptDirectory` instance. + :param \**kw: keyword options that will be ultimately + passed along to the :class:`.MigrationContext` when + :meth:`.EnvironmentContext.configure` is called. + + """ + self.config = config + self.script = script + self.context_opts = kw + + def __enter__(self) -> EnvironmentContext: + """Establish a context which provides a + :class:`.EnvironmentContext` object to + env.py scripts. + + The :class:`.EnvironmentContext` will + be made available as ``from alembic import context``. + + """ + self._install_proxy() + return self + + def __exit__(self, *arg: Any, **kw: Any) -> None: + self._remove_proxy() + + def is_offline_mode(self) -> bool: + """Return True if the current migrations environment + is running in "offline mode". + + This is ``True`` or ``False`` depending + on the ``--sql`` flag passed. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + """ + return self.context_opts.get("as_sql", False) # type: ignore[no-any-return] # noqa: E501 + + def is_transactional_ddl(self) -> bool: + """Return True if the context is configured to expect a + transactional DDL capable backend. 
+ + This defaults to the type of database in use, and + can be overridden by the ``transactional_ddl`` argument + to :meth:`.configure` + + This function requires that a :class:`.MigrationContext` + has first been made available via :meth:`.configure`. + + """ + return self.get_context().impl.transactional_ddl + + def requires_connection(self) -> bool: + return not self.is_offline_mode() + + def get_head_revision(self) -> _RevNumber: + """Return the hex identifier of the 'head' script revision. + + If the script directory has multiple heads, this + method raises a :class:`.CommandError`; + :meth:`.EnvironmentContext.get_head_revisions` should be preferred. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + .. seealso:: :meth:`.EnvironmentContext.get_head_revisions` + + """ + return self.script.as_revision_number("head") + + def get_head_revisions(self) -> _RevNumber: + """Return the hex identifier of the 'heads' script revision(s). + + This returns a tuple containing the version number of all + heads in the script directory. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + """ + return self.script.as_revision_number("heads") + + def get_starting_revision_argument(self) -> _RevNumber: + """Return the 'starting revision' argument, + if the revision was passed using ``start:end``. + + This is only meaningful in "offline" mode. + Returns ``None`` if no value is available + or was configured. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + """ + if self._migration_context is not None: + return self.script.as_revision_number( + self.get_context()._start_from_rev + ) + elif "starting_rev" in self.context_opts: + return self.script.as_revision_number( + self.context_opts["starting_rev"] + ) + else: + # this should raise only in the case that a command + # is being run where the "starting rev" is never applicable; + # this is to catch scripts which rely upon this in + # non-sql mode or similar + raise util.CommandError( + "No starting revision argument is available." + ) + + def get_revision_argument(self) -> _RevNumber: + """Get the 'destination' revision argument. + + This is typically the argument passed to the + ``upgrade`` or ``downgrade`` command. + + If it was specified as ``head``, the actual + version number is returned; if specified + as ``base``, ``None`` is returned. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + """ + return self.script.as_revision_number( + self.context_opts["destination_rev"] + ) + + def get_tag_argument(self) -> Optional[str]: + """Return the value passed for the ``--tag`` argument, if any. + + The ``--tag`` argument is not used directly by Alembic, + but is available for custom ``env.py`` configurations that + wish to use it; particularly for offline generation scripts + that wish to generate tagged filenames. + + This function does not require that the :class:`.MigrationContext` + has been configured. + + .. seealso:: + + :meth:`.EnvironmentContext.get_x_argument` - a newer and more + open ended system of extending ``env.py`` scripts via the command + line. + + """ + return self.context_opts.get("tag", None) + + @overload + def get_x_argument(self, as_dictionary: Literal[False]) -> List[str]: ... + + @overload + def get_x_argument( + self, as_dictionary: Literal[True] + ) -> Dict[str, str]: ... + + @overload + def get_x_argument( + self, as_dictionary: bool = ... 
+ ) -> Union[List[str], Dict[str, str]]: ... + + def get_x_argument( + self, as_dictionary: bool = False + ) -> Union[List[str], Dict[str, str]]: + """Return the value(s) passed for the ``-x`` argument, if any. + + The ``-x`` argument is an open ended flag that allows any user-defined + value or values to be passed on the command line, then available + here for consumption by a custom ``env.py`` script. + + The return value is a list, returned directly from the ``argparse`` + structure. If ``as_dictionary=True`` is passed, the ``x`` arguments + are parsed using ``key=value`` format into a dictionary that is + then returned. If there is no ``=`` in the argument, value is an empty + string. + + .. versionchanged:: 1.13.1 Support ``as_dictionary=True`` when + arguments are passed without the ``=`` symbol. + + For example, to support passing a database URL on the command line, + the standard ``env.py`` script can be modified like this:: + + cmd_line_url = context.get_x_argument( + as_dictionary=True).get('dbname') + if cmd_line_url: + engine = create_engine(cmd_line_url) + else: + engine = engine_from_config( + config.get_section(config.config_ini_section), + prefix='sqlalchemy.', + poolclass=pool.NullPool) + + This then takes effect by running the ``alembic`` script as:: + + alembic -x dbname=postgresql://user:pass@host/dbname upgrade head + + This function does not require that the :class:`.MigrationContext` + has been configured. + + .. seealso:: + + :meth:`.EnvironmentContext.get_tag_argument` + + :attr:`.Config.cmd_opts` + + """ + if self.config.cmd_opts is not None: + value = self.config.cmd_opts.x or [] + else: + value = [] + if as_dictionary: + dict_value = {} + for arg in value: + x_key, _, x_value = arg.partition("=") + dict_value[x_key] = x_value + value = dict_value + + return value + + def configure( + self, + connection: Optional[Connection] = None, + url: Optional[Union[str, URL]] = None, + dialect_name: Optional[str] = None, + dialect_opts: Optional[Dict[str, Any]] = None, + transactional_ddl: Optional[bool] = None, + transaction_per_migration: bool = False, + output_buffer: Optional[TextIO] = None, + starting_rev: Optional[str] = None, + tag: Optional[str] = None, + template_args: Optional[Dict[str, Any]] = None, + render_as_batch: bool = False, + target_metadata: Union[MetaData, Sequence[MetaData], None] = None, + include_name: Optional[IncludeNameFn] = None, + include_object: Optional[IncludeObjectFn] = None, + include_schemas: bool = False, + process_revision_directives: Optional[ + ProcessRevisionDirectiveFn + ] = None, + compare_type: Union[bool, CompareType] = True, + compare_server_default: Union[bool, CompareServerDefault] = False, + render_item: Optional[RenderItemFn] = None, + literal_binds: bool = False, + upgrade_token: str = "upgrades", + downgrade_token: str = "downgrades", + alembic_module_prefix: str = "op.", + sqlalchemy_module_prefix: str = "sa.", + user_module_prefix: Optional[str] = None, + on_version_apply: Optional[OnVersionApplyFn] = None, + **kw: Any, + ) -> None: + """Configure a :class:`.MigrationContext` within this + :class:`.EnvironmentContext` which will provide database + connectivity and other configuration to a series of + migration scripts. + + Many methods on :class:`.EnvironmentContext` require that + this method has been called in order to function, as they + ultimately need to have database access or at least access + to the dialect in use. Those which do are documented as such. 
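+ + For example, a minimal offline (``--sql`` mode) setup can be sketched as follows; the URL here is a placeholder used only to select a dialect and is never actually connected to:: + + context.configure( + url="postgresql://scott:tiger@localhost/test", + literal_binds=True, + )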
+ + The important thing needed by :meth:`.configure` is a + means to determine what kind of database dialect is in use. + An actual connection to that database is needed only if + the :class:`.MigrationContext` is to be used in + "online" mode. + + If the :meth:`.is_offline_mode` function returns ``True``, + then no connection is needed here. Otherwise, the + ``connection`` parameter should be present as an + instance of :class:`sqlalchemy.engine.Connection`. + + This function is typically called from the ``env.py`` + script within a migration environment. It can be called + multiple times for an invocation. The most recent + :class:`~sqlalchemy.engine.Connection` + for which it was called is the one that will be operated upon + by the next call to :meth:`.run_migrations`. + + General parameters: + + :param connection: a :class:`~sqlalchemy.engine.Connection` + to use + for SQL execution in "online" mode. When present, is also + used to determine the type of dialect in use. + :param url: a string database url, or a + :class:`sqlalchemy.engine.url.URL` object. + The type of dialect to be used will be derived from this if + ``connection`` is not passed. + :param dialect_name: string name of a dialect, such as + "postgresql", "mssql", etc. + The type of dialect to be used will be derived from this if + ``connection`` and ``url`` are not passed. + :param dialect_opts: dictionary of options to be passed to dialect + constructor. + :param transactional_ddl: Force the usage of "transactional" + DDL on or off; + this otherwise defaults to whether or not the dialect in + use supports it. + :param transaction_per_migration: if True, nest each migration script + in a transaction rather than the full series of migrations to + run. + :param output_buffer: a file-like object that will be used + for textual output + when the ``--sql`` option is used to generate SQL scripts. + Defaults to + ``sys.stdout`` if not passed here and also not present on + the :class:`.Config` + object. The value here overrides that of the :class:`.Config` + object. + :param output_encoding: when using ``--sql`` to generate SQL + scripts, apply this encoding to the string output. + :param literal_binds: when using ``--sql`` to generate SQL + scripts, pass through the ``literal_binds`` flag to the compiler + so that any literal values that would ordinarily be bound + parameters are converted to plain strings. + + .. warning:: Dialects can typically only handle simple datatypes + like strings and numbers for auto-literal generation. Datatypes + like dates, intervals, and others may still require manual + formatting, typically using :meth:`.Operations.inline_literal`. + + .. note:: the ``literal_binds`` flag is ignored on SQLAlchemy + versions prior to 0.8 where this feature is not supported. + + .. seealso:: + + :meth:`.Operations.inline_literal` + + :param starting_rev: Override the "starting revision" argument + when using ``--sql`` mode. + :param tag: a string tag for usage by custom ``env.py`` scripts. + Set via the ``--tag`` option, can be overridden here. + :param template_args: dictionary of template arguments which + will be added to the template argument environment when + running the "revision" command. Note that the script environment + is only run within the "revision" command if the --autogenerate + option is used, or if the option "revision_environment=true" + is present in the alembic.ini file. + + :param version_table: The name of the Alembic version table. + The default is ``'alembic_version'``. 
:param version_table_schema: Optional schema to place version + table within. + :param version_table_pk: boolean, whether the Alembic version table + should use a primary key constraint for the "value" column; this + only takes effect when the table is first created. + Defaults to True; setting to False should not be necessary and is + here for backwards compatibility reasons. + :param on_version_apply: a callable or collection of callables to be + run for each migration step. + The callables will be run in the order they are given, once for + each migration step, after the respective operation has been + applied but before its transaction is finalized. + Each callable accepts no positional arguments and the following + keyword arguments: + + * ``ctx``: the :class:`.MigrationContext` running the migration, + * ``step``: a :class:`.MigrationInfo` representing the + step currently being applied, + * ``heads``: a collection of version strings representing the + current heads, + * ``run_args``: the ``**kwargs`` passed to :meth:`.run_migrations`. + + Parameters specific to the autogenerate feature, when + ``alembic revision`` is run with the ``--autogenerate`` feature: + + :param target_metadata: a :class:`sqlalchemy.schema.MetaData` + object, or a sequence of :class:`~sqlalchemy.schema.MetaData` + objects, that will be consulted during autogeneration. + The tables present in each :class:`~sqlalchemy.schema.MetaData` + will be compared against + what is locally available on the target + :class:`~sqlalchemy.engine.Connection` + to produce candidate upgrade/downgrade operations. + :param compare_type: Indicates type comparison behavior during + an autogenerate + operation. Defaults to ``True``, turning on type comparison, which + has good accuracy on most backends. See :ref:`compare_types` + for an example as well as information on other type + comparison options. Set to ``False`` to disable type + comparison. A callable can also be passed to provide custom type + comparison, see :ref:`compare_types` for additional details. + + .. versionchanged:: 1.12.0 The default value of + :paramref:`.EnvironmentContext.configure.compare_type` has been + changed to ``True``. + + .. seealso:: + + :ref:`compare_types` + + :paramref:`.EnvironmentContext.configure.compare_server_default` + + :param compare_server_default: Indicates server default comparison + behavior during + an autogenerate operation. Defaults to ``False``, which disables + server default + comparison. Set to ``True`` to turn on server default comparison, + which has + varied accuracy depending on backend. + + To customize server default comparison behavior, a callable may + be specified + which can filter server default comparisons during an + autogenerate operation. The format of this + callable is:: + + def my_compare_server_default(context, inspected_column, + metadata_column, inspected_default, metadata_default, + rendered_metadata_default): + # return True if the defaults are different, + # False if not, or None to allow the default implementation + # to compare these defaults + return None + + context.configure( + # ... + compare_server_default = my_compare_server_default + ) + + ``inspected_column`` is a dictionary structure as returned by + :meth:`sqlalchemy.engine.reflection.Inspector.get_columns`, whereas + ``metadata_column`` is a :class:`sqlalchemy.schema.Column` from + the local model environment.
+ + A return value of ``None`` indicates to allow default server default + comparison + to proceed. Note that some backends such as Postgresql actually + execute + the two defaults on the database side to compare for equivalence. + + .. seealso:: + + :paramref:`.EnvironmentContext.configure.compare_type` + + :param include_name: A callable function which is given + the chance to return ``True`` or ``False`` for any database reflected + object based on its name, including database schema names when + the :paramref:`.EnvironmentContext.configure.include_schemas` flag + is set to ``True``. + + The function accepts the following positional arguments: + + * ``name``: the name of the object, such as schema name or table name. + Will be ``None`` when indicating the default schema name of the + database connection. + * ``type``: a string describing the type of object; currently + ``"schema"``, ``"table"``, ``"column"``, ``"index"``, + ``"unique_constraint"``, or ``"foreign_key_constraint"`` + * ``parent_names``: a dictionary of "parent" object names, that are + relative to the name being given. Keys in this dictionary may + include: ``"schema_name"``, ``"table_name"`` or + ``"schema_qualified_table_name"``. + + E.g.:: + + def include_name(name, type_, parent_names): + if type_ == "schema": + return name in ["schema_one", "schema_two"] + else: + return True + + context.configure( + # ... + include_schemas = True, + include_name = include_name + ) + + .. seealso:: + + :ref:`autogenerate_include_hooks` + + :paramref:`.EnvironmentContext.configure.include_object` + + :paramref:`.EnvironmentContext.configure.include_schemas` + + + :param include_object: A callable function which is given + the chance to return ``True`` or ``False`` for any object, + indicating if the given object should be considered in the + autogenerate sweep. + + The function accepts the following positional arguments: + + * ``object``: a :class:`~sqlalchemy.schema.SchemaItem` object such + as a :class:`~sqlalchemy.schema.Table`, + :class:`~sqlalchemy.schema.Column`, + :class:`~sqlalchemy.schema.Index` + :class:`~sqlalchemy.schema.UniqueConstraint`, + or :class:`~sqlalchemy.schema.ForeignKeyConstraint` object + * ``name``: the name of the object. This is typically available + via ``object.name``. + * ``type``: a string describing the type of object; currently + ``"table"``, ``"column"``, ``"index"``, ``"unique_constraint"``, + or ``"foreign_key_constraint"`` + * ``reflected``: ``True`` if the given object was produced based on + table reflection, ``False`` if it's from a local :class:`.MetaData` + object. + * ``compare_to``: the object being compared against, if available, + else ``None``. + + E.g.:: + + def include_object(object, name, type_, reflected, compare_to): + if (type_ == "column" and + not reflected and + object.info.get("skip_autogenerate", False)): + return False + else: + return True + + context.configure( + # ... + include_object = include_object + ) + + For the use case of omitting specific schemas from a target database + when :paramref:`.EnvironmentContext.configure.include_schemas` is + set to ``True``, the :attr:`~sqlalchemy.schema.Table.schema` + attribute can be checked for each :class:`~sqlalchemy.schema.Table` + object passed to the hook, however it is much more efficient + to filter on schemas before reflection of objects takes place + using the :paramref:`.EnvironmentContext.configure.include_name` + hook. + + .. 
seealso:: + + :ref:`autogenerate_include_hooks` + + :paramref:`.EnvironmentContext.configure.include_name` + + :paramref:`.EnvironmentContext.configure.include_schemas` + + :param render_as_batch: if True, commands which alter elements + within a table will be placed under a ``with batch_alter_table():`` + directive, so that batch migrations will take place. + + .. seealso:: + + :ref:`batch_migrations` + + :param include_schemas: If True, autogenerate will scan across + all schemas located by the SQLAlchemy + :meth:`~sqlalchemy.engine.reflection.Inspector.get_schema_names` + method, and include all differences in tables found across all + those schemas. When using this option, you may want to also + use the :paramref:`.EnvironmentContext.configure.include_name` + parameter to specify a callable which + can filter the tables/schemas that get included. + + .. seealso:: + + :ref:`autogenerate_include_hooks` + + :paramref:`.EnvironmentContext.configure.include_name` + + :paramref:`.EnvironmentContext.configure.include_object` + + :param render_item: Callable that can be used to override how + any schema item, i.e. column, constraint, type, + etc., is rendered for autogenerate. The callable receives a + string describing the type of object, the object, and + the autogen context. If it returns False, the + default rendering method will be used. If it returns None, + the item will not be rendered in the context of a Table + construct, that is, can be used to skip columns or constraints + within op.create_table():: + + def my_render_column(type_, col, autogen_context): + if type_ == "column" and isinstance(col, MySpecialCol): + return repr(col) + else: + return False + + context.configure( + # ... + render_item = my_render_column + ) + + Available values for the type string include: ``"column"``, + ``"primary_key"``, ``"foreign_key"``, ``"unique"``, ``"check"``, + ``"type"``, ``"server_default"``. + + .. seealso:: + + :ref:`autogen_render_types` + + :param upgrade_token: When autogenerate completes, the text of the + candidate upgrade operations will be present in this template + variable when ``script.py.mako`` is rendered. Defaults to + ``upgrades``. + :param downgrade_token: When autogenerate completes, the text of the + candidate downgrade operations will be present in this + template variable when ``script.py.mako`` is rendered. Defaults to + ``downgrades``. + + :param alembic_module_prefix: When autogenerate refers to Alembic + :mod:`alembic.operations` constructs, this prefix will be used + (i.e. ``op.create_table``) Defaults to "``op.``". + Can be ``None`` to indicate no prefix. + + :param sqlalchemy_module_prefix: When autogenerate refers to + SQLAlchemy + :class:`~sqlalchemy.schema.Column` or type classes, this prefix + will be used + (i.e. ``sa.Column("somename", sa.Integer)``) Defaults to "``sa.``". + Can be ``None`` to indicate no prefix. + Note that when dialect-specific types are rendered, autogenerate + will render them using the dialect module name, i.e. ``mssql.BIT()``, + ``postgresql.UUID()``. + + :param user_module_prefix: When autogenerate refers to a SQLAlchemy + type (e.g. :class:`.TypeEngine`) where the module name is not + under the ``sqlalchemy`` namespace, this prefix will be used + within autogenerate. If left at its default of + ``None``, the ``__module__`` attribute of the type is used to + render the import module. 
It's a good practice to set this + and to have all custom types be available from a fixed module space, + in order to future-proof migration files against reorganizations + in modules. + + .. seealso:: + + :ref:`autogen_module_prefix` + + :param process_revision_directives: a callable function that will + be passed a structure representing the end result of an autogenerate + or plain "revision" operation, which can be manipulated to affect + how the ``alembic revision`` command ultimately outputs new + revision scripts. The structure of the callable is:: + + def process_revision_directives(context, revision, directives): + pass + + The ``directives`` parameter is a Python list containing + a single :class:`.MigrationScript` directive, which represents + the revision file to be generated. This list as well as its + contents may be freely modified to produce any set of commands. + The section :ref:`customizing_revision` shows an example of + doing this. The ``context`` parameter is the + :class:`.MigrationContext` in use, + and ``revision`` is a tuple of revision identifiers representing the + current revision of the database. + + The callable is invoked at all times when the ``--autogenerate`` + option is passed to ``alembic revision``. If ``--autogenerate`` + is not passed, the callable is invoked only if the + ``revision_environment`` variable is set to True in the Alembic + configuration, in which case the given ``directives`` collection + will contain empty :class:`.UpgradeOps` and :class:`.DowngradeOps` + collections for ``.upgrade_ops`` and ``.downgrade_ops``. The + ``--autogenerate`` option itself can be inferred by inspecting + ``context.config.cmd_opts.autogenerate``. + + The callable function may optionally be an instance of + a :class:`.Rewriter` object. This is a helper object that + assists in the production of autogenerate-stream rewriter functions. + + .. seealso:: + + :ref:`customizing_revision` + + :ref:`autogen_rewriter` + + :paramref:`.command.revision.process_revision_directives` + + Parameters specific to individual backends: + + :param mssql_batch_separator: The "batch separator" which will + be placed between each statement when generating offline SQL Server + migrations. Defaults to ``GO``. Note this is in addition to the + customary semicolon ``;`` at the end of each statement; SQL Server + considers the "batch separator" to denote the end of an + individual statement execution, and cannot group certain + dependent operations in one step. + :param oracle_batch_separator: The "batch separator" which will + be placed between each statement when generating offline + Oracle migrations. Defaults to ``/``. Oracle doesn't add a + semicolon between statements like most other backends. 
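+ + As an example of typical online-mode usage from ``env.py``, sketched here with a hypothetical ``connectable`` :class:`~sqlalchemy.engine.Engine` and ``target_metadata`` :class:`~sqlalchemy.schema.MetaData` assumed to be defined earlier in the script:: + + with connectable.connect() as connection: + context.configure( + connection=connection, + target_metadata=target_metadata, + ) + + with context.begin_transaction(): + context.run_migrations()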
+ + """ + opts = self.context_opts + if transactional_ddl is not None: + opts["transactional_ddl"] = transactional_ddl + if output_buffer is not None: + opts["output_buffer"] = output_buffer + elif self.config.output_buffer is not None: + opts["output_buffer"] = self.config.output_buffer + if starting_rev: + opts["starting_rev"] = starting_rev + if tag: + opts["tag"] = tag + if template_args and "template_args" in opts: + opts["template_args"].update(template_args) + opts["transaction_per_migration"] = transaction_per_migration + opts["target_metadata"] = target_metadata + opts["include_name"] = include_name + opts["include_object"] = include_object + opts["include_schemas"] = include_schemas + opts["render_as_batch"] = render_as_batch + opts["upgrade_token"] = upgrade_token + opts["downgrade_token"] = downgrade_token + opts["sqlalchemy_module_prefix"] = sqlalchemy_module_prefix + opts["alembic_module_prefix"] = alembic_module_prefix + opts["user_module_prefix"] = user_module_prefix + opts["literal_binds"] = literal_binds + opts["process_revision_directives"] = process_revision_directives + opts["on_version_apply"] = util.to_tuple(on_version_apply, default=()) + + if render_item is not None: + opts["render_item"] = render_item + opts["compare_type"] = compare_type + if compare_server_default is not None: + opts["compare_server_default"] = compare_server_default + opts["script"] = self.script + + opts.update(kw) + + self._migration_context = MigrationContext.configure( + connection=connection, + url=url, + dialect_name=dialect_name, + environment_context=self, + dialect_opts=dialect_opts, + opts=opts, + ) + + def run_migrations(self, **kw: Any) -> None: + """Run migrations as determined by the current command line + configuration + as well as versioning information present (or not) in the current + database connection (if one is present). + + The function accepts optional ``**kw`` arguments. If these are + passed, they are sent directly to the ``upgrade()`` and + ``downgrade()`` + functions within each target revision file. By modifying the + ``script.py.mako`` file so that the ``upgrade()`` and ``downgrade()`` + functions accept arguments, parameters can be passed here so that + contextual information, usually information to identify a particular + database in use, can be passed from a custom ``env.py`` script + to the migration functions. + + This function requires that a :class:`.MigrationContext` has + first been made available via :meth:`.configure`. + + """ + assert self._migration_context is not None + with Operations.context(self._migration_context): + self.get_context().run_migrations(**kw) + + def execute( + self, + sql: Union[Executable, str], + execution_options: Optional[Dict[str, Any]] = None, + ) -> None: + """Execute the given SQL using the current change context. + + The behavior of :meth:`.execute` is the same + as that of :meth:`.Operations.execute`. Please see that + function's documentation for full detail including + caveats and limitations. + + This function requires that a :class:`.MigrationContext` has + first been made available via :meth:`.configure`. + + """ + self.get_context().execute(sql, execution_options=execution_options) + + def static_output(self, text: str) -> None: + """Emit text directly to the "offline" SQL stream. + + Typically this is for emitting comments that + start with --. The statement is not treated + as a SQL execution, no ; or batch separator + is added, etc. 
+ + """ + self.get_context().impl.static_output(text) + + def begin_transaction( + self, + ) -> Union[_ProxyTransaction, ContextManager[None, Optional[bool]]]: + """Return a context manager that will + enclose an operation within a "transaction", + as defined by the environment's offline + and transactional DDL settings. + + e.g.:: + + with context.begin_transaction(): + context.run_migrations() + + :meth:`.begin_transaction` is intended to + "do the right thing" regardless of + calling context: + + * If :meth:`.is_transactional_ddl` is ``False``, + returns a "do nothing" context manager + which otherwise produces no transactional + state or directives. + * If :meth:`.is_offline_mode` is ``True``, + returns a context manager that will + invoke the :meth:`.DefaultImpl.emit_begin` + and :meth:`.DefaultImpl.emit_commit` + methods, which will produce the string + directives ``BEGIN`` and ``COMMIT`` on + the output stream, as rendered by the + target backend (e.g. SQL Server would + emit ``BEGIN TRANSACTION``). + * Otherwise, calls :meth:`sqlalchemy.engine.Connection.begin` + on the current online connection, which + returns a :class:`sqlalchemy.engine.Transaction` + object. This object demarcates a real + transaction and is itself a context manager, + which will roll back if an exception + is raised. + + Note that a custom ``env.py`` script which + has more specific transactional needs can of course + manipulate the :class:`~sqlalchemy.engine.Connection` + directly to produce transactional state in "online" + mode. + + """ + + return self.get_context().begin_transaction() + + def get_context(self) -> MigrationContext: + """Return the current :class:`.MigrationContext` object. + + If :meth:`.EnvironmentContext.configure` has not been + called yet, raises an exception. + + """ + + if self._migration_context is None: + raise Exception("No context has been configured yet.") + return self._migration_context + + def get_bind(self) -> Connection: + """Return the current 'bind'. + + In "online" mode, this is the + :class:`sqlalchemy.engine.Connection` currently being used + to emit SQL to the database. + + This function requires that a :class:`.MigrationContext` + has first been made available via :meth:`.configure`. 
+ + """ + return self.get_context().bind # type: ignore[return-value] + + def get_impl(self) -> DefaultImpl: + return self.get_context().impl diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/runtime/migration.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/runtime/migration.py new file mode 100644 index 000000000..c1c7b0fc5 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/runtime/migration.py @@ -0,0 +1,1395 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +from contextlib import contextmanager +from contextlib import nullcontext +import logging +import sys +from typing import Any +from typing import Callable +from typing import cast +from typing import Collection +from typing import Dict +from typing import Iterable +from typing import Iterator +from typing import List +from typing import Optional +from typing import Set +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from sqlalchemy import Column +from sqlalchemy import literal_column +from sqlalchemy import select +from sqlalchemy.engine import Engine +from sqlalchemy.engine import url as sqla_url +from sqlalchemy.engine.strategies import MockEngineStrategy +from typing_extensions import ContextManager + +from .. import ddl +from .. import util +from ..util import sqla_compat +from ..util.compat import EncodedIO + +if TYPE_CHECKING: + from sqlalchemy.engine import Dialect + from sqlalchemy.engine import URL + from sqlalchemy.engine.base import Connection + from sqlalchemy.engine.base import Transaction + from sqlalchemy.engine.mock import MockConnection + from sqlalchemy.sql import Executable + + from .environment import EnvironmentContext + from ..config import Config + from ..script.base import Script + from ..script.base import ScriptDirectory + from ..script.revision import _RevisionOrBase + from ..script.revision import Revision + from ..script.revision import RevisionMap + +log = logging.getLogger(__name__) + + +class _ProxyTransaction: + def __init__(self, migration_context: MigrationContext) -> None: + self.migration_context = migration_context + + @property + def _proxied_transaction(self) -> Optional[Transaction]: + return self.migration_context._transaction + + def rollback(self) -> None: + t = self._proxied_transaction + assert t is not None + t.rollback() + self.migration_context._transaction = None + + def commit(self) -> None: + t = self._proxied_transaction + assert t is not None + t.commit() + self.migration_context._transaction = None + + def __enter__(self) -> _ProxyTransaction: + return self + + def __exit__(self, type_: Any, value: Any, traceback: Any) -> None: + if self._proxied_transaction is not None: + self._proxied_transaction.__exit__(type_, value, traceback) + self.migration_context._transaction = None + + +class MigrationContext: + """Represent the database state made available to a migration + script. + + :class:`.MigrationContext` is the front end to an actual + database connection, or alternatively a string output + stream given a particular database dialect, + from an Alembic perspective. 
When inside the ``env.py`` script, the :class:`.MigrationContext` + is available via the + :meth:`.EnvironmentContext.get_context` method, + which is available at ``alembic.context``:: + + # from within env.py script + from alembic import context + + migration_context = context.get_context() + + For usage outside of an ``env.py`` script, such as for + utility routines that want to check the current version + in the database, the :meth:`.MigrationContext.configure` + method may be used to create new :class:`.MigrationContext` objects. + For example, to get at the current revision in the + database using :meth:`.MigrationContext.get_current_revision`:: + + # in any application, outside of an env.py script + from alembic.migration import MigrationContext + from sqlalchemy import create_engine + + engine = create_engine("postgresql://mydatabase") + conn = engine.connect() + + context = MigrationContext.configure(conn) + current_rev = context.get_current_revision() + + The above context can also be used to produce + Alembic migration operations with an :class:`.Operations` + instance:: + + # in any application, outside of the normal Alembic environment + from alembic.operations import Operations + + op = Operations(context) + op.alter_column("mytable", "somecolumn", nullable=True) + + """ + + def __init__( + self, + dialect: Dialect, + connection: Optional[Connection], + opts: Dict[str, Any], + environment_context: Optional[EnvironmentContext] = None, + ) -> None: + self.environment_context = environment_context + self.opts = opts + self.dialect = dialect + self.script: Optional[ScriptDirectory] = opts.get("script") + as_sql: bool = opts.get("as_sql", False) + transactional_ddl = opts.get("transactional_ddl") + self._transaction_per_migration = opts.get( + "transaction_per_migration", False + ) + self.on_version_apply_callbacks = opts.get("on_version_apply", ()) + self._transaction: Optional[Transaction] = None + + if as_sql: + self.connection = cast( + Optional["Connection"], self._stdout_connection(connection) + ) + assert self.connection is not None + self._in_external_transaction = False + else: + self.connection = connection + self._in_external_transaction = ( + sqla_compat._get_connection_in_transaction(connection) + ) + + self._migrations_fn: Optional[ + Callable[..., Iterable[RevisionStep]] + ] = opts.get("fn") + self.as_sql = as_sql + + self.purge = opts.get("purge", False) + + if "output_encoding" in opts: + self.output_buffer = EncodedIO( + opts.get("output_buffer") + or sys.stdout, # type:ignore[arg-type] + opts["output_encoding"], + ) + else: + self.output_buffer = opts.get( + "output_buffer", sys.stdout + ) # type:ignore[assignment] # noqa: E501 + + self.transactional_ddl = transactional_ddl + + self._user_compare_type = opts.get("compare_type", True) + self._user_compare_server_default = opts.get( + "compare_server_default", False + ) + self.version_table = version_table = opts.get( + "version_table", "alembic_version" + ) + self.version_table_schema = version_table_schema = opts.get( + "version_table_schema", None + ) + + self._start_from_rev: Optional[str] = opts.get("starting_rev") + self.impl = ddl.DefaultImpl.get_by_dialect(dialect)( + dialect, + self.connection, + self.as_sql, + transactional_ddl, + self.output_buffer, + opts, + ) + + self._version = self.impl.version_table_impl( + version_table=version_table, + version_table_schema=version_table_schema, + version_table_pk=opts.get("version_table_pk", True), + ) + + log.info("Context impl %s.", self.impl.__class__.__name__) + if
self.as_sql: + log.info("Generating static SQL") + log.info( + "Will assume %s DDL.", + ( + "transactional" + if self.impl.transactional_ddl + else "non-transactional" + ), + ) + + @classmethod + def configure( + cls, + connection: Optional[Connection] = None, + url: Optional[Union[str, URL]] = None, + dialect_name: Optional[str] = None, + dialect: Optional[Dialect] = None, + environment_context: Optional[EnvironmentContext] = None, + dialect_opts: Optional[Dict[str, str]] = None, + opts: Optional[Any] = None, + ) -> MigrationContext: + """Create a new :class:`.MigrationContext`. + + This is a factory method usually called + by :meth:`.EnvironmentContext.configure`. + + :param connection: a :class:`~sqlalchemy.engine.Connection` + to use for SQL execution in "online" mode. When present, + is also used to determine the type of dialect in use. + :param url: a string database url, or a + :class:`sqlalchemy.engine.url.URL` object. + The type of dialect to be used will be derived from this if + ``connection`` is not passed. + :param dialect_name: string name of a dialect, such as + "postgresql", "mssql", etc. The type of dialect to be used will be + derived from this if ``connection`` and ``url`` are not passed. + :param opts: dictionary of options. Most other options + accepted by :meth:`.EnvironmentContext.configure` are passed via + this dictionary. + + """ + if opts is None: + opts = {} + if dialect_opts is None: + dialect_opts = {} + + if connection: + if isinstance(connection, Engine): + raise util.CommandError( + "'connection' argument to configure() is expected " + "to be a sqlalchemy.engine.Connection instance, " + "got %r" % connection, + ) + + dialect = connection.dialect + elif url: + url_obj = sqla_url.make_url(url) + dialect = url_obj.get_dialect()(**dialect_opts) + elif dialect_name: + url_obj = sqla_url.make_url("%s://" % dialect_name) + dialect = url_obj.get_dialect()(**dialect_opts) + elif not dialect: + raise Exception("Connection, url, or dialect_name is required.") + assert dialect is not None + return MigrationContext(dialect, connection, opts, environment_context) + + @contextmanager + def autocommit_block(self) -> Iterator[None]: + """Enter an "autocommit" block, for databases that support AUTOCOMMIT + isolation levels. + + This special directive is intended to support the occasional database + DDL or system operation that specifically has to be run outside of + any kind of transaction block. The PostgreSQL database platform + is the most common target for this style of operation, as many + of its DDL operations must be run outside of transaction blocks, even + though the database overall supports transactional DDL. + + The method is used as a context manager within a migration script, by + calling on :meth:`.Operations.get_context` to retrieve the + :class:`.MigrationContext`, then invoking + :meth:`.MigrationContext.autocommit_block` using the ``with:`` + statement:: + + def upgrade(): + with op.get_context().autocommit_block(): + op.execute("ALTER TYPE mood ADD VALUE 'soso'") + + Above, a PostgreSQL "ALTER TYPE..ADD VALUE" directive is emitted, + which must be run outside of a transaction block at the database level. + The :meth:`.MigrationContext.autocommit_block` method makes use of the + SQLAlchemy ``AUTOCOMMIT`` isolation level setting, which against the + psycopg2 DBAPI corresponds to the ``connection.autocommit`` setting, + to ensure that the database driver is not inside of a DBAPI level + transaction block. + + ..
warning:: + + As is necessary, **the database transaction preceding the block is + unconditionally committed**. This means that the run of migrations + preceding the operation will be committed, before the overall + migration operation is complete. + + It is recommended that when an application includes migrations with + "autocommit" blocks, that + :paramref:`.EnvironmentContext.transaction_per_migration` be used + so that the calling environment is tuned to expect short per-file + migrations whether or not one of them has an autocommit block. + + + """ + _in_connection_transaction = self._in_connection_transaction() + + if self.impl.transactional_ddl and self.as_sql: + self.impl.emit_commit() + + elif _in_connection_transaction: + assert self._transaction is not None + + self._transaction.commit() + self._transaction = None + + if not self.as_sql: + assert self.connection is not None + current_level = self.connection.get_isolation_level() + base_connection = self.connection + + # in 1.3 and 1.4 non-future mode, the connection gets switched + # out. we can use the base connection with the new mode + # except that it will not know it's in "autocommit" and will + # emit deprecation warnings when an autocommit action takes + # place. + self.connection = self.impl.connection = ( + base_connection.execution_options(isolation_level="AUTOCOMMIT") + ) + + # sqlalchemy future mode will "autobegin" in any case, so take + # control of that "transaction" here + fake_trans: Optional[Transaction] = self.connection.begin() + else: + fake_trans = None + try: + yield + finally: + if not self.as_sql: + assert self.connection is not None + if fake_trans is not None: + fake_trans.commit() + self.connection.execution_options( + isolation_level=current_level + ) + self.connection = self.impl.connection = base_connection + + if self.impl.transactional_ddl and self.as_sql: + self.impl.emit_begin() + + elif _in_connection_transaction: + assert self.connection is not None + self._transaction = self.connection.begin() + + def begin_transaction( + self, _per_migration: bool = False + ) -> Union[_ProxyTransaction, ContextManager[None, Optional[bool]]]: + """Begin a logical transaction for migration operations. + + This method is used within an ``env.py`` script to demarcate where + the outer "transaction" for a series of migrations begins. Example:: + + def run_migrations_online(): + connectable = create_engine(...) + + with connectable.connect() as connection: + context.configure( + connection=connection, target_metadata=target_metadata + ) + + with context.begin_transaction(): + context.run_migrations() + + Above, :meth:`.MigrationContext.begin_transaction` is used to demarcate + where the outer logical transaction occurs around the + :meth:`.MigrationContext.run_migrations` operation. + + A "Logical" transaction means that the operation may or may not + correspond to a real database transaction. If the target database + supports transactional DDL (or + :paramref:`.EnvironmentContext.configure.transactional_ddl` is true), + the :paramref:`.EnvironmentContext.configure.transaction_per_migration` + flag is not set, and the migration is against a real database + connection (as opposed to using "offline" ``--sql`` mode), a real + transaction will be started. If ``--sql`` mode is in effect, the + operation would instead correspond to a string such as "BEGIN" being + emitted to the string output. 
+ + The returned object is a Python context manager that should only be + used in the context of a ``with:`` statement as indicated above. + The object has no other guaranteed API features present. + + .. seealso:: + + :meth:`.MigrationContext.autocommit_block` + + """ + + if self._in_external_transaction: + return nullcontext() + + if self.impl.transactional_ddl: + transaction_now = _per_migration == self._transaction_per_migration + else: + transaction_now = _per_migration is True + + if not transaction_now: + return nullcontext() + + elif not self.impl.transactional_ddl: + assert _per_migration + + if self.as_sql: + return nullcontext() + else: + # track our own notion of a "transaction block", which must be + # committed when complete. Don't rely upon whether or not the + # SQLAlchemy connection reports as "in transaction"; this + # because SQLAlchemy future connection features autobegin + # behavior, so it may already be in a transaction from our + # emitting of queries like "has_version_table", etc. While we + # could track these operations as well, that leaves open the + # possibility of new operations or other things happening in + # the user environment that still may be triggering + # "autobegin". + + in_transaction = self._transaction is not None + + if in_transaction: + return nullcontext() + else: + assert self.connection is not None + self._transaction = ( + sqla_compat._safe_begin_connection_transaction( + self.connection + ) + ) + return _ProxyTransaction(self) + elif self.as_sql: + + @contextmanager + def begin_commit(): + self.impl.emit_begin() + yield + self.impl.emit_commit() + + return begin_commit() + else: + assert self.connection is not None + self._transaction = sqla_compat._safe_begin_connection_transaction( + self.connection + ) + return _ProxyTransaction(self) + + def get_current_revision(self) -> Optional[str]: + """Return the current revision, usually that which is present + in the ``alembic_version`` table in the database. + + This method intends to be used only for a migration stream that + does not contain unmerged branches in the target database; + if there are multiple branches present, an exception is raised. + The :meth:`.MigrationContext.get_current_heads` should be preferred + over this method going forward in order to be compatible with + branch migration support. + + If this :class:`.MigrationContext` was configured in "offline" + mode, that is with ``as_sql=True``, the ``starting_rev`` + parameter is returned instead, if any. + + """ + heads = self.get_current_heads() + if len(heads) == 0: + return None + elif len(heads) > 1: + raise util.CommandError( + "Version table '%s' has more than one head present; " + "please use get_current_heads()" % self.version_table + ) + else: + return heads[0] + + def get_current_heads(self) -> Tuple[str, ...]: + """Return a tuple of the current 'head versions' that are represented + in the target database. + + For a migration stream without branches, this will be a single + value, synonymous with that of + :meth:`.MigrationContext.get_current_revision`. However when multiple + unmerged branches exist within the target database, the returned tuple + will contain a value for each head. + + If this :class:`.MigrationContext` was configured in "offline" + mode, that is with ``as_sql=True``, the ``starting_rev`` + parameter is returned in a one-length tuple. + + If no version table is present, or if there are no revisions + present, an empty tuple is returned. 
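+ + For example, to inspect the database's current heads from an ordinary application (a sketch; ``engine`` is assumed to be an existing :class:`~sqlalchemy.engine.Engine`):: + + with engine.connect() as conn: + heads = MigrationContext.configure(conn).get_current_heads()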
+ + """ + if self.as_sql: + start_from_rev: Any = self._start_from_rev + if start_from_rev == "base": + start_from_rev = None + elif start_from_rev is not None and self.script: + start_from_rev = [ + self.script.get_revision(sfr).revision + for sfr in util.to_list(start_from_rev) + if sfr not in (None, "base") + ] + return util.to_tuple(start_from_rev, default=()) + else: + if self._start_from_rev: + raise util.CommandError( + "Can't specify current_rev to context " + "when using a database connection" + ) + if not self._has_version_table(): + return () + assert self.connection is not None + return tuple( + row[0] + for row in self.connection.execute( + select(self._version.c.version_num) + ) + ) + + def _ensure_version_table(self, purge: bool = False) -> None: + with sqla_compat._ensure_scope_for_ddl(self.connection): + assert self.connection is not None + self._version.create(self.connection, checkfirst=True) + if purge: + assert self.connection is not None + self.connection.execute(self._version.delete()) + + def _has_version_table(self) -> bool: + assert self.connection is not None + return sqla_compat._connectable_has_table( + self.connection, self.version_table, self.version_table_schema + ) + + def stamp(self, script_directory: ScriptDirectory, revision: str) -> None: + """Stamp the version table with a specific revision. + + This method calculates those branches to which the given revision + can apply, and updates those branches as though they were migrated + towards that revision (either up or down). If no current branches + include the revision, it is added as a new branch head. + + """ + heads = self.get_current_heads() + if not self.as_sql and not heads: + self._ensure_version_table() + head_maintainer = HeadMaintainer(self, heads) + for step in script_directory._stamp_revs(revision, heads): + head_maintainer.update_to_step(step) + + def run_migrations(self, **kw: Any) -> None: + r"""Run the migration scripts established for this + :class:`.MigrationContext`, if any. + + The commands in :mod:`alembic.command` will set up a function + that is ultimately passed to the :class:`.MigrationContext` + as the ``fn`` argument. This function represents the "work" + that will be done when :meth:`.MigrationContext.run_migrations` + is called, typically from within the ``env.py`` script of the + migration environment. The "work function" then provides an iterable + of version callables and other version information which + in the case of the ``upgrade`` or ``downgrade`` commands are the + list of version scripts to invoke. Other commands yield nothing, + in the case that a command wants to run some other operation + against the database such as the ``current`` or ``stamp`` commands. + + :param \**kw: keyword arguments here will be passed to each + migration callable, that is the ``upgrade()`` or ``downgrade()`` + method within revision scripts. + + """ + self.impl.start_migrations() + + heads: Tuple[str, ...] 
+ if self.purge: + if self.as_sql: + raise util.CommandError("Can't use --purge with --sql mode") + self._ensure_version_table(purge=True) + heads = () + else: + heads = self.get_current_heads() + + dont_mutate = self.opts.get("dont_mutate", False) + + if not self.as_sql and not heads and not dont_mutate: + self._ensure_version_table() + + head_maintainer = HeadMaintainer(self, heads) + + assert self._migrations_fn is not None + for step in self._migrations_fn(heads, self): + with self.begin_transaction(_per_migration=True): + if self.as_sql and not head_maintainer.heads: + # for offline mode, include a CREATE TABLE from + # the base + assert self.connection is not None + self._version.create(self.connection) + log.info("Running %s", step) + if self.as_sql: + self.impl.static_output( + "-- Running %s" % (step.short_log,) + ) + step.migration_fn(**kw) + + # previously, we wouldn't stamp per migration + # if we were in a transaction, however given the more + # complex model that involves any number of inserts + # and row-targeted updates and deletes, it's simpler for now + # just to run the operations on every version + head_maintainer.update_to_step(step) + for callback in self.on_version_apply_callbacks: + callback( + ctx=self, + step=step.info, + heads=set(head_maintainer.heads), + run_args=kw, + ) + + if self.as_sql and not head_maintainer.heads: + assert self.connection is not None + self._version.drop(self.connection) + + def _in_connection_transaction(self) -> bool: + try: + meth = self.connection.in_transaction # type:ignore[union-attr] + except AttributeError: + return False + else: + return meth() + + def execute( + self, + sql: Union[Executable, str], + execution_options: Optional[Dict[str, Any]] = None, + ) -> None: + """Execute a SQL construct or string statement. + + The underlying execution mechanics are used, that is + if this is "offline mode" the SQL is written to the + output buffer, otherwise the SQL is emitted on + the current SQLAlchemy connection. + + """ + self.impl._exec(sql, execution_options) + + def _stdout_connection( + self, connection: Optional[Connection] + ) -> MockConnection: + def dump(construct, *multiparams, **params): + self.impl._exec(construct) + + return MockEngineStrategy.MockConnection(self.dialect, dump) + + @property + def bind(self) -> Optional[Connection]: + """Return the current "bind". + + In online mode, this is an instance of + :class:`sqlalchemy.engine.Connection`, and is suitable + for ad-hoc execution of any kind of usage described + in SQLAlchemy Core documentation as well as + for usage with the :meth:`sqlalchemy.schema.Table.create` + and :meth:`sqlalchemy.schema.MetaData.create_all` methods + of :class:`~sqlalchemy.schema.Table`, + :class:`~sqlalchemy.schema.MetaData`. + + Note that when "standard output" mode is enabled, + this bind will be a "mock" connection handler that cannot + return results and is only appropriate for a very limited + subset of commands. 
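+ + For example, using the bind for ad-hoc DDL (a sketch; ``sometable`` is assumed to be a :class:`~sqlalchemy.schema.Table` defined elsewhere and ``conn`` an open connection):: + + migration_context = MigrationContext.configure(conn) + sometable.create(migration_context.bind, checkfirst=True)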
+ + """ + return self.connection + + @property + def config(self) -> Optional[Config]: + """Return the :class:`.Config` used by the current environment, + if any.""" + + if self.environment_context: + return self.environment_context.config + else: + return None + + def _compare_type( + self, inspector_column: Column[Any], metadata_column: Column + ) -> bool: + if self._user_compare_type is False: + return False + + if callable(self._user_compare_type): + user_value = self._user_compare_type( + self, + inspector_column, + metadata_column, + inspector_column.type, + metadata_column.type, + ) + if user_value is not None: + return user_value + + return self.impl.compare_type(inspector_column, metadata_column) + + def _compare_server_default( + self, + inspector_column: Column[Any], + metadata_column: Column[Any], + rendered_metadata_default: Optional[str], + rendered_column_default: Optional[str], + ) -> bool: + if self._user_compare_server_default is False: + return False + + if callable(self._user_compare_server_default): + user_value = self._user_compare_server_default( + self, + inspector_column, + metadata_column, + rendered_column_default, + metadata_column.server_default, + rendered_metadata_default, + ) + if user_value is not None: + return user_value + + return self.impl.compare_server_default( + inspector_column, + metadata_column, + rendered_metadata_default, + rendered_column_default, + ) + + +class HeadMaintainer: + def __init__(self, context: MigrationContext, heads: Any) -> None: + self.context = context + self.heads = set(heads) + + def _insert_version(self, version: str) -> None: + assert version not in self.heads + self.heads.add(version) + + self.context.impl._exec( + self.context._version.insert().values( + version_num=literal_column("'%s'" % version) + ) + ) + + def _delete_version(self, version: str) -> None: + self.heads.remove(version) + + ret = self.context.impl._exec( + self.context._version.delete().where( + self.context._version.c.version_num + == literal_column("'%s'" % version) + ) + ) + + if ( + not self.context.as_sql + and self.context.dialect.supports_sane_rowcount + and ret is not None + and ret.rowcount != 1 + ): + raise util.CommandError( + "Online migration expected to match one " + "row when deleting '%s' in '%s'; " + "%d found" + % (version, self.context.version_table, ret.rowcount) + ) + + def _update_version(self, from_: str, to_: str) -> None: + assert to_ not in self.heads + self.heads.remove(from_) + self.heads.add(to_) + + ret = self.context.impl._exec( + self.context._version.update() + .values(version_num=literal_column("'%s'" % to_)) + .where( + self.context._version.c.version_num + == literal_column("'%s'" % from_) + ) + ) + + if ( + not self.context.as_sql + and self.context.dialect.supports_sane_rowcount + and ret is not None + and ret.rowcount != 1 + ): + raise util.CommandError( + "Online migration expected to match one " + "row when updating '%s' to '%s' in '%s'; " + "%d found" + % (from_, to_, self.context.version_table, ret.rowcount) + ) + + def update_to_step(self, step: Union[RevisionStep, StampStep]) -> None: + if step.should_delete_branch(self.heads): + vers = step.delete_version_num + log.debug("branch delete %s", vers) + self._delete_version(vers) + elif step.should_create_branch(self.heads): + vers = step.insert_version_num + log.debug("new branch insert %s", vers) + self._insert_version(vers) + elif step.should_merge_branches(self.heads): + # delete revs, update from rev, update to rev + ( + delete_revs, + update_from_rev, + 
update_to_rev, + ) = step.merge_branch_idents(self.heads) + log.debug( + "merge, delete %s, update %s to %s", + delete_revs, + update_from_rev, + update_to_rev, + ) + for delrev in delete_revs: + self._delete_version(delrev) + self._update_version(update_from_rev, update_to_rev) + elif step.should_unmerge_branches(self.heads): + ( + update_from_rev, + update_to_rev, + insert_revs, + ) = step.unmerge_branch_idents(self.heads) + log.debug( + "unmerge, insert %s, update %s to %s", + insert_revs, + update_from_rev, + update_to_rev, + ) + for insrev in insert_revs: + self._insert_version(insrev) + self._update_version(update_from_rev, update_to_rev) + else: + from_, to_ = step.update_version_num(self.heads) + log.debug("update %s to %s", from_, to_) + self._update_version(from_, to_) + + +class MigrationInfo: + """Exposes information about a migration step to a callback listener. + + The :class:`.MigrationInfo` object is available exclusively for the + benefit of the :paramref:`.EnvironmentContext.on_version_apply` + callback hook. + + """ + + is_upgrade: bool + """True/False: indicates whether this operation ascends or descends the + version tree.""" + + is_stamp: bool + """True/False: indicates whether this operation is a stamp (i.e. whether + it results in any actual database operations).""" + + up_revision_id: Optional[str] + """Version string corresponding to :attr:`.Revision.revision`. + + In the case of a stamp operation, it is advised to use the + :attr:`.MigrationInfo.up_revision_ids` tuple as a stamp operation can + make a single movement from one or more branches down to a single + branchpoint, in which case there will be multiple "up" revisions. + + .. seealso:: + + :attr:`.MigrationInfo.up_revision_ids` + + """ + + up_revision_ids: Tuple[str, ...] + """Tuple of version strings corresponding to :attr:`.Revision.revision`. + + In the majority of cases, this tuple will be a single value, synonymous + with the scalar value of :attr:`.MigrationInfo.up_revision_id`. + It can be multiple revision identifiers only in the case of an + ``alembic stamp`` operation which is moving downwards from multiple + branches down to their common branch point. + + """ + + down_revision_ids: Tuple[str, ...] + """Tuple of strings representing the base revisions of this migration step. + + If empty, this represents a root revision; otherwise, the first item + corresponds to :attr:`.Revision.down_revision`, and the rest are inferred + from dependencies. + """ + + revision_map: RevisionMap + """The revision map inside of which this operation occurs.""" + + def __init__( + self, + revision_map: RevisionMap, + is_upgrade: bool, + is_stamp: bool, + up_revisions: Union[str, Tuple[str, ...]], + down_revisions: Union[str, Tuple[str, ...]], + ) -> None: + self.revision_map = revision_map + self.is_upgrade = is_upgrade + self.is_stamp = is_stamp + self.up_revision_ids = util.to_tuple(up_revisions, default=()) + if self.up_revision_ids: + self.up_revision_id = self.up_revision_ids[0] + else: + # this should never be the case with + # "upgrade", "downgrade", or "stamp" as we are always + # measuring movement in terms of at least one upgrade version + self.up_revision_id = None + self.down_revision_ids = util.to_tuple(down_revisions, default=()) + + @property + def is_migration(self) -> bool: + """True/False: indicates whether this operation is a migration. + + At present this is true if and only if the migration is not a stamp.
If other operation types are added in the future, both this attribute + and :attr:`~.MigrationInfo.is_stamp` will be false. + """ + return not self.is_stamp + + @property + def source_revision_ids(self) -> Tuple[str, ...]: + """Active revisions before this migration step is applied.""" + return ( + self.down_revision_ids if self.is_upgrade else self.up_revision_ids + ) + + @property + def destination_revision_ids(self) -> Tuple[str, ...]: + """Active revisions after this migration step is applied.""" + return ( + self.up_revision_ids if self.is_upgrade else self.down_revision_ids + ) + + @property + def up_revision(self) -> Optional[Revision]: + """Get :attr:`~.MigrationInfo.up_revision_id` as + a :class:`.Revision`. + + """ + return self.revision_map.get_revision(self.up_revision_id) + + @property + def up_revisions(self) -> Tuple[Optional[_RevisionOrBase], ...]: + """Get :attr:`~.MigrationInfo.up_revision_ids` as a tuple of + :class:`Revisions <.Revision>`.""" + return self.revision_map.get_revisions(self.up_revision_ids) + + @property + def down_revisions(self) -> Tuple[Optional[_RevisionOrBase], ...]: + """Get :attr:`~.MigrationInfo.down_revision_ids` as a tuple of + :class:`Revisions <.Revision>`.""" + return self.revision_map.get_revisions(self.down_revision_ids) + + @property + def source_revisions(self) -> Tuple[Optional[_RevisionOrBase], ...]: + """Get :attr:`~MigrationInfo.source_revision_ids` as a tuple of + :class:`Revisions <.Revision>`.""" + return self.revision_map.get_revisions(self.source_revision_ids) + + @property + def destination_revisions(self) -> Tuple[Optional[_RevisionOrBase], ...]: + """Get :attr:`~MigrationInfo.destination_revision_ids` as a tuple of + :class:`Revisions <.Revision>`.""" + return self.revision_map.get_revisions(self.destination_revision_ids) + + +class MigrationStep: + from_revisions_no_deps: Tuple[str, ...] + to_revisions_no_deps: Tuple[str, ...] + is_upgrade: bool + migration_fn: Any + + if TYPE_CHECKING: + + @property + def doc(self) -> Optional[str]: ...
+ + @property + def name(self) -> str: + return self.migration_fn.__name__ + + @classmethod + def upgrade_from_script( + cls, revision_map: RevisionMap, script: Script + ) -> RevisionStep: + return RevisionStep(revision_map, script, True) + + @classmethod + def downgrade_from_script( + cls, revision_map: RevisionMap, script: Script + ) -> RevisionStep: + return RevisionStep(revision_map, script, False) + + @property + def is_downgrade(self) -> bool: + return not self.is_upgrade + + @property + def short_log(self) -> str: + return "%s %s -> %s" % ( + self.name, + util.format_as_comma(self.from_revisions_no_deps), + util.format_as_comma(self.to_revisions_no_deps), + ) + + def __str__(self): + if self.doc: + return "%s %s -> %s, %s" % ( + self.name, + util.format_as_comma(self.from_revisions_no_deps), + util.format_as_comma(self.to_revisions_no_deps), + self.doc, + ) + else: + return self.short_log + + +class RevisionStep(MigrationStep): + def __init__( + self, revision_map: RevisionMap, revision: Script, is_upgrade: bool + ) -> None: + self.revision_map = revision_map + self.revision = revision + self.is_upgrade = is_upgrade + if is_upgrade: + self.migration_fn = revision.module.upgrade + else: + self.migration_fn = revision.module.downgrade + + def __repr__(self): + return "RevisionStep(%r, is_upgrade=%r)" % ( + self.revision.revision, + self.is_upgrade, + ) + + def __eq__(self, other: object) -> bool: + return ( + isinstance(other, RevisionStep) + and other.revision == self.revision + and self.is_upgrade == other.is_upgrade + ) + + @property + def doc(self) -> Optional[str]: + return self.revision.doc + + @property + def from_revisions(self) -> Tuple[str, ...]: + if self.is_upgrade: + return self.revision._normalized_down_revisions + else: + return (self.revision.revision,) + + @property + def from_revisions_no_deps( # type:ignore[override] + self, + ) -> Tuple[str, ...]: + if self.is_upgrade: + return self.revision._versioned_down_revisions + else: + return (self.revision.revision,) + + @property + def to_revisions(self) -> Tuple[str, ...]: + if self.is_upgrade: + return (self.revision.revision,) + else: + return self.revision._normalized_down_revisions + + @property + def to_revisions_no_deps( # type:ignore[override] + self, + ) -> Tuple[str, ...]: + if self.is_upgrade: + return (self.revision.revision,) + else: + return self.revision._versioned_down_revisions + + @property + def _has_scalar_down_revision(self) -> bool: + return len(self.revision._normalized_down_revisions) == 1 + + def should_delete_branch(self, heads: Set[str]) -> bool: + """A delete is when we are a. in a downgrade and b. + we are going to the "base" or we are going to a version that + is implied as a dependency on another version that is remaining. + + """ + if not self.is_downgrade: + return False + + if self.revision.revision not in heads: + return False + + downrevs = self.revision._normalized_down_revisions + + if not downrevs: + # is a base + return True + else: + # determine what the ultimate "to_revisions" for an + # unmerge would be. If there are none, then we're a delete. 
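+            # Illustrative example (hypothetical ids): when downgrading a
+            # merge revision "m1" with down-revisions ("a1", "b1") while a
+            # remaining head still implies "b1", the unmerge targets are
+            # ("a1",), so this is an unmerge rather than a delete; only when
+            # no targets remain is the version row deleted outright.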
+ to_revisions = self._unmerge_to_revisions(heads) + return not to_revisions + + def merge_branch_idents( + self, heads: Set[str] + ) -> Tuple[List[str], str, str]: + other_heads = set(heads).difference(self.from_revisions) + + if other_heads: + ancestors = { + r.revision + for r in self.revision_map._get_ancestor_nodes( + self.revision_map.get_revisions(other_heads), check=False + ) + } + from_revisions = list( + set(self.from_revisions).difference(ancestors) + ) + else: + from_revisions = list(self.from_revisions) + + return ( + # delete revs, update from rev, update to rev + list(from_revisions[0:-1]), + from_revisions[-1], + self.to_revisions[0], + ) + + def _unmerge_to_revisions(self, heads: Set[str]) -> Tuple[str, ...]: + other_heads = set(heads).difference([self.revision.revision]) + if other_heads: + ancestors = { + r.revision + for r in self.revision_map._get_ancestor_nodes( + self.revision_map.get_revisions(other_heads), check=False + ) + } + return tuple(set(self.to_revisions).difference(ancestors)) + else: + # for each revision we plan to return, compute its ancestors + # (excluding self), and remove those from the final output since + # they are already accounted for. + ancestors = { + r.revision + for to_revision in self.to_revisions + for r in self.revision_map._get_ancestor_nodes( + self.revision_map.get_revisions(to_revision), check=False + ) + if r.revision != to_revision + } + return tuple(set(self.to_revisions).difference(ancestors)) + + def unmerge_branch_idents( + self, heads: Set[str] + ) -> Tuple[str, str, Tuple[str, ...]]: + to_revisions = self._unmerge_to_revisions(heads) + + return ( + # update from rev, update to rev, insert revs + self.from_revisions[0], + to_revisions[-1], + to_revisions[0:-1], + ) + + def should_create_branch(self, heads: Set[str]) -> bool: + if not self.is_upgrade: + return False + + downrevs = self.revision._normalized_down_revisions + + if not downrevs: + # is a base + return True + else: + # none of our downrevs are present, so... + # we have to insert our version. This is true whether + # or not there is only one downrev, or multiple (in the latter + # case, we're a merge point.) 
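+            # Illustrative example (hypothetical ids): with current heads
+            # {"x1"}, upgrading to revision "y1" whose down-revision "a1" is
+            # on a separate branch means no existing version row can be
+            # UPDATEd, so a new row is INSERTed for "y1".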
+ if not heads.intersection(downrevs): + return True + else: + return False + + def should_merge_branches(self, heads: Set[str]) -> bool: + if not self.is_upgrade: + return False + + downrevs = self.revision._normalized_down_revisions + + if len(downrevs) > 1 and len(heads.intersection(downrevs)) > 1: + return True + + return False + + def should_unmerge_branches(self, heads: Set[str]) -> bool: + if not self.is_downgrade: + return False + + downrevs = self.revision._normalized_down_revisions + + if self.revision.revision in heads and len(downrevs) > 1: + return True + + return False + + def update_version_num(self, heads: Set[str]) -> Tuple[str, str]: + if not self._has_scalar_down_revision: + downrev = heads.intersection( + self.revision._normalized_down_revisions + ) + assert ( + len(downrev) == 1 + ), "Can't do an UPDATE because downrevision is ambiguous" + down_revision = list(downrev)[0] + else: + down_revision = self.revision._normalized_down_revisions[0] + + if self.is_upgrade: + return down_revision, self.revision.revision + else: + return self.revision.revision, down_revision + + @property + def delete_version_num(self) -> str: + return self.revision.revision + + @property + def insert_version_num(self) -> str: + return self.revision.revision + + @property + def info(self) -> MigrationInfo: + return MigrationInfo( + revision_map=self.revision_map, + up_revisions=self.revision.revision, + down_revisions=self.revision._normalized_down_revisions, + is_upgrade=self.is_upgrade, + is_stamp=False, + ) + + +class StampStep(MigrationStep): + def __init__( + self, + from_: Optional[Union[str, Collection[str]]], + to_: Optional[Union[str, Collection[str]]], + is_upgrade: bool, + branch_move: bool, + revision_map: Optional[RevisionMap] = None, + ) -> None: + self.from_: Tuple[str, ...] = util.to_tuple(from_, default=()) + self.to_: Tuple[str, ...] 
= util.to_tuple(to_, default=()) + self.is_upgrade = is_upgrade + self.branch_move = branch_move + self.migration_fn = self.stamp_revision + self.revision_map = revision_map + + doc: Optional[str] = None + + def stamp_revision(self, **kw: Any) -> None: + return None + + def __eq__(self, other): + return ( + isinstance(other, StampStep) + and other.from_revisions == self.from_revisions + and other.to_revisions == self.to_revisions + and other.branch_move == self.branch_move + and self.is_upgrade == other.is_upgrade + ) + + @property + def from_revisions(self): + return self.from_ + + @property + def to_revisions(self) -> Tuple[str, ...]: + return self.to_ + + @property + def from_revisions_no_deps( # type:ignore[override] + self, + ) -> Tuple[str, ...]: + return self.from_ + + @property + def to_revisions_no_deps( # type:ignore[override] + self, + ) -> Tuple[str, ...]: + return self.to_ + + @property + def delete_version_num(self) -> str: + assert len(self.from_) == 1 + return self.from_[0] + + @property + def insert_version_num(self) -> str: + assert len(self.to_) == 1 + return self.to_[0] + + def update_version_num(self, heads: Set[str]) -> Tuple[str, str]: + assert len(self.from_) == 1 + assert len(self.to_) == 1 + return self.from_[0], self.to_[0] + + def merge_branch_idents( + self, heads: Union[Set[str], List[str]] + ) -> Union[Tuple[List[Any], str, str], Tuple[List[str], str, str]]: + return ( + # delete revs, update from rev, update to rev + list(self.from_[0:-1]), + self.from_[-1], + self.to_[0], + ) + + def unmerge_branch_idents( + self, heads: Set[str] + ) -> Tuple[str, str, List[str]]: + return ( + # update from rev, update to rev, insert revs + self.from_[0], + self.to_[-1], + list(self.to_[0:-1]), + ) + + def should_delete_branch(self, heads: Set[str]) -> bool: + # TODO: we probably need to look for self.to_ inside of heads, + # in a similar manner as should_create_branch, however we have + # no tests for this yet (stamp downgrades w/ branches) + return self.is_downgrade and self.branch_move + + def should_create_branch(self, heads: Set[str]) -> Union[Set[str], bool]: + return ( + self.is_upgrade + and (self.branch_move or set(self.from_).difference(heads)) + and set(self.to_).difference(heads) + ) + + def should_merge_branches(self, heads: Set[str]) -> bool: + return len(self.from_) > 1 + + def should_unmerge_branches(self, heads: Set[str]) -> bool: + return len(self.to_) > 1 + + @property + def info(self) -> MigrationInfo: + up, down = ( + (self.to_, self.from_) + if self.is_upgrade + else (self.from_, self.to_) + ) + assert self.revision_map is not None + return MigrationInfo( + revision_map=self.revision_map, + up_revisions=up, + down_revisions=down, + is_upgrade=self.is_upgrade, + is_stamp=True, + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/__init__.py new file mode 100644 index 000000000..d78f3f1dc --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/__init__.py @@ -0,0 +1,4 @@ +from .base import Script +from .base import ScriptDirectory + +__all__ = ["ScriptDirectory", "Script"] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/base.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/base.py new file mode 100644 index 000000000..c3b1c3115 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/base.py @@ -0,0 +1,1057 @@ +from __future__ import 
annotations + +from contextlib import contextmanager +import datetime +import os +from pathlib import Path +import re +import shutil +import sys +from types import ModuleType +from typing import Any +from typing import cast +from typing import Iterator +from typing import List +from typing import Optional +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from . import revision +from . import write_hooks +from .. import util +from ..runtime import migration +from ..util import compat +from ..util import not_none +from ..util.pyfiles import _preserving_path_as_str + +if TYPE_CHECKING: + from .revision import _GetRevArg + from .revision import _RevIdType + from .revision import Revision + from ..config import Config + from ..config import MessagingOptions + from ..config import PostWriteHookConfig + from ..runtime.migration import RevisionStep + from ..runtime.migration import StampStep + +try: + if compat.py39: + from zoneinfo import ZoneInfo + from zoneinfo import ZoneInfoNotFoundError + else: + from backports.zoneinfo import ZoneInfo # type: ignore[import-not-found,no-redef] # noqa: E501 + from backports.zoneinfo import ZoneInfoNotFoundError # type: ignore[no-redef] # noqa: E501 +except ImportError: + ZoneInfo = None # type: ignore[assignment, misc] + +_sourceless_rev_file = re.compile(r"(?!\.\#|__init__)(.*\.py)(c|o)?$") +_only_source_rev_file = re.compile(r"(?!\.\#|__init__)(.*\.py)$") +_legacy_rev = re.compile(r"([a-f0-9]+)\.py$") +_slug_re = re.compile(r"\w+") +_default_file_template = "%(rev)s_%(slug)s" + + +class ScriptDirectory: + """Provides operations upon an Alembic script directory. + + This object is useful to get information as to current revisions, + most notably being able to get at the "head" revision, for schemes + that want to test if the current revision in the database is the most + recent:: + + from alembic.script import ScriptDirectory + from alembic.config import Config + config = Config() + config.set_main_option("script_location", "myapp:migrations") + script = ScriptDirectory.from_config(config) + + head_revision = script.get_current_head() + + + + """ + + def __init__( + self, + dir: Union[str, os.PathLike[str]], # noqa: A002 + file_template: str = _default_file_template, + truncate_slug_length: Optional[int] = 40, + version_locations: Optional[ + Sequence[Union[str, os.PathLike[str]]] + ] = None, + sourceless: bool = False, + output_encoding: str = "utf-8", + timezone: Optional[str] = None, + hooks: list[PostWriteHookConfig] = [], + recursive_version_locations: bool = False, + messaging_opts: MessagingOptions = cast( + "MessagingOptions", util.EMPTY_DICT + ), + ) -> None: + self.dir = _preserving_path_as_str(dir) + self.version_locations = [ + _preserving_path_as_str(p) for p in version_locations or () + ] + self.file_template = file_template + self.truncate_slug_length = truncate_slug_length or 40 + self.sourceless = sourceless + self.output_encoding = output_encoding + self.revision_map = revision.RevisionMap(self._load_revisions) + self.timezone = timezone + self.hooks = hooks + self.recursive_version_locations = recursive_version_locations + self.messaging_opts = messaging_opts + + if not os.access(dir, os.F_OK): + raise util.CommandError( + f"Path doesn't exist: {dir}. Please use " + "the 'init' command to create a new " + "scripts folder." 
+ ) + + @property + def versions(self) -> str: + """return a single version location based on the sole path passed + within version_locations. + + If multiple version locations are configured, an error is raised. + + + """ + return str(self._singular_version_location) + + @util.memoized_property + def _singular_version_location(self) -> Path: + loc = self._version_locations + if len(loc) > 1: + raise util.CommandError("Multiple version_locations present") + else: + return loc[0] + + @util.memoized_property + def _version_locations(self) -> Sequence[Path]: + if self.version_locations: + return [ + util.coerce_resource_to_filename(location).absolute() + for location in self.version_locations + ] + else: + return [Path(self.dir, "versions").absolute()] + + def _load_revisions(self) -> Iterator[Script]: + paths = [vers for vers in self._version_locations if vers.exists()] + + dupes = set() + for vers in paths: + for file_path in Script._list_py_dir(self, vers): + real_path = file_path.resolve() + if real_path in dupes: + util.warn( + f"File {real_path} loaded twice! ignoring. " + "Please ensure version_locations is unique." + ) + continue + dupes.add(real_path) + + script = Script._from_path(self, real_path) + if script is None: + continue + yield script + + @classmethod + def from_config(cls, config: Config) -> ScriptDirectory: + """Produce a new :class:`.ScriptDirectory` given a :class:`.Config` + instance. + + The :class:`.Config` need only have the ``script_location`` key + present. + + """ + script_location = config.get_alembic_option("script_location") + if script_location is None: + raise util.CommandError( + "No 'script_location' key found in configuration." + ) + truncate_slug_length: Optional[int] + tsl = config.get_alembic_option("truncate_slug_length") + if tsl is not None: + truncate_slug_length = int(tsl) + else: + truncate_slug_length = None + + prepend_sys_path = config.get_prepend_sys_paths_list() + if prepend_sys_path: + sys.path[:0] = prepend_sys_path + + rvl = ( + config.get_alembic_option("recursive_version_locations") == "true" + ) + return ScriptDirectory( + util.coerce_resource_to_filename(script_location), + file_template=config.get_alembic_option( + "file_template", _default_file_template + ), + truncate_slug_length=truncate_slug_length, + sourceless=config.get_alembic_option("sourceless") == "true", + output_encoding=config.get_alembic_option( + "output_encoding", "utf-8" + ), + version_locations=config.get_version_locations_list(), + timezone=config.get_alembic_option("timezone"), + hooks=config.get_hooks_list(), + recursive_version_locations=rvl, + messaging_opts=config.messaging_opts, + ) + + @contextmanager + def _catch_revision_errors( + self, + ancestor: Optional[str] = None, + multiple_heads: Optional[str] = None, + start: Optional[str] = None, + end: Optional[str] = None, + resolution: Optional[str] = None, + ) -> Iterator[None]: + try: + yield + except revision.RangeNotAncestorError as rna: + if start is None: + start = cast(Any, rna.lower) + if end is None: + end = cast(Any, rna.upper) + if not ancestor: + ancestor = ( + "Requested range %(start)s:%(end)s does not refer to " + "ancestor/descendant revisions along the same branch" + ) + ancestor = ancestor % {"start": start, "end": end} + raise util.CommandError(ancestor) from rna + except revision.MultipleHeads as mh: + if not multiple_heads: + multiple_heads = ( + "Multiple head revisions are present for given " + "argument '%(head_arg)s'; please " + "specify a specific target revision, " + 
"'@%(head_arg)s' to " + "narrow to a specific head, or 'heads' for all heads" + ) + multiple_heads = multiple_heads % { + "head_arg": end or mh.argument, + "heads": util.format_as_comma(mh.heads), + } + raise util.CommandError(multiple_heads) from mh + except revision.ResolutionError as re: + if resolution is None: + resolution = "Can't locate revision identified by '%s'" % ( + re.argument + ) + raise util.CommandError(resolution) from re + except revision.RevisionError as err: + raise util.CommandError(err.args[0]) from err + + def walk_revisions( + self, base: str = "base", head: str = "heads" + ) -> Iterator[Script]: + """Iterate through all revisions. + + :param base: the base revision, or "base" to start from the + empty revision. + + :param head: the head revision; defaults to "heads" to indicate + all head revisions. May also be "head" to indicate a single + head revision. + + """ + with self._catch_revision_errors(start=base, end=head): + for rev in self.revision_map.iterate_revisions( + head, base, inclusive=True, assert_relative_length=False + ): + yield cast(Script, rev) + + def get_revisions(self, id_: _GetRevArg) -> Tuple[Script, ...]: + """Return the :class:`.Script` instance with the given rev identifier, + symbolic name, or sequence of identifiers. + + """ + with self._catch_revision_errors(): + return cast( + Tuple[Script, ...], + self.revision_map.get_revisions(id_), + ) + + def get_all_current(self, id_: Tuple[str, ...]) -> Set[Script]: + with self._catch_revision_errors(): + return cast(Set[Script], self.revision_map._get_all_current(id_)) + + def get_revision(self, id_: str) -> Script: + """Return the :class:`.Script` instance with the given rev id. + + .. seealso:: + + :meth:`.ScriptDirectory.get_revisions` + + """ + + with self._catch_revision_errors(): + return cast(Script, self.revision_map.get_revision(id_)) + + def as_revision_number( + self, id_: Optional[str] + ) -> Optional[Union[str, Tuple[str, ...]]]: + """Convert a symbolic revision, i.e. 'head' or 'base', into + an actual revision number.""" + + with self._catch_revision_errors(): + rev, branch_name = self.revision_map._resolve_revision_number(id_) + + if not rev: + # convert () to None + return None + elif id_ == "heads": + return rev + else: + return rev[0] + + def iterate_revisions( + self, + upper: Union[str, Tuple[str, ...], None], + lower: Union[str, Tuple[str, ...], None], + **kw: Any, + ) -> Iterator[Script]: + """Iterate through script revisions, starting at the given + upper revision identifier and ending at the lower. + + The traversal uses strictly the `down_revision` + marker inside each migration script, so + it is a requirement that upper >= lower, + else you'll get nothing back. + + The iterator yields :class:`.Script` objects. + + .. seealso:: + + :meth:`.RevisionMap.iterate_revisions` + + """ + return cast( + Iterator[Script], + self.revision_map.iterate_revisions(upper, lower, **kw), + ) + + def get_current_head(self) -> Optional[str]: + """Return the current head revision. + + If the script directory has multiple heads + due to branching, an error is raised; + :meth:`.ScriptDirectory.get_heads` should be + preferred. + + :return: a string revision number. + + .. seealso:: + + :meth:`.ScriptDirectory.get_heads` + + """ + with self._catch_revision_errors( + multiple_heads=( + "The script directory has multiple heads (due to branching)." + "Please use get_heads(), or merge the branches using " + "alembic merge." 
+ ) + ): + return self.revision_map.get_current_head() + + def get_heads(self) -> List[str]: + """Return all "versioned head" revisions as strings. + + This is normally a list of length one, + unless branches are present. The + :meth:`.ScriptDirectory.get_current_head()` method + can be used normally when a script directory + has only one head. + + :return: a tuple of string revision numbers. + """ + return list(self.revision_map.heads) + + def get_base(self) -> Optional[str]: + """Return the "base" revision as a string. + + This is the revision number of the script that + has a ``down_revision`` of None. + + If the script directory has multiple bases, an error is raised; + :meth:`.ScriptDirectory.get_bases` should be + preferred. + + """ + bases = self.get_bases() + if len(bases) > 1: + raise util.CommandError( + "The script directory has multiple bases. " + "Please use get_bases()." + ) + elif bases: + return bases[0] + else: + return None + + def get_bases(self) -> List[str]: + """return all "base" revisions as strings. + + This is the revision number of all scripts that + have a ``down_revision`` of None. + + """ + return list(self.revision_map.bases) + + def _upgrade_revs( + self, destination: str, current_rev: str + ) -> List[RevisionStep]: + with self._catch_revision_errors( + ancestor="Destination %(end)s is not a valid upgrade " + "target from current head(s)", + end=destination, + ): + revs = self.iterate_revisions( + destination, current_rev, implicit_base=True + ) + return [ + migration.MigrationStep.upgrade_from_script( + self.revision_map, script + ) + for script in reversed(list(revs)) + ] + + def _downgrade_revs( + self, destination: str, current_rev: Optional[str] + ) -> List[RevisionStep]: + with self._catch_revision_errors( + ancestor="Destination %(end)s is not a valid downgrade " + "target from current head(s)", + end=destination, + ): + revs = self.iterate_revisions( + current_rev, destination, select_for_downgrade=True + ) + return [ + migration.MigrationStep.downgrade_from_script( + self.revision_map, script + ) + for script in revs + ] + + def _stamp_revs( + self, revision: _RevIdType, heads: _RevIdType + ) -> List[StampStep]: + with self._catch_revision_errors( + multiple_heads="Multiple heads are present; please specify a " + "single target revision" + ): + heads_revs = self.get_revisions(heads) + + steps = [] + + if not revision: + revision = "base" + + filtered_heads: List[Script] = [] + for rev in util.to_tuple(revision): + if rev: + filtered_heads.extend( + self.revision_map.filter_for_lineage( + cast(Sequence[Script], heads_revs), + rev, + include_dependencies=True, + ) + ) + filtered_heads = util.unique_list(filtered_heads) + + dests = self.get_revisions(revision) or [None] + + for dest in dests: + if dest is None: + # dest is 'base'. Return a "delete branch" migration + # for all applicable heads. + steps.extend( + [ + migration.StampStep( + head.revision, + None, + False, + True, + self.revision_map, + ) + for head in filtered_heads + ] + ) + continue + elif dest in filtered_heads: + # the dest is already in the version table, do nothing. + continue + + # figure out if the dest is a descendant or an + # ancestor of the selected nodes + descendants = set( + self.revision_map._get_descendant_nodes([dest]) + ) + ancestors = set(self.revision_map._get_ancestor_nodes([dest])) + + if descendants.intersection(filtered_heads): + # heads are above the target, so this is a downgrade. + # we can treat them as a "merge", single step. 
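+                # Illustrative example (hypothetical ids): stamping branch
+                # heads "c1" and "c2" down to their common ancestor "base1"
+                # collapses both version-table rows into a single row via
+                # one merge-style StampStep.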
+ assert not ancestors.intersection(filtered_heads) + todo_heads = [head.revision for head in filtered_heads] + step = migration.StampStep( + todo_heads, + dest.revision, + False, + False, + self.revision_map, + ) + steps.append(step) + continue + elif ancestors.intersection(filtered_heads): + # heads are below the target, so this is an upgrade. + # we can treat them as a "merge", single step. + todo_heads = [head.revision for head in filtered_heads] + step = migration.StampStep( + todo_heads, + dest.revision, + True, + False, + self.revision_map, + ) + steps.append(step) + continue + else: + # destination is in a branch not represented, + # treat it as new branch + step = migration.StampStep( + (), dest.revision, True, True, self.revision_map + ) + steps.append(step) + continue + + return steps + + def run_env(self) -> None: + """Run the script environment. + + This basically runs the ``env.py`` script present + in the migration environment. It is called exclusively + by the command functions in :mod:`alembic.command`. + + + """ + util.load_python_file(self.dir, "env.py") + + @property + def env_py_location(self) -> str: + return str(Path(self.dir, "env.py")) + + def _append_template(self, src: Path, dest: Path, **kw: Any) -> None: + with util.status( + f"Appending to existing {dest.absolute()}", + **self.messaging_opts, + ): + util.template_to_file( + src, + dest, + self.output_encoding, + append_with_newlines=True, + **kw, + ) + + def _generate_template(self, src: Path, dest: Path, **kw: Any) -> None: + with util.status( + f"Generating {dest.absolute()}", **self.messaging_opts + ): + util.template_to_file(src, dest, self.output_encoding, **kw) + + def _copy_file(self, src: Path, dest: Path) -> None: + with util.status( + f"Generating {dest.absolute()}", **self.messaging_opts + ): + shutil.copy(src, dest) + + def _ensure_directory(self, path: Path) -> None: + path = path.absolute() + if not path.exists(): + with util.status( + f"Creating directory {path}", **self.messaging_opts + ): + os.makedirs(path) + + def _generate_create_date(self) -> datetime.datetime: + if self.timezone is not None: + if ZoneInfo is None: + raise util.CommandError( + "Python >= 3.9 is required for timezone support or " + "the 'backports.zoneinfo' package must be installed." + ) + # First, assume correct capitalization + try: + tzinfo = ZoneInfo(self.timezone) + except ZoneInfoNotFoundError: + tzinfo = None + if tzinfo is None: + try: + tzinfo = ZoneInfo(self.timezone.upper()) + except ZoneInfoNotFoundError: + raise util.CommandError( + "Can't locate timezone: %s" % self.timezone + ) from None + + create_date = datetime.datetime.now( + tz=datetime.timezone.utc + ).astimezone(tzinfo) + else: + create_date = datetime.datetime.now() + return create_date + + def generate_revision( + self, + revid: str, + message: Optional[str], + head: Optional[_RevIdType] = None, + splice: Optional[bool] = False, + branch_labels: Optional[_RevIdType] = None, + version_path: Union[str, os.PathLike[str], None] = None, + file_template: Optional[str] = None, + depends_on: Optional[_RevIdType] = None, + **kw: Any, + ) -> Optional[Script]: + """Generate a new revision file. + + This runs the ``script.py.mako`` template, given + template arguments, and creates a new file. + + :param revid: String revision id. Typically this + comes from ``alembic.util.rev_id()``. + :param message: the revision message, the one passed + by the -m argument to the ``revision`` command. + :param head: the head revision to generate against. 
Defaults + to the current "head" if no branches are present, else raises + an exception. + :param splice: if True, allow the "head" version to not be an + actual head; otherwise, the selected head must be a head + (e.g. endpoint) revision. + + """ + if head is None: + head = "head" + + try: + Script.verify_rev_id(revid) + except revision.RevisionError as err: + raise util.CommandError(err.args[0]) from err + + with self._catch_revision_errors( + multiple_heads=( + "Multiple heads are present; please specify the head " + "revision on which the new revision should be based, " + "or perform a merge." + ) + ): + heads = cast( + Tuple[Optional["Revision"], ...], + self.revision_map.get_revisions(head), + ) + for h in heads: + assert h != "base" # type: ignore[comparison-overlap] + + if len(set(heads)) != len(heads): + raise util.CommandError("Duplicate head revisions specified") + + create_date = self._generate_create_date() + + if version_path is None: + if len(self._version_locations) > 1: + for head_ in heads: + if head_ is not None: + assert isinstance(head_, Script) + version_path = head_._script_path.parent + break + else: + raise util.CommandError( + "Multiple version locations present, " + "please specify --version-path" + ) + else: + version_path = self._singular_version_location + else: + version_path = Path(version_path) + + assert isinstance(version_path, Path) + norm_path = version_path.absolute() + for vers_path in self._version_locations: + if vers_path.absolute() == norm_path: + break + else: + raise util.CommandError( + f"Path {version_path} is not represented in current " + "version locations" + ) + + if self.version_locations: + self._ensure_directory(version_path) + + path = self._rev_path(version_path, revid, message, create_date) + + if not splice: + for head_ in heads: + if head_ is not None and not head_.is_head: + raise util.CommandError( + "Revision %s is not a head revision; please specify " + "--splice to create a new branch from this revision" + % head_.revision + ) + + resolved_depends_on: Optional[List[str]] + if depends_on: + with self._catch_revision_errors(): + resolved_depends_on = [ + ( + dep + if dep in rev.branch_labels # maintain branch labels + else rev.revision + ) # resolve partial revision identifiers + for rev, dep in [ + (not_none(self.revision_map.get_revision(dep)), dep) + for dep in util.to_list(depends_on) + ] + ] + else: + resolved_depends_on = None + + self._generate_template( + Path(self.dir, "script.py.mako"), + path, + up_revision=str(revid), + down_revision=revision.tuple_rev_as_scalar( + tuple(h.revision if h is not None else None for h in heads) + ), + branch_labels=util.to_tuple(branch_labels), + depends_on=revision.tuple_rev_as_scalar(resolved_depends_on), + create_date=create_date, + comma=util.format_as_comma, + message=message if message is not None else ("empty message"), + **kw, + ) + + post_write_hooks = self.hooks + if post_write_hooks: + write_hooks._run_hooks(path, post_write_hooks) + + try: + script = Script._from_path(self, path) + except revision.RevisionError as err: + raise util.CommandError(err.args[0]) from err + if script is None: + return None + if branch_labels and not script.branch_labels: + raise util.CommandError( + "Version %s specified branch_labels %s, however the " + "migration file %s does not have them; have you upgraded " + "your script.py.mako to include the " + "'branch_labels' section?" 
+ % (script.revision, branch_labels, script.path) + ) + self.revision_map.add_revision(script) + return script + + def _rev_path( + self, + path: Union[str, os.PathLike[str]], + rev_id: str, + message: Optional[str], + create_date: datetime.datetime, + ) -> Path: + epoch = int(create_date.timestamp()) + slug = "_".join(_slug_re.findall(message or "")).lower() + if len(slug) > self.truncate_slug_length: + slug = slug[: self.truncate_slug_length].rsplit("_", 1)[0] + "_" + filename = "%s.py" % ( + self.file_template + % { + "rev": rev_id, + "slug": slug, + "epoch": epoch, + "year": create_date.year, + "month": create_date.month, + "day": create_date.day, + "hour": create_date.hour, + "minute": create_date.minute, + "second": create_date.second, + } + ) + return Path(path) / filename + + +class Script(revision.Revision): + """Represent a single revision file in a ``versions/`` directory. + + The :class:`.Script` instance is returned by methods + such as :meth:`.ScriptDirectory.iterate_revisions`. + + """ + + def __init__( + self, + module: ModuleType, + rev_id: str, + path: Union[str, os.PathLike[str]], + ): + self.module = module + self.path = _preserving_path_as_str(path) + super().__init__( + rev_id, + module.down_revision, + branch_labels=util.to_tuple( + getattr(module, "branch_labels", None), default=() + ), + dependencies=util.to_tuple( + getattr(module, "depends_on", None), default=() + ), + ) + + module: ModuleType + """The Python module representing the actual script itself.""" + + path: str + """Filesystem path of the script.""" + + @property + def _script_path(self) -> Path: + return Path(self.path) + + _db_current_indicator: Optional[bool] = None + """Utility variable which when set will cause string output to indicate + this is a "current" version in some database""" + + @property + def doc(self) -> str: + """Return the docstring given in the script.""" + + return re.split("\n\n", self.longdoc)[0] + + @property + def longdoc(self) -> str: + """Return the docstring given in the script.""" + + doc = self.module.__doc__ + if doc: + if hasattr(self.module, "_alembic_source_encoding"): + doc = doc.decode( # type: ignore[attr-defined] + self.module._alembic_source_encoding + ) + return doc.strip() + else: + return "" + + @property + def log_entry(self) -> str: + entry = "Rev: %s%s%s%s%s\n" % ( + self.revision, + " (head)" if self.is_head else "", + " (branchpoint)" if self.is_branch_point else "", + " (mergepoint)" if self.is_merge_point else "", + " (current)" if self._db_current_indicator else "", + ) + if self.is_merge_point: + entry += "Merges: %s\n" % (self._format_down_revision(),) + else: + entry += "Parent: %s\n" % (self._format_down_revision(),) + + if self.dependencies: + entry += "Also depends on: %s\n" % ( + util.format_as_comma(self.dependencies) + ) + + if self.is_branch_point: + entry += "Branches into: %s\n" % ( + util.format_as_comma(self.nextrev) + ) + + if self.branch_labels: + entry += "Branch names: %s\n" % ( + util.format_as_comma(self.branch_labels), + ) + + entry += "Path: %s\n" % (self.path,) + + entry += "\n%s\n" % ( + "\n".join(" %s" % para for para in self.longdoc.splitlines()) + ) + return entry + + def __str__(self) -> str: + return "%s -> %s%s%s%s, %s" % ( + self._format_down_revision(), + self.revision, + " (head)" if self.is_head else "", + " (branchpoint)" if self.is_branch_point else "", + " (mergepoint)" if self.is_merge_point else "", + self.doc, + ) + + def _head_only( + self, + include_branches: bool = False, + include_doc: bool = False, + 
include_parents: bool = False, + tree_indicators: bool = True, + head_indicators: bool = True, + ) -> str: + text = self.revision + if include_parents: + if self.dependencies: + text = "%s (%s) -> %s" % ( + self._format_down_revision(), + util.format_as_comma(self.dependencies), + text, + ) + else: + text = "%s -> %s" % (self._format_down_revision(), text) + assert text is not None + if include_branches and self.branch_labels: + text += " (%s)" % util.format_as_comma(self.branch_labels) + if head_indicators or tree_indicators: + text += "%s%s%s" % ( + " (head)" if self._is_real_head else "", + ( + " (effective head)" + if self.is_head and not self._is_real_head + else "" + ), + " (current)" if self._db_current_indicator else "", + ) + if tree_indicators: + text += "%s%s" % ( + " (branchpoint)" if self.is_branch_point else "", + " (mergepoint)" if self.is_merge_point else "", + ) + if include_doc: + text += ", %s" % self.doc + return text + + def cmd_format( + self, + verbose: bool, + include_branches: bool = False, + include_doc: bool = False, + include_parents: bool = False, + tree_indicators: bool = True, + ) -> str: + if verbose: + return self.log_entry + else: + return self._head_only( + include_branches, include_doc, include_parents, tree_indicators + ) + + def _format_down_revision(self) -> str: + if not self.down_revision: + return "" + else: + return util.format_as_comma(self._versioned_down_revisions) + + @classmethod + def _list_py_dir( + cls, scriptdir: ScriptDirectory, path: Path + ) -> List[Path]: + paths = [] + for root, dirs, files in compat.path_walk(path, top_down=True): + if root.name.endswith("__pycache__"): + # a special case - we may include these files + # if a `sourceless` option is specified + continue + + for filename in sorted(files): + paths.append(root / filename) + + if scriptdir.sourceless: + # look for __pycache__ + py_cache_path = root / "__pycache__" + if py_cache_path.exists(): + # add all files from __pycache__ whose filename is not + # already in the names we got from the version directory. + # add as relative paths including __pycache__ token + names = { + Path(filename).name.split(".")[0] for filename in files + } + paths.extend( + py_cache_path / pyc + for pyc in py_cache_path.iterdir() + if pyc.name.split(".")[0] not in names + ) + + if not scriptdir.recursive_version_locations: + break + + # the real script order is defined by revision, + # but it may be undefined if there are many files with a same + # `down_revision`, for a better user experience (ex. 
debugging), + # we use a deterministic order + dirs.sort() + + return paths + + @classmethod + def _from_path( + cls, scriptdir: ScriptDirectory, path: Union[str, os.PathLike[str]] + ) -> Optional[Script]: + + path = Path(path) + dir_, filename = path.parent, path.name + + if scriptdir.sourceless: + py_match = _sourceless_rev_file.match(filename) + else: + py_match = _only_source_rev_file.match(filename) + + if not py_match: + return None + + py_filename = py_match.group(1) + + if scriptdir.sourceless: + is_c = py_match.group(2) == "c" + is_o = py_match.group(2) == "o" + else: + is_c = is_o = False + + if is_o or is_c: + py_exists = (dir_ / py_filename).exists() + pyc_exists = (dir_ / (py_filename + "c")).exists() + + # prefer .py over .pyc because we'd like to get the + # source encoding; prefer .pyc over .pyo because we'd like to + # have the docstrings which a -OO file would not have + if py_exists or is_o and pyc_exists: + return None + + module = util.load_python_file(dir_, filename) + + if not hasattr(module, "revision"): + # attempt to get the revision id from the script name, + # this for legacy only + m = _legacy_rev.match(filename) + if not m: + raise util.CommandError( + "Could not determine revision id from " + f"filename {filename}. " + "Be sure the 'revision' variable is " + "declared inside the script (please see 'Upgrading " + "from Alembic 0.1 to 0.2' in the documentation)." + ) + else: + revision = m.group(1) + else: + revision = module.revision + return Script(module, revision, dir_ / filename) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/revision.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/revision.py new file mode 100644 index 000000000..587e90497 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/revision.py @@ -0,0 +1,1728 @@ +from __future__ import annotations + +import collections +import re +from typing import Any +from typing import Callable +from typing import cast +from typing import Collection +from typing import Deque +from typing import Dict +from typing import FrozenSet +from typing import Iterable +from typing import Iterator +from typing import List +from typing import Optional +from typing import overload +from typing import Protocol +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +from sqlalchemy import util as sqlautil + +from .. import util +from ..util import not_none + +if TYPE_CHECKING: + from typing import Literal + +_RevIdType = Union[str, List[str], Tuple[str, ...]] +_GetRevArg = Union[ + str, + Iterable[Optional[str]], + Iterable[str], +] +_RevisionIdentifierType = Union[str, Tuple[str, ...], None] +_RevisionOrStr = Union["Revision", str] +_RevisionOrBase = Union["Revision", "Literal['base']"] +_InterimRevisionMapType = Dict[str, "Revision"] +_RevisionMapType = Dict[Union[None, str, Tuple[()]], Optional["Revision"]] +_T = TypeVar("_T") +_TR = TypeVar("_TR", bound=Optional[_RevisionOrStr]) + +_relative_destination = re.compile(r"(?:(.+?)@)?(\w+)?((?:\+|-)\d+)") +_revision_illegal_chars = ["@", "-", "+"] + + +class _CollectRevisionsProtocol(Protocol): + def __call__( + self, + upper: _RevisionIdentifierType, + lower: _RevisionIdentifierType, + inclusive: bool, + implicit_base: bool, + assert_relative_length: bool, + ) -> Tuple[Set[Revision], Tuple[Optional[_RevisionOrBase], ...]]: ... 
+ + +class RevisionError(Exception): + pass + + +class RangeNotAncestorError(RevisionError): + def __init__( + self, lower: _RevisionIdentifierType, upper: _RevisionIdentifierType + ) -> None: + self.lower = lower + self.upper = upper + super().__init__( + "Revision %s is not an ancestor of revision %s" + % (lower or "base", upper or "base") + ) + + +class MultipleHeads(RevisionError): + def __init__(self, heads: Sequence[str], argument: Optional[str]) -> None: + self.heads = heads + self.argument = argument + super().__init__( + "Multiple heads are present for given argument '%s'; " + "%s" % (argument, ", ".join(heads)) + ) + + +class ResolutionError(RevisionError): + def __init__(self, message: str, argument: str) -> None: + super().__init__(message) + self.argument = argument + + +class CycleDetected(RevisionError): + kind = "Cycle" + + def __init__(self, revisions: Sequence[str]) -> None: + self.revisions = revisions + super().__init__( + "%s is detected in revisions (%s)" + % (self.kind, ", ".join(revisions)) + ) + + +class DependencyCycleDetected(CycleDetected): + kind = "Dependency cycle" + + def __init__(self, revisions: Sequence[str]) -> None: + super().__init__(revisions) + + +class LoopDetected(CycleDetected): + kind = "Self-loop" + + def __init__(self, revision: str) -> None: + super().__init__([revision]) + + +class DependencyLoopDetected(DependencyCycleDetected, LoopDetected): + kind = "Dependency self-loop" + + def __init__(self, revision: Sequence[str]) -> None: + super().__init__(revision) + + +class RevisionMap: + """Maintains a map of :class:`.Revision` objects. + + :class:`.RevisionMap` is used by :class:`.ScriptDirectory` to maintain + and traverse the collection of :class:`.Script` objects, which are + themselves instances of :class:`.Revision`. + + """ + + def __init__(self, generator: Callable[[], Iterable[Revision]]) -> None: + """Construct a new :class:`.RevisionMap`. + + :param generator: a zero-arg callable that will generate an iterable + of :class:`.Revision` instances to be used. These are typically + :class:`.Script` subclasses within regular Alembic use. + + """ + self._generator = generator + + @util.memoized_property + def heads(self) -> Tuple[str, ...]: + """All "head" revisions as strings. + + This is normally a tuple of length one, + unless unmerged branches are present. + + :return: a tuple of string revision numbers. + + """ + self._revision_map + return self.heads + + @util.memoized_property + def bases(self) -> Tuple[str, ...]: + """All "base" revisions as strings. + + These are revisions that have a ``down_revision`` of None, + or empty tuple. + + :return: a tuple of string revision numbers. + + """ + self._revision_map + return self.bases + + @util.memoized_property + def _real_heads(self) -> Tuple[str, ...]: + """All "real" head revisions as strings. + + :return: a tuple of string revision numbers. + + """ + self._revision_map + return self._real_heads + + @util.memoized_property + def _real_bases(self) -> Tuple[str, ...]: + """All "real" base revisions as strings. + + :return: a tuple of string revision numbers. + + """ + self._revision_map + return self._real_bases + + @util.memoized_property + def _revision_map(self) -> _RevisionMapType: + """memoized attribute, initializes the revision map from the + initial collection. 
+ + """ + # Ordering required for some tests to pass (but not required in + # general) + map_: _InterimRevisionMapType = sqlautil.OrderedDict() + + heads: Set[Revision] = sqlautil.OrderedSet() + _real_heads: Set[Revision] = sqlautil.OrderedSet() + bases: Tuple[Revision, ...] = () + _real_bases: Tuple[Revision, ...] = () + + has_branch_labels = set() + all_revisions = set() + + for revision in self._generator(): + all_revisions.add(revision) + + if revision.revision in map_: + util.warn( + "Revision %s is present more than once" % revision.revision + ) + map_[revision.revision] = revision + if revision.branch_labels: + has_branch_labels.add(revision) + + heads.add(revision) + _real_heads.add(revision) + if revision.is_base: + bases += (revision,) + if revision._is_real_base: + _real_bases += (revision,) + + # add the branch_labels to the map_. We'll need these + # to resolve the dependencies. + rev_map = map_.copy() + self._map_branch_labels( + has_branch_labels, cast(_RevisionMapType, map_) + ) + + # resolve dependency names from branch labels and symbolic + # names + self._add_depends_on(all_revisions, cast(_RevisionMapType, map_)) + + for rev in map_.values(): + for downrev in rev._all_down_revisions: + if downrev not in map_: + util.warn( + "Revision %s referenced from %s is not present" + % (downrev, rev) + ) + down_revision = map_[downrev] + down_revision.add_nextrev(rev) + if downrev in rev._versioned_down_revisions: + heads.discard(down_revision) + _real_heads.discard(down_revision) + + # once the map has downrevisions populated, the dependencies + # can be further refined to include only those which are not + # already ancestors + self._normalize_depends_on(all_revisions, cast(_RevisionMapType, map_)) + self._detect_cycles(rev_map, heads, bases, _real_heads, _real_bases) + + revision_map: _RevisionMapType = dict(map_.items()) + revision_map[None] = revision_map[()] = None + self.heads = tuple(rev.revision for rev in heads) + self._real_heads = tuple(rev.revision for rev in _real_heads) + self.bases = tuple(rev.revision for rev in bases) + self._real_bases = tuple(rev.revision for rev in _real_bases) + + self._add_branches(has_branch_labels, revision_map) + return revision_map + + def _detect_cycles( + self, + rev_map: _InterimRevisionMapType, + heads: Set[Revision], + bases: Tuple[Revision, ...], + _real_heads: Set[Revision], + _real_bases: Tuple[Revision, ...], + ) -> None: + if not rev_map: + return + if not heads or not bases: + raise CycleDetected(list(rev_map)) + total_space = { + rev.revision + for rev in self._iterate_related_revisions( + lambda r: r._versioned_down_revisions, + heads, + map_=cast(_RevisionMapType, rev_map), + ) + }.intersection( + rev.revision + for rev in self._iterate_related_revisions( + lambda r: r.nextrev, + bases, + map_=cast(_RevisionMapType, rev_map), + ) + ) + deleted_revs = set(rev_map.keys()) - total_space + if deleted_revs: + raise CycleDetected(sorted(deleted_revs)) + + if not _real_heads or not _real_bases: + raise DependencyCycleDetected(list(rev_map)) + total_space = { + rev.revision + for rev in self._iterate_related_revisions( + lambda r: r._all_down_revisions, + _real_heads, + map_=cast(_RevisionMapType, rev_map), + ) + }.intersection( + rev.revision + for rev in self._iterate_related_revisions( + lambda r: r._all_nextrev, + _real_bases, + map_=cast(_RevisionMapType, rev_map), + ) + ) + deleted_revs = set(rev_map.keys()) - total_space + if deleted_revs: + raise DependencyCycleDetected(sorted(deleted_revs)) + + def _map_branch_labels( + 
self, revisions: Collection[Revision], map_: _RevisionMapType + ) -> None: + for revision in revisions: + if revision.branch_labels: + assert revision._orig_branch_labels is not None + for branch_label in revision._orig_branch_labels: + if branch_label in map_: + map_rev = map_[branch_label] + assert map_rev is not None + raise RevisionError( + "Branch name '%s' in revision %s already " + "used by revision %s" + % ( + branch_label, + revision.revision, + map_rev.revision, + ) + ) + map_[branch_label] = revision + + def _add_branches( + self, revisions: Collection[Revision], map_: _RevisionMapType + ) -> None: + for revision in revisions: + if revision.branch_labels: + revision.branch_labels.update(revision.branch_labels) + for node in self._get_descendant_nodes( + [revision], map_, include_dependencies=False + ): + node.branch_labels.update(revision.branch_labels) + + parent = node + while ( + parent + and not parent._is_real_branch_point + and not parent.is_merge_point + ): + parent.branch_labels.update(revision.branch_labels) + if parent.down_revision: + parent = map_[parent.down_revision] + else: + break + + def _add_depends_on( + self, revisions: Collection[Revision], map_: _RevisionMapType + ) -> None: + """Resolve the 'dependencies' for each revision in a collection + in terms of actual revision ids, as opposed to branch labels or other + symbolic names. + + The collection is then assigned to the _resolved_dependencies + attribute on each revision object. + + """ + + for revision in revisions: + if revision.dependencies: + deps = [ + map_[dep] for dep in util.to_tuple(revision.dependencies) + ] + revision._resolved_dependencies = tuple( + [d.revision for d in deps if d is not None] + ) + else: + revision._resolved_dependencies = () + + def _normalize_depends_on( + self, revisions: Collection[Revision], map_: _RevisionMapType + ) -> None: + """Create a collection of "dependencies" that omits dependencies + that are already ancestor nodes for each revision in a given + collection. + + This builds upon the _resolved_dependencies collection created in the + _add_depends_on() method, looking in the fully populated revision map + for ancestors, and omitting them as the _resolved_dependencies + collection as it is copied to a new collection. The new collection is + then assigned to the _normalized_resolved_dependencies attribute on + each revision object. + + The collection is then used to determine the immediate "down revision" + identifiers for this revision. + + """ + + for revision in revisions: + if revision._resolved_dependencies: + normalized_resolved = set(revision._resolved_dependencies) + for rev in self._get_ancestor_nodes( + [revision], + include_dependencies=False, + map_=map_, + ): + if rev is revision: + continue + elif rev._resolved_dependencies: + normalized_resolved.difference_update( + rev._resolved_dependencies + ) + + revision._normalized_resolved_dependencies = tuple( + normalized_resolved + ) + else: + revision._normalized_resolved_dependencies = () + + def add_revision(self, revision: Revision, _replace: bool = False) -> None: + """add a single revision to an existing map. + + This method is for single-revision use cases, it's not + appropriate for fully populating an entire revision map. 
+ + """ + map_ = self._revision_map + if not _replace and revision.revision in map_: + util.warn( + "Revision %s is present more than once" % revision.revision + ) + elif _replace and revision.revision not in map_: + raise Exception("revision %s not in map" % revision.revision) + + map_[revision.revision] = revision + + revisions = [revision] + self._add_branches(revisions, map_) + self._map_branch_labels(revisions, map_) + self._add_depends_on(revisions, map_) + + if revision.is_base: + self.bases += (revision.revision,) + if revision._is_real_base: + self._real_bases += (revision.revision,) + + for downrev in revision._all_down_revisions: + if downrev not in map_: + util.warn( + "Revision %s referenced from %s is not present" + % (downrev, revision) + ) + not_none(map_[downrev]).add_nextrev(revision) + + self._normalize_depends_on(revisions, map_) + + if revision._is_real_head: + self._real_heads = tuple( + head + for head in self._real_heads + if head + not in set(revision._all_down_revisions).union( + [revision.revision] + ) + ) + (revision.revision,) + if revision.is_head: + self.heads = tuple( + head + for head in self.heads + if head + not in set(revision._versioned_down_revisions).union( + [revision.revision] + ) + ) + (revision.revision,) + + def get_current_head( + self, branch_label: Optional[str] = None + ) -> Optional[str]: + """Return the current head revision. + + If the script directory has multiple heads + due to branching, an error is raised; + :meth:`.ScriptDirectory.get_heads` should be + preferred. + + :param branch_label: optional branch name which will limit the + heads considered to those which include that branch_label. + + :return: a string revision number. + + .. seealso:: + + :meth:`.ScriptDirectory.get_heads` + + """ + current_heads: Sequence[str] = self.heads + if branch_label: + current_heads = self.filter_for_lineage( + current_heads, branch_label + ) + if len(current_heads) > 1: + raise MultipleHeads( + current_heads, + "%s@head" % branch_label if branch_label else "head", + ) + + if current_heads: + return current_heads[0] + else: + return None + + def _get_base_revisions(self, identifier: str) -> Tuple[str, ...]: + return self.filter_for_lineage(self.bases, identifier) + + def get_revisions( + self, id_: Optional[_GetRevArg] + ) -> Tuple[Optional[_RevisionOrBase], ...]: + """Return the :class:`.Revision` instances with the given rev id + or identifiers. + + May be given a single identifier, a sequence of identifiers, or the + special symbols "head" or "base". The result is a tuple of one + or more identifiers, or an empty tuple in the case of "base". + + In the cases where 'head', 'heads' is requested and the + revision map is empty, returns an empty tuple. + + Supports partial identifiers, where the given identifier + is matched against all identifiers that start with the given + characters; if there is exactly one match, that determines the + full revision. 
+ + """ + + if isinstance(id_, (list, tuple, set, frozenset)): + return sum([self.get_revisions(id_elem) for id_elem in id_], ()) + else: + resolved_id, branch_label = self._resolve_revision_number(id_) + if len(resolved_id) == 1: + try: + rint = int(resolved_id[0]) + if rint < 0: + # branch@-n -> walk down from heads + select_heads = self.get_revisions("heads") + if branch_label is not None: + select_heads = tuple( + head + for head in select_heads + if branch_label + in is_revision(head).branch_labels + ) + return tuple( + self._walk(head, steps=rint) + for head in select_heads + ) + except ValueError: + # couldn't resolve as integer + pass + return tuple( + self._revision_for_ident(rev_id, branch_label) + for rev_id in resolved_id + ) + + def get_revision(self, id_: Optional[str]) -> Optional[Revision]: + """Return the :class:`.Revision` instance with the given rev id. + + If a symbolic name such as "head" or "base" is given, resolves + the identifier into the current head or base revision. If the symbolic + name refers to multiples, :class:`.MultipleHeads` is raised. + + Supports partial identifiers, where the given identifier + is matched against all identifiers that start with the given + characters; if there is exactly one match, that determines the + full revision. + + """ + + resolved_id, branch_label = self._resolve_revision_number(id_) + if len(resolved_id) > 1: + raise MultipleHeads(resolved_id, id_) + + resolved: Union[str, Tuple[()]] = resolved_id[0] if resolved_id else () + return self._revision_for_ident(resolved, branch_label) + + def _resolve_branch(self, branch_label: str) -> Optional[Revision]: + try: + branch_rev = self._revision_map[branch_label] + except KeyError: + try: + nonbranch_rev = self._revision_for_ident(branch_label) + except ResolutionError as re: + raise ResolutionError( + "No such branch: '%s'" % branch_label, branch_label + ) from re + + else: + return nonbranch_rev + else: + return branch_rev + + def _revision_for_ident( + self, + resolved_id: Union[str, Tuple[()], None], + check_branch: Optional[str] = None, + ) -> Optional[Revision]: + branch_rev: Optional[Revision] + if check_branch: + branch_rev = self._resolve_branch(check_branch) + else: + branch_rev = None + + revision: Union[Optional[Revision], Literal[False]] + try: + revision = self._revision_map[resolved_id] + except KeyError: + # break out to avoid misleading py3k stack traces + revision = False + revs: Sequence[str] + if revision is False: + assert resolved_id + # do a partial lookup + revs = [ + x + for x in self._revision_map + if x and len(x) > 3 and x.startswith(resolved_id) + ] + + if branch_rev: + revs = self.filter_for_lineage(revs, check_branch) + if not revs: + raise ResolutionError( + "No such revision or branch '%s'%s" + % ( + resolved_id, + ( + "; please ensure at least four characters are " + "present for partial revision identifier matches" + if len(resolved_id) < 4 + else "" + ), + ), + resolved_id, + ) + elif len(revs) > 1: + raise ResolutionError( + "Multiple revisions start " + "with '%s': %s..." 
+ % (resolved_id, ", ".join("'%s'" % r for r in revs[0:3])), + resolved_id, + ) + else: + revision = self._revision_map[revs[0]] + + if check_branch and revision is not None: + assert branch_rev is not None + assert resolved_id + if not self._shares_lineage( + revision.revision, branch_rev.revision + ): + raise ResolutionError( + "Revision %s is not a member of branch '%s'" + % (revision.revision, check_branch), + resolved_id, + ) + return revision + + def _filter_into_branch_heads( + self, targets: Iterable[Optional[_RevisionOrBase]] + ) -> Set[Optional[_RevisionOrBase]]: + targets = set(targets) + + for rev in list(targets): + assert rev + if targets.intersection( + self._get_descendant_nodes([rev], include_dependencies=False) + ).difference([rev]): + targets.discard(rev) + return targets + + def filter_for_lineage( + self, + targets: Iterable[_TR], + check_against: Optional[str], + include_dependencies: bool = False, + ) -> Tuple[_TR, ...]: + id_, branch_label = self._resolve_revision_number(check_against) + + shares = [] + if branch_label: + shares.append(branch_label) + if id_: + shares.extend(id_) + + return tuple( + tg + for tg in targets + if self._shares_lineage( + tg, shares, include_dependencies=include_dependencies + ) + ) + + def _shares_lineage( + self, + target: Optional[_RevisionOrStr], + test_against_revs: Sequence[_RevisionOrStr], + include_dependencies: bool = False, + ) -> bool: + if not test_against_revs: + return True + if not isinstance(target, Revision): + resolved_target = not_none(self._revision_for_ident(target)) + else: + resolved_target = target + + resolved_test_against_revs = [ + ( + self._revision_for_ident(test_against_rev) + if not isinstance(test_against_rev, Revision) + else test_against_rev + ) + for test_against_rev in util.to_tuple( + test_against_revs, default=() + ) + ] + + return bool( + set( + self._get_descendant_nodes( + [resolved_target], + include_dependencies=include_dependencies, + ) + ) + .union( + self._get_ancestor_nodes( + [resolved_target], + include_dependencies=include_dependencies, + ) + ) + .intersection(resolved_test_against_revs) + ) + + def _resolve_revision_number( + self, id_: Optional[_GetRevArg] + ) -> Tuple[Tuple[str, ...], Optional[str]]: + branch_label: Optional[str] + if isinstance(id_, str) and "@" in id_: + branch_label, id_ = id_.split("@", 1) + + elif id_ is not None and ( + (isinstance(id_, tuple) and id_ and not isinstance(id_[0], str)) + or not isinstance(id_, (str, tuple)) + ): + raise RevisionError( + "revision identifier %r is not a string; ensure database " + "driver settings are correct" % (id_,) + ) + + else: + branch_label = None + + # ensure map is loaded + self._revision_map + if id_ == "heads": + if branch_label: + return ( + self.filter_for_lineage(self.heads, branch_label), + branch_label, + ) + else: + return self._real_heads, branch_label + elif id_ == "head": + current_head = self.get_current_head(branch_label) + if current_head: + return (current_head,), branch_label + else: + return (), branch_label + elif id_ == "base" or id_ is None: + return (), branch_label + else: + return util.to_tuple(id_, default=None), branch_label + + def iterate_revisions( + self, + upper: _RevisionIdentifierType, + lower: _RevisionIdentifierType, + implicit_base: bool = False, + inclusive: bool = False, + assert_relative_length: bool = True, + select_for_downgrade: bool = False, + ) -> Iterator[Revision]: + """Iterate through script revisions, starting at the given + upper revision identifier and ending at the lower. 
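The resolution helpers just shown accept several spellings for the same revision. A small sketch of the public entry points, under the same assumptions as the sketch above (invented ids, `RevisionMap` built from a callable):

```python
from alembic.script.revision import Revision, RevisionMap

map_ = RevisionMap(lambda: [
    Revision("aaaa1111", None),
    Revision("bbbb2222", "aaaa1111"),
])

map_.get_revision("bbbb2222")   # exact identifier
map_.get_revision("bbbb")       # unique prefix; an ambiguous or unmatched
                                # short prefix raises ResolutionError
map_.get_revision("head")       # symbolic name; single head, so unambiguous
map_.get_revisions("heads")     # (Revision('bbbb2222', ...),)
map_.get_revisions("base")      # () -- "base" resolves to the empty tuple
```

A `branch@revision` argument goes through the same path, with `_resolve_revision_number` splitting off the label and `filter_for_lineage` restricting the candidates.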
+ + The traversal uses strictly the `down_revision` + marker inside each migration script, so + it is a requirement that upper >= lower, + else you'll get nothing back. + + The iterator yields :class:`.Revision` objects. + + """ + fn: _CollectRevisionsProtocol + if select_for_downgrade: + fn = self._collect_downgrade_revisions + else: + fn = self._collect_upgrade_revisions + + revisions, heads = fn( + upper, + lower, + inclusive=inclusive, + implicit_base=implicit_base, + assert_relative_length=assert_relative_length, + ) + + for node in self._topological_sort(revisions, heads): + yield not_none(self.get_revision(node)) + + def _get_descendant_nodes( + self, + targets: Collection[Optional[_RevisionOrBase]], + map_: Optional[_RevisionMapType] = None, + check: bool = False, + omit_immediate_dependencies: bool = False, + include_dependencies: bool = True, + ) -> Iterator[Any]: + if omit_immediate_dependencies: + + def fn(rev: Revision) -> Iterable[str]: + if rev not in targets: + return rev._all_nextrev + else: + return rev.nextrev + + elif include_dependencies: + + def fn(rev: Revision) -> Iterable[str]: + return rev._all_nextrev + + else: + + def fn(rev: Revision) -> Iterable[str]: + return rev.nextrev + + return self._iterate_related_revisions( + fn, targets, map_=map_, check=check + ) + + def _get_ancestor_nodes( + self, + targets: Collection[Optional[_RevisionOrBase]], + map_: Optional[_RevisionMapType] = None, + check: bool = False, + include_dependencies: bool = True, + ) -> Iterator[Revision]: + if include_dependencies: + + def fn(rev: Revision) -> Iterable[str]: + return rev._normalized_down_revisions + + else: + + def fn(rev: Revision) -> Iterable[str]: + return rev._versioned_down_revisions + + return self._iterate_related_revisions( + fn, targets, map_=map_, check=check + ) + + def _iterate_related_revisions( + self, + fn: Callable[[Revision], Iterable[str]], + targets: Collection[Optional[_RevisionOrBase]], + map_: Optional[_RevisionMapType], + check: bool = False, + ) -> Iterator[Revision]: + if map_ is None: + map_ = self._revision_map + + seen = set() + todo: Deque[Revision] = collections.deque() + for target_for in targets: + target = is_revision(target_for) + todo.append(target) + if check: + per_target = set() + + while todo: + rev = todo.pop() + if check: + per_target.add(rev) + + if rev in seen: + continue + seen.add(rev) + # Check for map errors before collecting. + for rev_id in fn(rev): + next_rev = map_[rev_id] + assert next_rev is not None + if next_rev.revision != rev_id: + raise RevisionError( + "Dependency resolution failed; broken map" + ) + todo.append(next_rev) + yield rev + if check: + overlaps = per_target.intersection(targets).difference( + [target] + ) + if overlaps: + raise RevisionError( + "Requested revision %s overlaps with " + "other requested revisions %s" + % ( + target.revision, + ", ".join(r.revision for r in overlaps), + ) + ) + + def _topological_sort( + self, + revisions: Collection[Revision], + heads: Any, + ) -> List[str]: + """Yield revision ids of a collection of Revision objects in + topological sorted order (i.e. revisions always come after their + down_revisions and dependencies). Uses the order of keys in + _revision_map to sort. + + """ + + id_to_rev = self._revision_map + + def get_ancestors(rev_id: str) -> Set[str]: + return { + r.revision + for r in self._get_ancestor_nodes([id_to_rev[rev_id]]) + } + + todo = {d.revision for d in revisions} + + # Use revision map (ordered dict) key order to pre-sort. 
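Before looking at the sort machinery itself, the observable contract of `iterate_revisions` on a linear history is worth pinning down; a sketch under the same toy-map assumptions as above:

```python
from alembic.script.revision import Revision, RevisionMap

map_ = RevisionMap(lambda: [
    Revision("aaaa1111", None),
    Revision("bbbb2222", "aaaa1111"),
    Revision("cccc3333", "bbbb2222"),
])

# upper first, lower last: each revision is yielded before its down_revision
print([r.revision for r in map_.iterate_revisions("heads", "base")])
# expected: ['cccc3333', 'bbbb2222', 'aaaa1111']
```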
+ inserted_order = list(self._revision_map) + + current_heads = list( + sorted( + {d.revision for d in heads if d.revision in todo}, + key=inserted_order.index, + ) + ) + ancestors_by_idx = [get_ancestors(rev_id) for rev_id in current_heads] + + output = [] + + current_candidate_idx = 0 + while current_heads: + candidate = current_heads[current_candidate_idx] + + for check_head_index, ancestors in enumerate(ancestors_by_idx): + # scan all the heads. see if we can continue walking + # down the current branch indicated by current_candidate_idx. + if ( + check_head_index != current_candidate_idx + and candidate in ancestors + ): + current_candidate_idx = check_head_index + # nope, another head is dependent on us, they have + # to be traversed first + break + else: + # yup, we can emit + if candidate in todo: + output.append(candidate) + todo.remove(candidate) + + # now update the heads with our ancestors. + + candidate_rev = id_to_rev[candidate] + assert candidate_rev is not None + + heads_to_add = [ + r + for r in candidate_rev._normalized_down_revisions + if r in todo and r not in current_heads + ] + + if not heads_to_add: + # no ancestors, so remove this head from the list + del current_heads[current_candidate_idx] + del ancestors_by_idx[current_candidate_idx] + current_candidate_idx = max(current_candidate_idx - 1, 0) + else: + if ( + not candidate_rev._normalized_resolved_dependencies + and len(candidate_rev._versioned_down_revisions) == 1 + ): + current_heads[current_candidate_idx] = heads_to_add[0] + + # for plain movement down a revision line without + # any mergepoints, branchpoints, or deps, we + # can update the ancestors collection directly + # by popping out the candidate we just emitted + ancestors_by_idx[current_candidate_idx].discard( + candidate + ) + + else: + # otherwise recalculate it again, things get + # complicated otherwise. This can possibly be + # improved to not run the whole ancestor thing + # each time but it was getting complicated + current_heads[current_candidate_idx] = heads_to_add[0] + current_heads.extend(heads_to_add[1:]) + ancestors_by_idx[current_candidate_idx] = ( + get_ancestors(heads_to_add[0]) + ) + ancestors_by_idx.extend( + get_ancestors(head) for head in heads_to_add[1:] + ) + + assert not todo + return output + + def _walk( + self, + start: Optional[Union[str, Revision]], + steps: int, + branch_label: Optional[str] = None, + no_overwalk: bool = True, + ) -> Optional[_RevisionOrBase]: + """ + Walk the requested number of :steps up (steps > 0) or down (steps < 0) + the revision tree. + + :branch_label is used to select branches only when walking up. + + If the walk goes past the boundaries of the tree and :no_overwalk is + True, None is returned, otherwise the walk terminates early. + + A RevisionError is raised if there is no unambiguous revision to + walk to. 
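A sketch of `_walk` on the same toy shape (it is an internal helper; the calls below exist only to illustrate the step semantics the docstring describes):

```python
from alembic.script.revision import Revision, RevisionMap

map_ = RevisionMap(lambda: [
    Revision("aaaa1111", None),
    Revision("bbbb2222", "aaaa1111"),
])

map_._walk("aaaa1111", steps=1)    # -> Revision('bbbb2222', ...)
map_._walk("bbbb2222", steps=-1)   # -> Revision('aaaa1111', ...)
map_._walk("bbbb2222", steps=5)    # -> None: overwalked past the head
```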
+ """ + initial: Optional[_RevisionOrBase] + if isinstance(start, str): + initial = self.get_revision(start) + else: + initial = start + + children: Sequence[Optional[_RevisionOrBase]] + for _ in range(abs(steps)): + if steps > 0: + assert initial != "base" # type: ignore[comparison-overlap] + # Walk up + walk_up = [ + is_revision(rev) + for rev in self.get_revisions( + self.bases if initial is None else initial.nextrev + ) + ] + if branch_label: + children = self.filter_for_lineage(walk_up, branch_label) + else: + children = walk_up + else: + # Walk down + if initial == "base": # type: ignore[comparison-overlap] + children = () + else: + children = self.get_revisions( + self.heads + if initial is None + else initial.down_revision + ) + if not children: + children = ("base",) + if not children: + # This will return an invalid result if no_overwalk, otherwise + # further steps will stay where we are. + ret = None if no_overwalk else initial + return ret + elif len(children) > 1: + raise RevisionError("Ambiguous walk") + initial = children[0] + + return initial + + def _parse_downgrade_target( + self, + current_revisions: _RevisionIdentifierType, + target: _RevisionIdentifierType, + assert_relative_length: bool, + ) -> Tuple[Optional[str], Optional[_RevisionOrBase]]: + """ + Parse downgrade command syntax :target to retrieve the target revision + and branch label (if any) given the :current_revisions stamp of the + database. + + Returns a tuple (branch_label, target_revision) where branch_label + is a string from the command specifying the branch to consider (or + None if no branch given), and target_revision is a Revision object + which the command refers to. target_revisions is None if the command + refers to 'base'. The target may be specified in absolute form, or + relative to :current_revisions. + """ + if target is None: + return None, None + assert isinstance( + target, str + ), "Expected downgrade target in string form" + match = _relative_destination.match(target) + if match: + branch_label, symbol, relative = match.groups() + rel_int = int(relative) + if rel_int >= 0: + if symbol is None: + # Downgrading to current + n is not valid. + raise RevisionError( + "Relative revision %s didn't " + "produce %d migrations" % (relative, abs(rel_int)) + ) + # Find target revision relative to given symbol. + rev = self._walk( + symbol, + rel_int, + branch_label, + no_overwalk=assert_relative_length, + ) + if rev is None: + raise RevisionError("Walked too far") + return branch_label, rev + else: + relative_revision = symbol is None + if relative_revision: + # Find target revision relative to current state. + if branch_label: + cr_tuple = util.to_tuple(current_revisions) + symbol_list: Sequence[str] + symbol_list = self.filter_for_lineage( + cr_tuple, branch_label + ) + if not symbol_list: + # check the case where there are multiple branches + # but there is currently a single heads, since all + # other branch heads are dependent of the current + # single heads. 
+ all_current = cast( + Set[Revision], self._get_all_current(cr_tuple) + ) + sl_all_current = self.filter_for_lineage( + all_current, branch_label + ) + symbol_list = [ + r.revision if r else r # type: ignore[misc] + for r in sl_all_current + ] + + assert len(symbol_list) == 1 + symbol = symbol_list[0] + else: + current_revisions = util.to_tuple(current_revisions) + if not current_revisions: + raise RevisionError( + "Relative revision %s didn't " + "produce %d migrations" + % (relative, abs(rel_int)) + ) + # Have to check uniques here for duplicate rows test. + if len(set(current_revisions)) > 1: + util.warn( + "downgrade -1 from multiple heads is " + "ambiguous; " + "this usage will be disallowed in a future " + "release." + ) + symbol = current_revisions[0] + # Restrict iteration to just the selected branch when + # ambiguous branches are involved. + branch_label = symbol + # Walk down the tree to find downgrade target. + rev = self._walk( + start=( + self.get_revision(symbol) + if branch_label is None + else self.get_revision( + "%s@%s" % (branch_label, symbol) + ) + ), + steps=rel_int, + no_overwalk=assert_relative_length, + ) + if rev is None: + if relative_revision: + raise RevisionError( + "Relative revision %s didn't " + "produce %d migrations" % (relative, abs(rel_int)) + ) + else: + raise RevisionError("Walked too far") + return branch_label, rev + + # No relative destination given, revision specified is absolute. + branch_label, _, symbol = target.rpartition("@") + if not branch_label: + branch_label = None + return branch_label, self.get_revision(symbol) + + def _parse_upgrade_target( + self, + current_revisions: _RevisionIdentifierType, + target: _RevisionIdentifierType, + assert_relative_length: bool, + ) -> Tuple[Optional[_RevisionOrBase], ...]: + """ + Parse upgrade command syntax :target to retrieve the target revision + and given the :current_revisions stamp of the database. + + Returns a tuple of Revision objects which should be iterated/upgraded + to. The target may be specified in absolute form, or relative to + :current_revisions. + """ + if isinstance(target, str): + match = _relative_destination.match(target) + else: + match = None + + if not match: + # No relative destination, target is absolute. + return self.get_revisions(target) + + current_revisions_tup: Union[str, Tuple[Optional[str], ...], None] + current_revisions_tup = util.to_tuple(current_revisions) + + branch_label, symbol, relative_str = match.groups() + relative = int(relative_str) + if relative > 0: + if symbol is None: + if not current_revisions_tup: + current_revisions_tup = (None,) + # Try to filter to a single target (avoid ambiguous branches). + start_revs = current_revisions_tup + if branch_label: + start_revs = self.filter_for_lineage( + self.get_revisions(current_revisions_tup), # type: ignore[arg-type] # noqa: E501 + branch_label, + ) + if not start_revs: + # The requested branch is not a head, so we need to + # backtrack to find a branchpoint. + active_on_branch = self.filter_for_lineage( + self._get_ancestor_nodes( + self.get_revisions(current_revisions_tup) + ), + branch_label, + ) + # Find the tips of this set of revisions (revisions + # without children within the set). + start_revs = tuple( + {rev.revision for rev in active_on_branch} + - { + down + for rev in active_on_branch + for down in rev._normalized_down_revisions + } + ) + if not start_revs: + # We must need to go right back to base to find + # a starting point for this branch. 
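The `+n`/`-n` grammar handled by these two parsers maps onto the walk helper above. A sketch, again poking at internals purely for illustration (signatures taken from the call sites in this module, ids invented):

```python
from alembic.script.revision import Revision, RevisionMap

map_ = RevisionMap(lambda: [
    Revision("aaaa1111", None),
    Revision("bbbb2222", "aaaa1111"),
])

# upgrade one step from the current state: aaaa1111 -> bbbb2222
(up,) = map_._parse_upgrade_target(
    current_revisions="aaaa1111", target="+1", assert_relative_length=True
)

# downgrade one step from the current state: bbbb2222 -> aaaa1111;
# the returned label restricts later iteration to the selected lineage
label, down = map_._parse_downgrade_target(
    current_revisions="bbbb2222", target="-1", assert_relative_length=True
)
```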
+ start_revs = (None,) + if len(start_revs) > 1: + raise RevisionError( + "Ambiguous upgrade from multiple current revisions" + ) + # Walk up from unique target revision. + rev = self._walk( + start=start_revs[0], + steps=relative, + branch_label=branch_label, + no_overwalk=assert_relative_length, + ) + if rev is None: + raise RevisionError( + "Relative revision %s didn't " + "produce %d migrations" % (relative_str, abs(relative)) + ) + return (rev,) + else: + # Walk is relative to a given revision, not the current state. + return ( + self._walk( + start=self.get_revision(symbol), + steps=relative, + branch_label=branch_label, + no_overwalk=assert_relative_length, + ), + ) + else: + if symbol is None: + # Upgrading to current - n is not valid. + raise RevisionError( + "Relative revision %s didn't " + "produce %d migrations" % (relative, abs(relative)) + ) + return ( + self._walk( + start=( + self.get_revision(symbol) + if branch_label is None + else self.get_revision( + "%s@%s" % (branch_label, symbol) + ) + ), + steps=relative, + no_overwalk=assert_relative_length, + ), + ) + + def _collect_downgrade_revisions( + self, + upper: _RevisionIdentifierType, + lower: _RevisionIdentifierType, + inclusive: bool, + implicit_base: bool, + assert_relative_length: bool, + ) -> Tuple[Set[Revision], Tuple[Optional[_RevisionOrBase], ...]]: + """ + Compute the set of current revisions specified by :upper, and the + downgrade target specified by :target. Return all dependents of target + which are currently active. + + :inclusive=True includes the target revision in the set + """ + + branch_label, target_revision = self._parse_downgrade_target( + current_revisions=upper, + target=lower, + assert_relative_length=assert_relative_length, + ) + if target_revision == "base": + target_revision = None + assert target_revision is None or isinstance(target_revision, Revision) + + roots: List[Revision] + # Find candidates to drop. + if target_revision is None: + # Downgrading back to base: find all tree roots. + roots = [ + rev + for rev in self._revision_map.values() + if rev is not None and rev.down_revision is None + ] + elif inclusive: + # inclusive implies target revision should also be dropped + roots = [target_revision] + else: + # Downgrading to fixed target: find all direct children. + roots = [ + is_revision(rev) + for rev in self.get_revisions(target_revision.nextrev) + ] + + if branch_label and len(roots) > 1: + # Need to filter roots. + ancestors = { + rev.revision + for rev in self._get_ancestor_nodes( + [self._resolve_branch(branch_label)], + include_dependencies=False, + ) + } + # Intersection gives the root revisions we are trying to + # rollback with the downgrade. + roots = [ + is_revision(rev) + for rev in self.get_revisions( + {rev.revision for rev in roots}.intersection(ancestors) + ) + ] + + # Ensure we didn't throw everything away when filtering branches. + if len(roots) == 0: + raise RevisionError( + "Not a valid downgrade target from current heads" + ) + + heads = self.get_revisions(upper) + + # Aim is to drop :branch_revision; to do so we also need to drop its + # descendents and anything dependent on it. + downgrade_revisions = set( + self._get_descendant_nodes( + roots, + include_dependencies=True, + omit_immediate_dependencies=False, + ) + ) + active_revisions = set( + self._get_ancestor_nodes(heads, include_dependencies=True) + ) + + # Emit revisions to drop in reverse topological sorted order. 
+ downgrade_revisions.intersection_update(active_revisions) + + if implicit_base: + # Wind other branches back to base. + downgrade_revisions.update( + active_revisions.difference(self._get_ancestor_nodes(roots)) + ) + + if ( + target_revision is not None + and not downgrade_revisions + and target_revision not in heads + ): + # Empty intersection: target revs are not present. + + raise RangeNotAncestorError("Nothing to drop", upper) + + return downgrade_revisions, heads + + def _collect_upgrade_revisions( + self, + upper: _RevisionIdentifierType, + lower: _RevisionIdentifierType, + inclusive: bool, + implicit_base: bool, + assert_relative_length: bool, + ) -> Tuple[Set[Revision], Tuple[Revision, ...]]: + """ + Compute the set of required revisions specified by :upper, and the + current set of active revisions specified by :lower. Find the + difference between the two to compute the required upgrades. + + :inclusive=True includes the current/lower revisions in the set + + :implicit_base=False only returns revisions which are downstream + of the current/lower revisions. Dependencies from branches with + different bases will not be included. + """ + targets: Collection[Revision] = [ + is_revision(rev) + for rev in self._parse_upgrade_target( + current_revisions=lower, + target=upper, + assert_relative_length=assert_relative_length, + ) + ] + + # assert type(targets) is tuple, "targets should be a tuple" + + # Handled named bases (e.g. branch@... -> heads should only produce + # targets on the given branch) + if isinstance(lower, str) and "@" in lower: + branch, _, _ = lower.partition("@") + branch_rev = self.get_revision(branch) + if branch_rev is not None and branch_rev.revision == branch: + # A revision was used as a label; get its branch instead + assert len(branch_rev.branch_labels) == 1 + branch = next(iter(branch_rev.branch_labels)) + targets = { + need for need in targets if branch in need.branch_labels + } + + required_node_set = set( + self._get_ancestor_nodes( + targets, check=True, include_dependencies=True + ) + ).union(targets) + + current_revisions = self.get_revisions(lower) + if not implicit_base and any( + rev not in required_node_set + for rev in current_revisions + if rev is not None + ): + raise RangeNotAncestorError(lower, upper) + assert ( + type(current_revisions) is tuple + ), "current_revisions should be a tuple" + + # Special case where lower = a relative value (get_revisions can't + # find it) + if current_revisions and current_revisions[0] is None: + _, rev = self._parse_downgrade_target( + current_revisions=upper, + target=lower, + assert_relative_length=assert_relative_length, + ) + assert rev + if rev == "base": + current_revisions = tuple() + lower = None + else: + current_revisions = (rev,) + lower = rev.revision + + current_node_set = set( + self._get_ancestor_nodes( + current_revisions, check=True, include_dependencies=True + ) + ).union(current_revisions) + + needs = required_node_set.difference(current_node_set) + + # Include the lower revision (=current_revisions?) in the iteration + if inclusive: + needs.update(is_revision(rev) for rev in self.get_revisions(lower)) + # By default, base is implicit as we want all dependencies returned. 
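Stripped of branch labels and dependencies, the collection step above reduces to a set difference between two ancestor closures. A standalone distillation in plain Python (toy graph, not alembic objects):

```python
# needed work = ancestors of the target (inclusive) minus ancestors of the
# revisions already applied (inclusive); topological sorting happens later
down = {"cccc3333": "bbbb2222", "bbbb2222": "aaaa1111", "aaaa1111": None}

def ancestors(rev):
    seen = set()
    while rev is not None:
        seen.add(rev)
        rev = down[rev]
    return seen

needs = ancestors("cccc3333") - ancestors("aaaa1111")
print(needs)  # {'cccc3333', 'bbbb2222'}
```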
+ # Base is also implicit if lower = base + # implicit_base=False -> only return direct downstreams of + # current_revisions + if current_revisions and not implicit_base: + lower_descendents = self._get_descendant_nodes( + [is_revision(rev) for rev in current_revisions], + check=True, + include_dependencies=False, + ) + needs.intersection_update(lower_descendents) + + return needs, tuple(targets) + + def _get_all_current( + self, id_: Tuple[str, ...] + ) -> Set[Optional[_RevisionOrBase]]: + top_revs: Set[Optional[_RevisionOrBase]] + top_revs = set(self.get_revisions(id_)) + top_revs.update( + self._get_ancestor_nodes(list(top_revs), include_dependencies=True) + ) + return self._filter_into_branch_heads(top_revs) + + +class Revision: + """Base class for revisioned objects. + + The :class:`.Revision` class is the base of the more public-facing + :class:`.Script` object, which represents a migration script. + The mechanics of revision management and traversal are encapsulated + within :class:`.Revision`, while :class:`.Script` applies this logic + to Python files in a version directory. + + """ + + nextrev: FrozenSet[str] = frozenset() + """following revisions, based on down_revision only.""" + + _all_nextrev: FrozenSet[str] = frozenset() + + revision: str = None # type: ignore[assignment] + """The string revision number.""" + + down_revision: Optional[_RevIdType] = None + """The ``down_revision`` identifier(s) within the migration script. + + Note that the total set of "down" revisions is + down_revision + dependencies. + + """ + + dependencies: Optional[_RevIdType] = None + """Additional revisions which this revision is dependent on. + + From a migration standpoint, these dependencies are added to the + down_revision to form the full iteration. However, the separation + of down_revision from "dependencies" is to assist in navigating + a history that contains many branches, typically a multi-root scenario. + + """ + + branch_labels: Set[str] = None # type: ignore[assignment] + """Optional string/tuple of symbolic names to apply to this + revision's branch""" + + _resolved_dependencies: Tuple[str, ...] + _normalized_resolved_dependencies: Tuple[str, ...] 
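The attribute split described above (versioned `down_revision` vs. out-of-line `dependencies`) can be seen on bare `Revision` objects, before a map wires them together. Only map-independent properties are shown, since the `_resolved_dependencies` attributes are filled in later by `RevisionMap`; ids and the branch label are invented:

```python
from alembic.script.revision import Revision

a = Revision("aaaa1111", None)
b = Revision("bbbb2222", None, branch_labels="feature")
c = Revision("cccc3333", "aaaa1111", dependencies="bbbb2222")

print(a.is_base, b.is_base)         # True True: no down_revision
print(b._is_real_base)              # True: no dependencies either
print(c._versioned_down_revisions)  # ('aaaa1111',): down_revision only
print(c.is_merge_point)             # False: dependencies are not counted
```

Passing a revision as its own `down_revision` or dependency raises `LoopDetected` / `DependencyLoopDetected` in `__init__`, per the checks shown below.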
+ + @classmethod + def verify_rev_id(cls, revision: str) -> None: + illegal_chars = set(revision).intersection(_revision_illegal_chars) + if illegal_chars: + raise RevisionError( + "Character(s) '%s' not allowed in revision identifier '%s'" + % (", ".join(sorted(illegal_chars)), revision) + ) + + def __init__( + self, + revision: str, + down_revision: Optional[Union[str, Tuple[str, ...]]], + dependencies: Optional[Union[str, Tuple[str, ...]]] = None, + branch_labels: Optional[Union[str, Tuple[str, ...]]] = None, + ) -> None: + if down_revision and revision in util.to_tuple(down_revision): + raise LoopDetected(revision) + elif dependencies is not None and revision in util.to_tuple( + dependencies + ): + raise DependencyLoopDetected(revision) + + self.verify_rev_id(revision) + self.revision = revision + self.down_revision = tuple_rev_as_scalar(util.to_tuple(down_revision)) + self.dependencies = tuple_rev_as_scalar(util.to_tuple(dependencies)) + self._orig_branch_labels = util.to_tuple(branch_labels, default=()) + self.branch_labels = set(self._orig_branch_labels) + + def __repr__(self) -> str: + args = [repr(self.revision), repr(self.down_revision)] + if self.dependencies: + args.append("dependencies=%r" % (self.dependencies,)) + if self.branch_labels: + args.append("branch_labels=%r" % (self.branch_labels,)) + return "%s(%s)" % (self.__class__.__name__, ", ".join(args)) + + def add_nextrev(self, revision: Revision) -> None: + self._all_nextrev = self._all_nextrev.union([revision.revision]) + if self.revision in revision._versioned_down_revisions: + self.nextrev = self.nextrev.union([revision.revision]) + + @property + def _all_down_revisions(self) -> Tuple[str, ...]: + return util.dedupe_tuple( + util.to_tuple(self.down_revision, default=()) + + self._resolved_dependencies + ) + + @property + def _normalized_down_revisions(self) -> Tuple[str, ...]: + """return immediate down revisions for a rev, omitting dependencies + that are still dependencies of ancestors. + + """ + return util.dedupe_tuple( + util.to_tuple(self.down_revision, default=()) + + self._normalized_resolved_dependencies + ) + + @property + def _versioned_down_revisions(self) -> Tuple[str, ...]: + return util.to_tuple(self.down_revision, default=()) + + @property + def is_head(self) -> bool: + """Return True if this :class:`.Revision` is a 'head' revision. + + This is determined based on whether any other :class:`.Script` + within the :class:`.ScriptDirectory` refers to this + :class:`.Script`. Multiple heads can be present. + + """ + return not bool(self.nextrev) + + @property + def _is_real_head(self) -> bool: + return not bool(self._all_nextrev) + + @property + def is_base(self) -> bool: + """Return True if this :class:`.Revision` is a 'base' revision.""" + + return self.down_revision is None + + @property + def _is_real_base(self) -> bool: + """Return True if this :class:`.Revision` is a "real" base revision, + e.g. that it has no dependencies either.""" + + # we use self.dependencies here because this is called up + # in initialization where _real_dependencies isn't set up + # yet + return self.down_revision is None and self.dependencies is None + + @property + def is_branch_point(self) -> bool: + """Return True if this :class:`.Script` is a branch point. + + A branchpoint is defined as a :class:`.Script` which is referred + to by more than one succeeding :class:`.Script`, that is more + than one :class:`.Script` has a `down_revision` identifier pointing + here. 
+ + """ + return len(self.nextrev) > 1 + + @property + def _is_real_branch_point(self) -> bool: + """Return True if this :class:`.Script` is a 'real' branch point, + taking into account dependencies as well. + + """ + return len(self._all_nextrev) > 1 + + @property + def is_merge_point(self) -> bool: + """Return True if this :class:`.Script` is a merge point.""" + + return len(self._versioned_down_revisions) > 1 + + +@overload +def tuple_rev_as_scalar(rev: None) -> None: ... + + +@overload +def tuple_rev_as_scalar( + rev: Union[Tuple[_T, ...], List[_T]], +) -> Union[_T, Tuple[_T, ...], List[_T]]: ... + + +def tuple_rev_as_scalar( + rev: Optional[Sequence[_T]], +) -> Union[_T, Sequence[_T], None]: + if not rev: + return None + elif len(rev) == 1: + return rev[0] + else: + return rev + + +def is_revision(rev: Any) -> Revision: + assert isinstance(rev, Revision) + return rev diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/write_hooks.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/write_hooks.py new file mode 100644 index 000000000..50aefbe36 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/script/write_hooks.py @@ -0,0 +1,178 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +import os +import shlex +import subprocess +import sys +from typing import Any +from typing import Callable +from typing import Dict +from typing import List +from typing import Optional +from typing import TYPE_CHECKING +from typing import Union + +from .. import util +from ..util import compat +from ..util.pyfiles import _preserving_path_as_str + +if TYPE_CHECKING: + from ..config import PostWriteHookConfig + +REVISION_SCRIPT_TOKEN = "REVISION_SCRIPT_FILENAME" + +_registry: dict = {} + + +def register(name: str) -> Callable: + """A function decorator that will register that function as a write hook. + + See the documentation linked below for an example. + + .. seealso:: + + :ref:`post_write_hooks_custom` + + + """ + + def decorate(fn): + _registry[name] = fn + return fn + + return decorate + + +def _invoke( + name: str, + revision_path: Union[str, os.PathLike[str]], + options: PostWriteHookConfig, +) -> Any: + """Invokes the formatter registered for the given name. + + :param name: The name of a formatter in the registry + :param revision: string path to the revision file + :param options: A dict containing kwargs passed to the + specified formatter. + :raises: :class:`alembic.util.CommandError` + """ + revision_path = _preserving_path_as_str(revision_path) + try: + hook = _registry[name] + except KeyError as ke: + raise util.CommandError( + f"No formatter with name '{name}' registered" + ) from ke + else: + return hook(revision_path, options) + + +def _run_hooks( + path: Union[str, os.PathLike[str]], hooks: list[PostWriteHookConfig] +) -> None: + """Invoke hooks for a generated revision.""" + + for hook in hooks: + name = hook["_hook_name"] + try: + type_ = hook["type"] + except KeyError as ke: + raise util.CommandError( + f"Key '{name}.type' (or 'type' in toml) is required " + f"for post write hook {name!r}" + ) from ke + else: + with util.status( + f"Running post write hook {name!r}", newline=True + ): + _invoke(type_, path, hook) + + +def _parse_cmdline_options(cmdline_options_str: str, path: str) -> List[str]: + """Parse options from a string into a list. 
+ + Also substitutes the revision script token with the actual filename of + the revision script. + + If the revision script token doesn't occur in the options string, it is + automatically prepended. + """ + if REVISION_SCRIPT_TOKEN not in cmdline_options_str: + cmdline_options_str = REVISION_SCRIPT_TOKEN + " " + cmdline_options_str + cmdline_options_list = shlex.split( + cmdline_options_str, posix=compat.is_posix + ) + cmdline_options_list = [ + option.replace(REVISION_SCRIPT_TOKEN, path) + for option in cmdline_options_list + ] + return cmdline_options_list + + +@register("console_scripts") +def console_scripts( + path: str, options: dict, ignore_output: bool = False +) -> None: + try: + entrypoint_name = options["entrypoint"] + except KeyError as ke: + raise util.CommandError( + f"Key {options['_hook_name']}.entrypoint is required for post " + f"write hook {options['_hook_name']!r}" + ) from ke + for entry in compat.importlib_metadata_get("console_scripts"): + if entry.name == entrypoint_name: + impl: Any = entry + break + else: + raise util.CommandError( + f"Could not find entrypoint console_scripts.{entrypoint_name}" + ) + cwd: Optional[str] = options.get("cwd", None) + cmdline_options_str = options.get("options", "") + cmdline_options_list = _parse_cmdline_options(cmdline_options_str, path) + + kw: Dict[str, Any] = {} + if ignore_output: + kw["stdout"] = kw["stderr"] = subprocess.DEVNULL + + subprocess.run( + [ + sys.executable, + "-c", + f"import {impl.module}; {impl.module}.{impl.attr}()", + ] + + cmdline_options_list, + cwd=cwd, + **kw, + ) + + +@register("exec") +def exec_(path: str, options: dict, ignore_output: bool = False) -> None: + try: + executable = options["executable"] + except KeyError as ke: + raise util.CommandError( + f"Key {options['_hook_name']}.executable is required for post " + f"write hook {options['_hook_name']!r}" + ) from ke + cwd: Optional[str] = options.get("cwd", None) + cmdline_options_str = options.get("options", "") + cmdline_options_list = _parse_cmdline_options(cmdline_options_str, path) + + kw: Dict[str, Any] = {} + if ignore_output: + kw["stdout"] = kw["stderr"] = subprocess.DEVNULL + + subprocess.run( + [ + executable, + *cmdline_options_list, + ], + cwd=cwd, + **kw, + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/README b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/README new file mode 100644 index 000000000..e0d0858f2 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/README @@ -0,0 +1 @@ +Generic single-database configuration with an async dbapi. \ No newline at end of file diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/alembic.ini.mako b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/alembic.ini.mako new file mode 100644 index 000000000..782524e21 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/alembic.ini.mako @@ -0,0 +1,141 @@ +# A generic, single database configuration. + +[alembic] +# path to migration scripts. +# this is typically a path given in POSIX (e.g. 
forward slashes) +# format, relative to the token %(here)s which refers to the location of this +# ini file +script_location = ${script_location} + +# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s +# Uncomment the line below if you want the files to be prepended with date and time +# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file +# for all available tokens +# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s + +# sys.path path, will be prepended to sys.path if present. +# defaults to the current working directory. for multiple paths, the path separator +# is defined by "path_separator" below. +prepend_sys_path = . + +# timezone to use when rendering the date within the migration file +# as well as the filename. +# If specified, requires the python>=3.9 or backports.zoneinfo library and tzdata library. +# Any required deps can be installed by adding `alembic[tz]` to the pip requirements +# string value is passed to ZoneInfo() +# leave blank for localtime +# timezone = + +# max length of characters to apply to the "slug" field +# truncate_slug_length = 40 + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + +# set to 'true' to allow .pyc and .pyo files without +# a source .py file to be detected as revisions in the +# versions/ directory +# sourceless = false + +# version location specification; This defaults +# to /versions. When using multiple version +# directories, initial revisions must be specified with --version-path. +# The path separator used here should be the separator specified by "path_separator" +# below. +# version_locations = %(here)s/bar:%(here)s/bat:%(here)s/alembic/versions + +# path_separator; This indicates what character is used to split lists of file +# paths, including version_locations and prepend_sys_path within configparser +# files such as alembic.ini. +# The default rendered in new alembic.ini files is "os", which uses os.pathsep +# to provide os-dependent path splitting. +# +# Note that in order to support legacy alembic.ini files, this default does NOT +# take place if path_separator is not present in alembic.ini. If this +# option is omitted entirely, fallback logic is as follows: +# +# 1. Parsing of the version_locations option falls back to using the legacy +# "version_path_separator" key, which if absent then falls back to the legacy +# behavior of splitting on spaces and/or commas. +# 2. Parsing of the prepend_sys_path option falls back to the legacy +# behavior of splitting on spaces, commas, or colons. +# +# Valid values for path_separator are: +# +# path_separator = : +# path_separator = ; +# path_separator = space +# path_separator = newline +# +# Use os.pathsep. Default configuration used for new projects. +path_separator = os + + +# set to 'true' to search source files recursively +# in each "version_locations" directory +# new in Alembic version 1.10 +# recursive_version_locations = false + +# the output encoding used when revision files +# are written from script.py.mako +# output_encoding = utf-8 + +# database URL. This is consumed by the user-maintained env.py script only. +# other means of configuring database URLs may be customized within the env.py +# file. +sqlalchemy.url = driver://user:pass@localhost/dbname + + +[post_write_hooks] +# post_write_hooks defines scripts or Python functions that are run +# on newly generated revision scripts.
See the documentation for further +# detail and examples + +# format using "black" - use the console_scripts runner, against the "black" entrypoint +# hooks = black +# black.type = console_scripts +# black.entrypoint = black +# black.options = -l 79 REVISION_SCRIPT_FILENAME + +# lint with attempts to fix using "ruff" - use the exec runner, execute a binary +# hooks = ruff +# ruff.type = exec +# ruff.executable = %(here)s/.venv/bin/ruff +# ruff.options = check --fix REVISION_SCRIPT_FILENAME + +# Logging configuration. This is also consumed by the user-maintained +# env.py script only. +[loggers] +keys = root,sqlalchemy,alembic + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARNING +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARNING +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/env.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/env.py new file mode 100644 index 000000000..9f2d51940 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/env.py @@ -0,0 +1,89 @@ +import asyncio +from logging.config import fileConfig + +from sqlalchemy import pool +from sqlalchemy.engine import Connection +from sqlalchemy.ext.asyncio import async_engine_from_config + +from alembic import context + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +if config.config_file_name is not None: + fileConfig(config.config_file_name) + +# add your model's MetaData object here +# for 'autogenerate' support +# from myapp import mymodel +# target_metadata = mymodel.Base.metadata +target_metadata = None + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def run_migrations_offline() -> None: + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. + + """ + url = config.get_main_option("sqlalchemy.url") + context.configure( + url=url, + target_metadata=target_metadata, + literal_binds=True, + dialect_opts={"paramstyle": "named"}, + ) + + with context.begin_transaction(): + context.run_migrations() + + +def do_run_migrations(connection: Connection) -> None: + context.configure(connection=connection, target_metadata=target_metadata) + + with context.begin_transaction(): + context.run_migrations() + + +async def run_async_migrations() -> None: + """In this scenario we need to create an Engine + and associate a connection with the context. 
+ + """ + + connectable = async_engine_from_config( + config.get_section(config.config_ini_section, {}), + prefix="sqlalchemy.", + poolclass=pool.NullPool, + ) + + async with connectable.connect() as connection: + await connection.run_sync(do_run_migrations) + + await connectable.dispose() + + +def run_migrations_online() -> None: + """Run migrations in 'online' mode.""" + + asyncio.run(run_async_migrations()) + + +if context.is_offline_mode(): + run_migrations_offline() +else: + run_migrations_online() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/script.py.mako b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/script.py.mako new file mode 100644 index 000000000..11016301e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/async/script.py.mako @@ -0,0 +1,28 @@ +"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +# revision identifiers, used by Alembic. +revision: str = ${repr(up_revision)} +down_revision: Union[str, Sequence[str], None] = ${repr(down_revision)} +branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)} +depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)} + + +def upgrade() -> None: + """Upgrade schema.""" + ${upgrades if upgrades else "pass"} + + +def downgrade() -> None: + """Downgrade schema.""" + ${downgrades if downgrades else "pass"} diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/README b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/README new file mode 100644 index 000000000..98e4f9c44 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/README @@ -0,0 +1 @@ +Generic single-database configuration. \ No newline at end of file diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/alembic.ini.mako b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/alembic.ini.mako new file mode 100644 index 000000000..cb4e2bf39 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/alembic.ini.mako @@ -0,0 +1,141 @@ +# A generic, single database configuration. + +[alembic] +# path to migration scripts. +# this is typically a path given in POSIX (e.g. forward slashes) +# format, relative to the token %(here)s which refers to the location of this +# ini file +script_location = ${script_location} + +# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s +# Uncomment the line below if you want the files to be prepended with date and time +# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file +# for all available tokens +# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s + +# sys.path path, will be prepended to sys.path if present. +# defaults to the current working directory. for multiple paths, the path separator +# is defined by "path_separator" below. +prepend_sys_path = . + + +# timezone to use when rendering the date within the migration file +# as well as the filename. +# If specified, requires the python>=3.9 or backports.zoneinfo library and tzdata library. 
+# Any required deps can be installed by adding `alembic[tz]` to the pip requirements +# string value is passed to ZoneInfo() +# leave blank for localtime +# timezone = + +# max length of characters to apply to the "slug" field +# truncate_slug_length = 40 + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + +# set to 'true' to allow .pyc and .pyo files without +# a source .py file to be detected as revisions in the +# versions/ directory +# sourceless = false + +# version location specification; This defaults +# to /versions. When using multiple version +# directories, initial revisions must be specified with --version-path. +# The path separator used here should be the separator specified by "path_separator" +# below. +# version_locations = %(here)s/bar:%(here)s/bat:%(here)s/alembic/versions + +# path_separator; This indicates what character is used to split lists of file +# paths, including version_locations and prepend_sys_path within configparser +# files such as alembic.ini. +# The default rendered in new alembic.ini files is "os", which uses os.pathsep +# to provide os-dependent path splitting. +# +# Note that in order to support legacy alembic.ini files, this default does NOT +# take place if path_separator is not present in alembic.ini. If this +# option is omitted entirely, fallback logic is as follows: +# +# 1. Parsing of the version_locations option falls back to using the legacy +# "version_path_separator" key, which if absent then falls back to the legacy +# behavior of splitting on spaces and/or commas. +# 2. Parsing of the prepend_sys_path option falls back to the legacy +# behavior of splitting on spaces, commas, or colons. +# +# Valid values for path_separator are: +# +# path_separator = : +# path_separator = ; +# path_separator = space +# path_separator = newline +# +# Use os.pathsep. Default configuration used for new projects. +path_separator = os + +# set to 'true' to search source files recursively +# in each "version_locations" directory +# new in Alembic version 1.10 +# recursive_version_locations = false + +# the output encoding used when revision files +# are written from script.py.mako +# output_encoding = utf-8 + +# database URL. This is consumed by the user-maintained env.py script only. +# other means of configuring database URLs may be customized within the env.py +# file. +sqlalchemy.url = driver://user:pass@localhost/dbname + + +[post_write_hooks] +# post_write_hooks defines scripts or Python functions that are run +# on newly generated revision scripts. See the documentation for further +# detail and examples + +# format using "black" - use the console_scripts runner, against the "black" entrypoint +# hooks = black +# black.type = console_scripts +# black.entrypoint = black +# black.options = -l 79 REVISION_SCRIPT_FILENAME + +# lint with attempts to fix using "ruff" - use the exec runner, execute a binary +# hooks = ruff +# ruff.type = exec +# ruff.executable = %(here)s/.venv/bin/ruff +# ruff.options = check --fix REVISION_SCRIPT_FILENAME + +# Logging configuration. This is also consumed by the user-maintained +# env.py script only.
+[loggers] +keys = root,sqlalchemy,alembic + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARNING +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARNING +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/env.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/env.py new file mode 100644 index 000000000..36112a3c6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/env.py @@ -0,0 +1,78 @@ +from logging.config import fileConfig + +from sqlalchemy import engine_from_config +from sqlalchemy import pool + +from alembic import context + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +if config.config_file_name is not None: + fileConfig(config.config_file_name) + +# add your model's MetaData object here +# for 'autogenerate' support +# from myapp import mymodel +# target_metadata = mymodel.Base.metadata +target_metadata = None + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def run_migrations_offline() -> None: + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. + + """ + url = config.get_main_option("sqlalchemy.url") + context.configure( + url=url, + target_metadata=target_metadata, + literal_binds=True, + dialect_opts={"paramstyle": "named"}, + ) + + with context.begin_transaction(): + context.run_migrations() + + +def run_migrations_online() -> None: + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. + + """ + connectable = engine_from_config( + config.get_section(config.config_ini_section, {}), + prefix="sqlalchemy.", + poolclass=pool.NullPool, + ) + + with connectable.connect() as connection: + context.configure( + connection=connection, target_metadata=target_metadata + ) + + with context.begin_transaction(): + context.run_migrations() + + +if context.is_offline_mode(): + run_migrations_offline() +else: + run_migrations_online() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/script.py.mako b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/script.py.mako new file mode 100644 index 000000000..11016301e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/generic/script.py.mako @@ -0,0 +1,28 @@ +"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +# revision identifiers, used by Alembic. 
+revision: str = ${repr(up_revision)} +down_revision: Union[str, Sequence[str], None] = ${repr(down_revision)} +branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)} +depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)} + + +def upgrade() -> None: + """Upgrade schema.""" + ${upgrades if upgrades else "pass"} + + +def downgrade() -> None: + """Downgrade schema.""" + ${downgrades if downgrades else "pass"} diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/README b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/README new file mode 100644 index 000000000..f046ec914 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/README @@ -0,0 +1,12 @@ +Rudimentary multi-database configuration. + +Multi-DB isn't vastly different from generic. The primary difference is that it +will run the migrations N times (depending on how many databases you have +configured), providing one engine name and associated context for each run. + +That engine name will then allow the migration to restrict what runs within it to +just the appropriate migrations for that engine. You can see this behavior within +the mako template. + +In the provided configuration, you'll need to have `databases` provided in +alembic's config, and an `sqlalchemy.url` provided for each engine name. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/alembic.ini.mako b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/alembic.ini.mako new file mode 100644 index 000000000..2680ef68a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/alembic.ini.mako @@ -0,0 +1,149 @@ +# a multi-database configuration. + +[alembic] +# path to migration scripts. +# this is typically a path given in POSIX (e.g. forward slashes) +# format, relative to the token %(here)s which refers to the location of this +# ini file +script_location = ${script_location} + +# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s +# Uncomment the line below if you want the files to be prepended with date and time +# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file +# for all available tokens +# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s + +# sys.path path, will be prepended to sys.path if present. +# defaults to the current working directory. for multiple paths, the path separator +# is defined by "path_separator" below. +prepend_sys_path = . + +# timezone to use when rendering the date within the migration file +# as well as the filename. +# If specified, requires the python>=3.9 or backports.zoneinfo library and tzdata library. +# Any required deps can be installed by adding `alembic[tz]` to the pip requirements +# string value is passed to ZoneInfo() +# leave blank for localtime +# timezone = + +# max length of characters to apply to the "slug" field +# truncate_slug_length = 40 + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + +# set to 'true' to allow .pyc and .pyo files without +# a source .py file to be detected as revisions in the +# versions/ directory +# sourceless = false + +# version location specification; This defaults +# to /versions.
When using multiple version +# directories, initial revisions must be specified with --version-path. +# The path separator used here should be the separator specified by "path_separator" +# below. +# version_locations = %(here)s/bar:%(here)s/bat:%(here)s/alembic/versions + +# path_separator; This indicates what character is used to split lists of file +# paths, including version_locations and prepend_sys_path within configparser +# files such as alembic.ini. +# The default rendered in new alembic.ini files is "os", which uses os.pathsep +# to provide os-dependent path splitting. +# +# Note that in order to support legacy alembic.ini files, this default does NOT +# take place if path_separator is not present in alembic.ini. If this +# option is omitted entirely, fallback logic is as follows: +# +# 1. Parsing of the version_locations option falls back to using the legacy +# "version_path_separator" key, which if absent then falls back to the legacy +# behavior of splitting on spaces and/or commas. +# 2. Parsing of the prepend_sys_path option falls back to the legacy +# behavior of splitting on spaces, commas, or colons. +# +# Valid values for path_separator are: +# +# path_separator = : +# path_separator = ; +# path_separator = space +# path_separator = newline +# +# Use os.pathsep. Default configuration used for new projects. +path_separator = os + +# set to 'true' to search source files recursively +# in each "version_locations" directory +# new in Alembic version 1.10 +# recursive_version_locations = false + +# the output encoding used when revision files +# are written from script.py.mako +# output_encoding = utf-8 + +# for multiple database configuration, new named sections are added +# which each include a distinct ``sqlalchemy.url`` entry. A custom value +# ``databases`` is added which indicates a listing of the per-database sections. +# The ``databases`` entry as well as the URLs present in the ``[engine1]`` +# and ``[engine2]`` sections continue to be consumed by the user-maintained env.py +# script only. + +databases = engine1, engine2 + +[engine1] +sqlalchemy.url = driver://user:pass@localhost/dbname + +[engine2] +sqlalchemy.url = driver://user:pass@localhost/dbname2 + +[post_write_hooks] +# post_write_hooks defines scripts or Python functions that are run +# on newly generated revision scripts. See the documentation for further +# detail and examples + +# format using "black" - use the console_scripts runner, against the "black" entrypoint +# hooks = black +# black.type = console_scripts +# black.entrypoint = black +# black.options = -l 79 REVISION_SCRIPT_FILENAME + +# lint with attempts to fix using "ruff" - use the exec runner, execute a binary +# hooks = ruff +# ruff.type = exec +# ruff.executable = %(here)s/.venv/bin/ruff +# ruff.options = check --fix REVISION_SCRIPT_FILENAME + +# Logging configuration. This is also consumed by the user-maintained +# env.py script only. 
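Connecting this configuration back to the runner in `write_hooks.py` earlier in the diff: each uncommented hook section is flattened into a dict and dispatched by `_run_hooks`/`_invoke`, with `REVISION_SCRIPT_FILENAME` substituted (or prepended) by `_parse_cmdline_options`. A sketch with an invented revision filename:

```python
from alembic.script.write_hooks import _parse_cmdline_options

# what the commented-out "black" example above parses to at runtime
print(_parse_cmdline_options(
    "-l 79 REVISION_SCRIPT_FILENAME", "versions/aaaa1111_init.py"
))
# ['-l', '79', 'versions/aaaa1111_init.py']

# the token is prepended when absent from the options string
print(_parse_cmdline_options("--quiet", "versions/aaaa1111_init.py"))
# ['versions/aaaa1111_init.py', '--quiet']
```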
+[loggers] +keys = root,sqlalchemy,alembic + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARNING +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARNING +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/env.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/env.py new file mode 100644 index 000000000..e937b64ee --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/env.py @@ -0,0 +1,140 @@ +import logging +from logging.config import fileConfig +import re + +from sqlalchemy import engine_from_config +from sqlalchemy import pool + +from alembic import context + +USE_TWOPHASE = False + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +if config.config_file_name is not None: + fileConfig(config.config_file_name) +logger = logging.getLogger("alembic.env") + +# gather section names referring to different +# databases. These are named "engine1", "engine2" +# in the sample .ini file. +db_names = config.get_main_option("databases", "") + +# add your model's MetaData objects here +# for 'autogenerate' support. These must be set +# up to hold just those tables targeting a +# particular database. table.tometadata() may be +# helpful here in case a "copy" of +# a MetaData is needed. +# from myapp import mymodel +# target_metadata = { +# 'engine1':mymodel.metadata1, +# 'engine2':mymodel.metadata2 +# } +target_metadata = {} + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def run_migrations_offline() -> None: + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. + + """ + # for the --sql use case, run migrations for each URL into + # individual files. + + engines = {} + for name in re.split(r",\s*", db_names): + engines[name] = rec = {} + rec["url"] = context.config.get_section_option(name, "sqlalchemy.url") + + for name, rec in engines.items(): + logger.info("Migrating database %s" % name) + file_ = "%s.sql" % name + logger.info("Writing output to %s" % file_) + with open(file_, "w") as buffer: + context.configure( + url=rec["url"], + output_buffer=buffer, + target_metadata=target_metadata.get(name), + literal_binds=True, + dialect_opts={"paramstyle": "named"}, + ) + with context.begin_transaction(): + context.run_migrations(engine_name=name) + + +def run_migrations_online() -> None: + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. + + """ + + # for the direct-to-DB use case, start a transaction on all + # engines, then run all migrations, then commit all transactions. 
+ + engines = {} + for name in re.split(r",\s*", db_names): + engines[name] = rec = {} + rec["engine"] = engine_from_config( + context.config.get_section(name, {}), + prefix="sqlalchemy.", + poolclass=pool.NullPool, + ) + + for name, rec in engines.items(): + engine = rec["engine"] + rec["connection"] = conn = engine.connect() + + if USE_TWOPHASE: + rec["transaction"] = conn.begin_twophase() + else: + rec["transaction"] = conn.begin() + + try: + for name, rec in engines.items(): + logger.info("Migrating database %s" % name) + context.configure( + connection=rec["connection"], + upgrade_token="%s_upgrades" % name, + downgrade_token="%s_downgrades" % name, + target_metadata=target_metadata.get(name), + ) + context.run_migrations(engine_name=name) + + if USE_TWOPHASE: + for rec in engines.values(): + rec["transaction"].prepare() + + for rec in engines.values(): + rec["transaction"].commit() + except: + for rec in engines.values(): + rec["transaction"].rollback() + raise + finally: + for rec in engines.values(): + rec["connection"].close() + + +if context.is_offline_mode(): + run_migrations_offline() +else: + run_migrations_online() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/script.py.mako b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/script.py.mako new file mode 100644 index 000000000..8e667d84c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/multidb/script.py.mako @@ -0,0 +1,51 @@ +<%! +import re + +%>"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +# revision identifiers, used by Alembic. +revision: str = ${repr(up_revision)} +down_revision: Union[str, Sequence[str], None] = ${repr(down_revision)} +branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)} +depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)} + + +def upgrade(engine_name: str) -> None: + """Upgrade schema.""" + globals()["upgrade_%s" % engine_name]() + + +def downgrade(engine_name: str) -> None: + """Downgrade schema.""" + globals()["downgrade_%s" % engine_name]() + +<% + db_names = config.get_main_option("databases") +%> + +## generate an "upgrade_() / downgrade_()" function +## for each database name in the ini file. + +% for db_name in re.split(r',\s*', db_names): + +def upgrade_${db_name}() -> None: + """Upgrade ${db_name} schema.""" + ${context.get("%s_upgrades" % db_name, "pass")} + + +def downgrade_${db_name}() -> None: + """Downgrade ${db_name} schema.""" + ${context.get("%s_downgrades" % db_name, "pass")} + +% endfor diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/README b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/README new file mode 100644 index 000000000..fdacc05f6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/README @@ -0,0 +1 @@ +pyproject configuration, based on the generic configuration. 
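For reference, the multidb `script.py.mako` above renders one `upgrade_<name>()` / `downgrade_<name>()` pair per entry in the ini file's `databases` list, and the top-level `upgrade(engine_name)` dispatches to the matching function via `globals()`. A minimal sketch of what a rendered revision module looks like, assuming the sample configuration's `engine1` and `engine2` (the bodies are placeholders, not output from a real autogenerate run):

```python
# Sketch of a multidb revision module as rendered from script.py.mako,
# assuming "databases = engine1, engine2"; bodies are placeholders only.

def upgrade(engine_name: str) -> None:
    """Upgrade schema; env.py calls this once per configured engine."""
    globals()["upgrade_%s" % engine_name]()


def downgrade(engine_name: str) -> None:
    """Downgrade schema; env.py calls this once per configured engine."""
    globals()["downgrade_%s" % engine_name]()


def upgrade_engine1() -> None:
    pass  # the collected "engine1_upgrades" directives render here


def downgrade_engine1() -> None:
    pass  # the collected "engine1_downgrades" directives render here


def upgrade_engine2() -> None:
    pass  # the collected "engine2_upgrades" directives render here


def downgrade_engine2() -> None:
    pass  # the collected "engine2_downgrades" directives render here
```

The `upgrade_token="%s_upgrades" % name` / `downgrade_token` options seen in the multidb `env.py` are what route each engine's autogenerated directives into the corresponding per-engine function.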
\ No newline at end of file diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/alembic.ini.mako b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/alembic.ini.mako new file mode 100644 index 000000000..3d10f0e46 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/alembic.ini.mako @@ -0,0 +1,44 @@ +# A generic, single database configuration. + +[alembic] + +# database URL. This is consumed by the user-maintained env.py script only. +# other means of configuring database URLs may be customized within the env.py +# file. +sqlalchemy.url = driver://user:pass@localhost/dbname + + +# Logging configuration +[loggers] +keys = root,sqlalchemy,alembic + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARNING +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARNING +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/env.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/env.py new file mode 100644 index 000000000..36112a3c6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/env.py @@ -0,0 +1,78 @@ +from logging.config import fileConfig + +from sqlalchemy import engine_from_config +from sqlalchemy import pool + +from alembic import context + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +if config.config_file_name is not None: + fileConfig(config.config_file_name) + +# add your model's MetaData object here +# for 'autogenerate' support +# from myapp import mymodel +# target_metadata = mymodel.Base.metadata +target_metadata = None + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def run_migrations_offline() -> None: + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. + + """ + url = config.get_main_option("sqlalchemy.url") + context.configure( + url=url, + target_metadata=target_metadata, + literal_binds=True, + dialect_opts={"paramstyle": "named"}, + ) + + with context.begin_transaction(): + context.run_migrations() + + +def run_migrations_online() -> None: + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. 
+ + """ + connectable = engine_from_config( + config.get_section(config.config_ini_section, {}), + prefix="sqlalchemy.", + poolclass=pool.NullPool, + ) + + with connectable.connect() as connection: + context.configure( + connection=connection, target_metadata=target_metadata + ) + + with context.begin_transaction(): + context.run_migrations() + + +if context.is_offline_mode(): + run_migrations_offline() +else: + run_migrations_online() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/pyproject.toml.mako b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/pyproject.toml.mako new file mode 100644 index 000000000..cfc56cebc --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/pyproject.toml.mako @@ -0,0 +1,76 @@ +[tool.alembic] + +# path to migration scripts. +# this is typically a path given in POSIX (e.g. forward slashes) +# format, relative to the token %(here)s which refers to the location of this +# ini file +script_location = "${script_location}" + +# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s +# Uncomment the line below if you want the files to be prepended with date and time +# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file +# for all available tokens +# file_template = "%%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s" + +# additional paths to be prepended to sys.path. defaults to the current working directory. +prepend_sys_path = [ + "." +] + +# timezone to use when rendering the date within the migration file +# as well as the filename. +# If specified, requires the python>=3.9 or backports.zoneinfo library and tzdata library. +# Any required deps can installed by adding `alembic[tz]` to the pip requirements +# string value is passed to ZoneInfo() +# leave blank for localtime +# timezone = + +# max length of characters to apply to the "slug" field +# truncate_slug_length = 40 + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + +# set to 'true' to allow .pyc and .pyo files without +# a source .py file to be detected as revisions in the +# versions/ directory +# sourceless = false + +# version location specification; This defaults +# to /versions. When using multiple version +# directories, initial revisions must be specified with --version-path. +# version_locations = [ +# "%(here)s/alembic/versions", +# "%(here)s/foo/bar" +# ] + + +# set to 'true' to search source files recursively +# in each "version_locations" directory +# new in Alembic version 1.10 +# recursive_version_locations = false + +# the output encoding used when revision files +# are written from script.py.mako +# output_encoding = "utf-8" + +# This section defines scripts or Python functions that are run +# on newly generated revision scripts. 
See the documentation for further +# detail and examples +# [[tool.alembic.post_write_hooks]] +# format using "black" - use the console_scripts runner, +# against the "black" entrypoint +# name = "black" +# type = "console_scripts" +# entrypoint = "black" +# options = "-l 79 REVISION_SCRIPT_FILENAME" +# +# [[tool.alembic.post_write_hooks]] +# lint with attempts to fix using "ruff" - use the exec runner, +# execute a binary +# name = "ruff" +# type = "exec" +# executable = "%(here)s/.venv/bin/ruff" +# options = "check --fix REVISION_SCRIPT_FILENAME" + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/script.py.mako b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/script.py.mako new file mode 100644 index 000000000..11016301e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/templates/pyproject/script.py.mako @@ -0,0 +1,28 @@ +"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +# revision identifiers, used by Alembic. +revision: str = ${repr(up_revision)} +down_revision: Union[str, Sequence[str], None] = ${repr(down_revision)} +branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)} +depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)} + + +def upgrade() -> None: + """Upgrade schema.""" + ${upgrades if upgrades else "pass"} + + +def downgrade() -> None: + """Downgrade schema.""" + ${downgrades if downgrades else "pass"} diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/__init__.py new file mode 100644 index 000000000..316ea91b9 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/__init__.py @@ -0,0 +1,31 @@ +from sqlalchemy.testing import config +from sqlalchemy.testing import emits_warning +from sqlalchemy.testing import engines +from sqlalchemy.testing import exclusions +from sqlalchemy.testing import mock +from sqlalchemy.testing import provide_metadata +from sqlalchemy.testing import skip_if +from sqlalchemy.testing import uses_deprecated +from sqlalchemy.testing.config import combinations +from sqlalchemy.testing.config import fixture +from sqlalchemy.testing.config import requirements as requires +from sqlalchemy.testing.config import variation + +from .assertions import assert_raises +from .assertions import assert_raises_message +from .assertions import emits_python_deprecation_warning +from .assertions import eq_ +from .assertions import eq_ignore_whitespace +from .assertions import expect_deprecated +from .assertions import expect_raises +from .assertions import expect_raises_message +from .assertions import expect_sqlalchemy_deprecated +from .assertions import expect_sqlalchemy_deprecated_20 +from .assertions import expect_warnings +from .assertions import is_ +from .assertions import is_false +from .assertions import is_not_ +from .assertions import is_true +from .assertions import ne_ +from .fixtures import TestBase +from .util import resolve_lambda diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/assertions.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/assertions.py new file mode 100644 index 000000000..898fbd167 --- /dev/null +++ 
b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/assertions.py @@ -0,0 +1,179 @@ +from __future__ import annotations + +import contextlib +import re +import sys +from typing import Any +from typing import Dict + +from sqlalchemy import exc as sa_exc +from sqlalchemy.engine import default +from sqlalchemy.engine import URL +from sqlalchemy.testing.assertions import _expect_warnings +from sqlalchemy.testing.assertions import eq_ # noqa +from sqlalchemy.testing.assertions import is_ # noqa +from sqlalchemy.testing.assertions import is_false # noqa +from sqlalchemy.testing.assertions import is_not_ # noqa +from sqlalchemy.testing.assertions import is_true # noqa +from sqlalchemy.testing.assertions import ne_ # noqa +from sqlalchemy.util import decorator + + +def _assert_proper_exception_context(exception): + """assert that any exception we're catching does not have a __context__ + without a __cause__, and that __suppress_context__ is never set. + + Python 3 will report nested as exceptions as "during the handling of + error X, error Y occurred". That's not what we want to do. we want + these exceptions in a cause chain. + + """ + + if ( + exception.__context__ is not exception.__cause__ + and not exception.__suppress_context__ + ): + assert False, ( + "Exception %r was correctly raised but did not set a cause, " + "within context %r as its cause." + % (exception, exception.__context__) + ) + + +def assert_raises(except_cls, callable_, *args, **kw): + return _assert_raises(except_cls, callable_, args, kw, check_context=True) + + +def assert_raises_context_ok(except_cls, callable_, *args, **kw): + return _assert_raises(except_cls, callable_, args, kw) + + +def assert_raises_message(except_cls, msg, callable_, *args, **kwargs): + return _assert_raises( + except_cls, callable_, args, kwargs, msg=msg, check_context=True + ) + + +def assert_raises_message_context_ok( + except_cls, msg, callable_, *args, **kwargs +): + return _assert_raises(except_cls, callable_, args, kwargs, msg=msg) + + +def _assert_raises( + except_cls, callable_, args, kwargs, msg=None, check_context=False +): + with _expect_raises(except_cls, msg, check_context) as ec: + callable_(*args, **kwargs) + return ec.error + + +class _ErrorContainer: + error: Any = None + + +@contextlib.contextmanager +def _expect_raises( + except_cls, msg=None, check_context=False, text_exact=False +): + ec = _ErrorContainer() + if check_context: + are_we_already_in_a_traceback = sys.exc_info()[0] + try: + yield ec + success = False + except except_cls as err: + ec.error = err + success = True + if msg is not None: + if text_exact: + assert str(err) == msg, f"{msg} != {err}" + else: + assert re.search(msg, str(err), re.UNICODE), f"{msg} !~ {err}" + if check_context and not are_we_already_in_a_traceback: + _assert_proper_exception_context(err) + print(str(err).encode("utf-8")) + + # assert outside the block so it works for AssertionError too ! 
+ assert success, "Callable did not raise an exception" + + +def expect_raises(except_cls, check_context=True): + return _expect_raises(except_cls, check_context=check_context) + + +def expect_raises_message( + except_cls, msg, check_context=True, text_exact=False +): + return _expect_raises( + except_cls, msg=msg, check_context=check_context, text_exact=text_exact + ) + + +def eq_ignore_whitespace(a, b, msg=None): + a = re.sub(r"^\s+?|\n", "", a) + a = re.sub(r" {2,}", " ", a) + b = re.sub(r"^\s+?|\n", "", b) + b = re.sub(r" {2,}", " ", b) + + assert a == b, msg or "%r != %r" % (a, b) + + +_dialect_mods: Dict[Any, Any] = {} + + +def _get_dialect(name): + if name is None or name == "default": + return default.DefaultDialect() + else: + d = URL.create(name).get_dialect()() + + if name == "postgresql": + d.implicit_returning = True + elif name == "mssql": + d.legacy_schema_aliasing = False + return d + + +def expect_warnings(*messages, **kw): + """Context manager which expects one or more warnings. + + With no arguments, squelches all SAWarnings emitted via + sqlalchemy.util.warn and sqlalchemy.util.warn_limited. Otherwise + pass string expressions that will match selected warnings via regex; + all non-matching warnings are sent through. + + The expect version **asserts** that the warnings were in fact seen. + + Note that the test suite sets SAWarning warnings to raise exceptions. + + """ + return _expect_warnings(Warning, messages, **kw) + + +def emits_python_deprecation_warning(*messages): + """Decorator form of expect_warnings(). + + Note that emits_warning does **not** assert that the warnings + were in fact seen. + + """ + + @decorator + def decorate(fn, *args, **kw): + with _expect_warnings(DeprecationWarning, assert_=False, *messages): + return fn(*args, **kw) + + return decorate + + +def expect_deprecated(*messages, **kw): + return _expect_warnings(DeprecationWarning, messages, **kw) + + +def expect_sqlalchemy_deprecated(*messages, **kw): + return _expect_warnings(sa_exc.SADeprecationWarning, messages, **kw) + + +def expect_sqlalchemy_deprecated_20(*messages, **kw): + return _expect_warnings(sa_exc.RemovedIn20Warning, messages, **kw) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/env.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/env.py new file mode 100644 index 000000000..72a5e4245 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/env.py @@ -0,0 +1,557 @@ +import importlib.machinery +import os +from pathlib import Path +import shutil +import textwrap + +from sqlalchemy.testing import config +from sqlalchemy.testing import provision + +from . import util as testing_util +from .. import command +from .. import script +from .. import util +from ..script import Script +from ..script import ScriptDirectory + + +def _get_staging_directory(): + if provision.FOLLOWER_IDENT: + return f"scratch_{provision.FOLLOWER_IDENT}" + else: + return "scratch" + + +def staging_env(create=True, template="generic", sourceless=False): + cfg = _testing_config() + if create: + path = _join_path(_get_staging_directory(), "scripts") + assert not os.path.exists(path), ( + "staging directory %s already exists; poor cleanup?" % path + ) + + command.init(cfg, path, template=template) + if sourceless: + try: + # do an import so that a .pyc/.pyo is generated. + util.load_python_file(path, "env.py") + except AttributeError: + # we don't have the migration context set up yet + # so running the .env py throws this exception. 
+ # theoretically we could be using py_compiler here to + # generate .pyc/.pyo without importing but not really + # worth it. + pass + assert sourceless in ( + "pep3147_envonly", + "simple", + "pep3147_everything", + ), sourceless + make_sourceless( + _join_path(path, "env.py"), + "pep3147" if "pep3147" in sourceless else "simple", + ) + + sc = script.ScriptDirectory.from_config(cfg) + return sc + + +def clear_staging_env(): + from sqlalchemy.testing import engines + + engines.testing_reaper.close_all() + shutil.rmtree(_get_staging_directory(), True) + + +def script_file_fixture(txt): + dir_ = _join_path(_get_staging_directory(), "scripts") + path = _join_path(dir_, "script.py.mako") + with open(path, "w") as f: + f.write(txt) + + +def env_file_fixture(txt): + dir_ = _join_path(_get_staging_directory(), "scripts") + txt = ( + """ +from alembic import context + +config = context.config +""" + + txt + ) + + path = _join_path(dir_, "env.py") + pyc_path = util.pyc_file_from_path(path) + if pyc_path: + os.unlink(pyc_path) + + with open(path, "w") as f: + f.write(txt) + + +def _sqlite_file_db(tempname="foo.db", future=False, scope=None, **options): + dir_ = _join_path(_get_staging_directory(), "scripts") + url = "sqlite:///%s/%s" % (dir_, tempname) + if scope: + options["scope"] = scope + return testing_util.testing_engine(url=url, future=future, options=options) + + +def _sqlite_testing_config(sourceless=False, future=False): + dir_ = _join_path(_get_staging_directory(), "scripts") + url = f"sqlite:///{dir_}/foo.db" + + sqlalchemy_future = future or ("future" in config.db.__class__.__module__) + + return _write_config_file( + f""" +[alembic] +script_location = {dir_} +sqlalchemy.url = {url} +sourceless = {"true" if sourceless else "false"} +{"sqlalchemy.future = true" if sqlalchemy_future else ""} + +[loggers] +keys = root,sqlalchemy + +[handlers] +keys = console + +[logger_root] +level = WARNING +handlers = console +qualname = + +[logger_sqlalchemy] +level = DEBUG +handlers = +qualname = sqlalchemy.engine + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatters] +keys = generic + +[formatter_generic] +format = %%(levelname)-5.5s [%%(name)s] %%(message)s +datefmt = %%H:%%M:%%S + """ + ) + + +def _multi_dir_testing_config(sourceless=False, extra_version_location=""): + dir_ = _join_path(_get_staging_directory(), "scripts") + sqlalchemy_future = "future" in config.db.__class__.__module__ + + url = "sqlite:///%s/foo.db" % dir_ + + return _write_config_file( + f""" +[alembic] +script_location = {dir_} +sqlalchemy.url = {url} +sqlalchemy.future = {"true" if sqlalchemy_future else "false"} +sourceless = {"true" if sourceless else "false"} +path_separator = space +version_locations = %(here)s/model1/ %(here)s/model2/ %(here)s/model3/ \ +{extra_version_location} + +[loggers] +keys = root + +[handlers] +keys = console + +[logger_root] +level = WARNING +handlers = console +qualname = + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatters] +keys = generic + +[formatter_generic] +format = %%(levelname)-5.5s [%%(name)s] %%(message)s +datefmt = %%H:%%M:%%S + """ + ) + + +def _no_sql_pyproject_config(dialect="postgresql", directives=""): + """use a postgresql url with no host so that + connections guaranteed to fail""" + dir_ = _join_path(_get_staging_directory(), "scripts") + + return _write_toml_config( + f""" +[tool.alembic] +script_location ="{dir_}" +{textwrap.dedent(directives)} + 
+ """, + f""" +[alembic] +sqlalchemy.url = {dialect}:// + +[loggers] +keys = root + +[handlers] +keys = console + +[logger_root] +level = WARNING +handlers = console +qualname = + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatters] +keys = generic + +[formatter_generic] +format = %%(levelname)-5.5s [%%(name)s] %%(message)s +datefmt = %%H:%%M:%%S + +""", + ) + + +def _no_sql_testing_config(dialect="postgresql", directives=""): + """use a postgresql url with no host so that + connections guaranteed to fail""" + dir_ = _join_path(_get_staging_directory(), "scripts") + return _write_config_file( + f""" +[alembic] +script_location ={dir_} +sqlalchemy.url = {dialect}:// +{directives} + +[loggers] +keys = root + +[handlers] +keys = console + +[logger_root] +level = WARNING +handlers = console +qualname = + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatters] +keys = generic + +[formatter_generic] +format = %%(levelname)-5.5s [%%(name)s] %%(message)s +datefmt = %%H:%%M:%%S + +""" + ) + + +def _write_toml_config(tomltext, initext): + cfg = _write_config_file(initext) + with open(cfg.toml_file_name, "w") as f: + f.write(tomltext) + return cfg + + +def _write_config_file(text): + cfg = _testing_config() + with open(cfg.config_file_name, "w") as f: + f.write(text) + return cfg + + +def _testing_config(): + from alembic.config import Config + + if not os.access(_get_staging_directory(), os.F_OK): + os.mkdir(_get_staging_directory()) + return Config( + _join_path(_get_staging_directory(), "test_alembic.ini"), + _join_path(_get_staging_directory(), "pyproject.toml"), + ) + + +def write_script( + scriptdir, rev_id, content, encoding="ascii", sourceless=False +): + old = scriptdir.revision_map.get_revision(rev_id) + path = old.path + + content = textwrap.dedent(content) + if encoding: + content = content.encode(encoding) + with open(path, "wb") as fp: + fp.write(content) + pyc_path = util.pyc_file_from_path(path) + if pyc_path: + os.unlink(pyc_path) + script = Script._from_path(scriptdir, path) + old = scriptdir.revision_map.get_revision(script.revision) + if old.down_revision != script.down_revision: + raise Exception("Can't change down_revision on a refresh operation.") + scriptdir.revision_map.add_revision(script, _replace=True) + + if sourceless: + make_sourceless( + path, "pep3147" if sourceless == "pep3147_everything" else "simple" + ) + + +def make_sourceless(path, style): + import py_compile + + py_compile.compile(path) + + if style == "simple": + pyc_path = util.pyc_file_from_path(path) + suffix = importlib.machinery.BYTECODE_SUFFIXES[0] + filepath, ext = os.path.splitext(path) + simple_pyc_path = filepath + suffix + shutil.move(pyc_path, simple_pyc_path) + pyc_path = simple_pyc_path + else: + assert style in ("pep3147", "simple") + pyc_path = util.pyc_file_from_path(path) + + assert os.access(pyc_path, os.F_OK) + + os.unlink(path) + + +def three_rev_fixture(cfg): + a = util.rev_id() + b = util.rev_id() + c = util.rev_id() + + script = ScriptDirectory.from_config(cfg) + script.generate_revision(a, "revision a", refresh=True, head="base") + write_script( + script, + a, + f"""\ +"Rev A" +revision = '{a}' +down_revision = None + +from alembic import op + + +def upgrade(): + op.execute("CREATE STEP 1") + + +def downgrade(): + op.execute("DROP STEP 1") + +""", + ) + + script.generate_revision(b, "revision b", refresh=True, head=a) + write_script( + script, + b, + f"""# 
coding: utf-8 +"Rev B, méil, %3" +revision = '{b}' +down_revision = '{a}' + +from alembic import op + + +def upgrade(): + op.execute("CREATE STEP 2") + + +def downgrade(): + op.execute("DROP STEP 2") + +""", + encoding="utf-8", + ) + + script.generate_revision(c, "revision c", refresh=True, head=b) + write_script( + script, + c, + f"""\ +"Rev C" +revision = '{c}' +down_revision = '{b}' + +from alembic import op + + +def upgrade(): + op.execute("CREATE STEP 3") + + +def downgrade(): + op.execute("DROP STEP 3") + +""", + ) + return a, b, c + + +def multi_heads_fixture(cfg, a, b, c): + """Create a multiple head fixture from the three-revs fixture""" + + # a->b->c + # -> d -> e + # -> f + d = util.rev_id() + e = util.rev_id() + f = util.rev_id() + + script = ScriptDirectory.from_config(cfg) + script.generate_revision( + d, "revision d from b", head=b, splice=True, refresh=True + ) + write_script( + script, + d, + f"""\ +"Rev D" +revision = '{d}' +down_revision = '{b}' + +from alembic import op + + +def upgrade(): + op.execute("CREATE STEP 4") + + +def downgrade(): + op.execute("DROP STEP 4") + +""", + ) + + script.generate_revision( + e, "revision e from d", head=d, splice=True, refresh=True + ) + write_script( + script, + e, + f"""\ +"Rev E" +revision = '{e}' +down_revision = '{d}' + +from alembic import op + + +def upgrade(): + op.execute("CREATE STEP 5") + + +def downgrade(): + op.execute("DROP STEP 5") + +""", + ) + + script.generate_revision( + f, "revision f from b", head=b, splice=True, refresh=True + ) + write_script( + script, + f, + f"""\ +"Rev F" +revision = '{f}' +down_revision = '{b}' + +from alembic import op + + +def upgrade(): + op.execute("CREATE STEP 6") + + +def downgrade(): + op.execute("DROP STEP 6") + +""", + ) + + return d, e, f + + +def _multidb_testing_config(engines): + """alembic.ini fixture to work exactly with the 'multidb' template""" + + dir_ = _join_path(_get_staging_directory(), "scripts") + + sqlalchemy_future = "future" in config.db.__class__.__module__ + + databases = ", ".join(engines.keys()) + engines = "\n\n".join( + f"[{key}]\nsqlalchemy.url = {value.url}" + for key, value in engines.items() + ) + + return _write_config_file( + f""" +[alembic] +script_location = {dir_} +sourceless = false +sqlalchemy.future = {"true" if sqlalchemy_future else "false"} +databases = {databases} + +{engines} +[loggers] +keys = root + +[handlers] +keys = console + +[logger_root] +level = WARNING +handlers = console +qualname = + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatters] +keys = generic + +[formatter_generic] +format = %%(levelname)-5.5s [%%(name)s] %%(message)s +datefmt = %%H:%%M:%%S + """ + ) + + +def _join_path(base: str, *more: str): + return str(Path(base).joinpath(*more).as_posix()) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/fixtures.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/fixtures.py new file mode 100644 index 000000000..61bcd7e93 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/fixtures.py @@ -0,0 +1,333 @@ +from __future__ import annotations + +import configparser +from contextlib import contextmanager +import io +import os +import re +import shutil +from typing import Any +from typing import Dict + +from sqlalchemy import Column +from sqlalchemy import create_mock_engine +from sqlalchemy import inspect +from sqlalchemy import MetaData +from sqlalchemy import String +from sqlalchemy 
import Table +from sqlalchemy import testing +from sqlalchemy import text +from sqlalchemy.testing import config +from sqlalchemy.testing import mock +from sqlalchemy.testing.assertions import eq_ +from sqlalchemy.testing.fixtures import FutureEngineMixin +from sqlalchemy.testing.fixtures import TablesTest as SQLAlchemyTablesTest +from sqlalchemy.testing.fixtures import TestBase as SQLAlchemyTestBase + +import alembic +from .assertions import _get_dialect +from .env import _get_staging_directory +from ..environment import EnvironmentContext +from ..migration import MigrationContext +from ..operations import Operations +from ..util import sqla_compat +from ..util.sqla_compat import sqla_2 + +testing_config = configparser.ConfigParser() +testing_config.read(["test.cfg"]) + + +class TestBase(SQLAlchemyTestBase): + is_sqlalchemy_future = sqla_2 + + @testing.fixture() + def clear_staging_dir(self): + yield + location = _get_staging_directory() + for filename in os.listdir(location): + file_path = os.path.join(location, filename) + if os.path.isfile(file_path) or os.path.islink(file_path): + os.unlink(file_path) + elif os.path.isdir(file_path): + shutil.rmtree(file_path) + + @contextmanager + def pushd(self, dirname): + current_dir = os.getcwd() + try: + os.chdir(dirname) + yield + finally: + os.chdir(current_dir) + + @testing.fixture() + def pop_alembic_config_env(self): + yield + os.environ.pop("ALEMBIC_CONFIG", None) + + @testing.fixture() + def ops_context(self, migration_context): + with migration_context.begin_transaction(_per_migration=True): + yield Operations(migration_context) + + @testing.fixture + def migration_context(self, connection): + return MigrationContext.configure( + connection, opts=dict(transaction_per_migration=True) + ) + + @testing.fixture + def as_sql_migration_context(self, connection): + return MigrationContext.configure( + connection, opts=dict(transaction_per_migration=True, as_sql=True) + ) + + @testing.fixture + def connection(self): + with config.db.connect() as conn: + yield conn + + +class TablesTest(TestBase, SQLAlchemyTablesTest): + pass + + +FutureEngineMixin.is_sqlalchemy_future = True + + +def capture_db(dialect="postgresql://"): + buf = [] + + def dump(sql, *multiparams, **params): + buf.append(str(sql.compile(dialect=engine.dialect))) + + engine = create_mock_engine(dialect, dump) + return engine, buf + + +_engs: Dict[Any, Any] = {} + + +@contextmanager +def capture_context_buffer(**kw): + if kw.pop("bytes_io", False): + buf = io.BytesIO() + else: + buf = io.StringIO() + + kw.update({"dialect_name": "sqlite", "output_buffer": buf}) + conf = EnvironmentContext.configure + + def configure(*arg, **opt): + opt.update(**kw) + return conf(*arg, **opt) + + with mock.patch.object(EnvironmentContext, "configure", configure): + yield buf + + +@contextmanager +def capture_engine_context_buffer(**kw): + from .env import _sqlite_file_db + from sqlalchemy import event + + buf = io.StringIO() + + eng = _sqlite_file_db() + + conn = eng.connect() + + @event.listens_for(conn, "before_cursor_execute") + def bce(conn, cursor, statement, parameters, context, executemany): + buf.write(statement + "\n") + + kw.update({"connection": conn}) + conf = EnvironmentContext.configure + + def configure(*arg, **opt): + opt.update(**kw) + return conf(*arg, **opt) + + with mock.patch.object(EnvironmentContext, "configure", configure): + yield buf + + +def op_fixture( + dialect="default", + as_sql=False, + naming_convention=None, + literal_binds=False, + native_boolean=None, +): + opts = 
{} + if naming_convention: + opts["target_metadata"] = MetaData(naming_convention=naming_convention) + + class buffer_: + def __init__(self): + self.lines = [] + + def write(self, msg): + msg = msg.strip() + msg = re.sub(r"[\n\t]", "", msg) + if as_sql: + # the impl produces soft tabs, + # so search for blocks of 4 spaces + msg = re.sub(r" ", "", msg) + msg = re.sub(r"\;\n*$", "", msg) + + self.lines.append(msg) + + def flush(self): + pass + + buf = buffer_() + + class ctx(MigrationContext): + def get_buf(self): + return buf + + def clear_assertions(self): + buf.lines[:] = [] + + def assert_(self, *sql): + # TODO: make this more flexible about + # whitespace and such + eq_(buf.lines, [re.sub(r"[\n\t]", "", s) for s in sql]) + + def assert_contains(self, sql): + for stmt in buf.lines: + if re.sub(r"[\n\t]", "", sql) in stmt: + return + else: + assert False, "Could not locate fragment %r in %r" % ( + sql, + buf.lines, + ) + + if as_sql: + opts["as_sql"] = as_sql + if literal_binds: + opts["literal_binds"] = literal_binds + + ctx_dialect = _get_dialect(dialect) + if native_boolean is not None: + ctx_dialect.supports_native_boolean = native_boolean + # this is new as of SQLAlchemy 1.2.7 and is used by SQL Server, + # which breaks assumptions in the alembic test suite + ctx_dialect.non_native_boolean_check_constraint = True + if not as_sql: + + def execute(stmt, *multiparam, **param): + if isinstance(stmt, str): + stmt = text(stmt) + assert stmt.supports_execution + sql = str(stmt.compile(dialect=ctx_dialect)) + + buf.write(sql) + + connection = mock.Mock(dialect=ctx_dialect, execute=execute) + else: + opts["output_buffer"] = buf + connection = None + context = ctx(ctx_dialect, connection, opts) + + alembic.op._proxy = Operations(context) + return context + + +class AlterColRoundTripFixture: + # since these tests are about syntax, use more recent SQLAlchemy as some of + # the type / server default compare logic might not work on older + # SQLAlchemy versions as seems to be the case for SQLAlchemy 1.1 on Oracle + + __requires__ = ("alter_column",) + + def setUp(self): + self.conn = config.db.connect() + self.ctx = MigrationContext.configure(self.conn) + self.op = Operations(self.ctx) + self.metadata = MetaData() + + def _compare_type(self, t1, t2): + c1 = Column("q", t1) + c2 = Column("q", t2) + assert not self.ctx.impl.compare_type( + c1, c2 + ), "Type objects %r and %r didn't compare as equivalent" % (t1, t2) + + def _compare_server_default(self, t1, s1, t2, s2): + c1 = Column("q", t1, server_default=s1) + c2 = Column("q", t2, server_default=s2) + assert not self.ctx.impl.compare_server_default( + c1, c2, s2, s1 + ), "server defaults %r and %r didn't compare as equivalent" % (s1, s2) + + def tearDown(self): + sqla_compat._safe_rollback_connection_transaction(self.conn) + with self.conn.begin(): + self.metadata.drop_all(self.conn) + self.conn.close() + + def _run_alter_col(self, from_, to_, compare=None): + column = Column( + from_.get("name", "colname"), + from_.get("type", String(10)), + nullable=from_.get("nullable", True), + server_default=from_.get("server_default", None), + # comment=from_.get("comment", None) + ) + t = Table("x", self.metadata, column) + + with sqla_compat._ensure_scope_for_ddl(self.conn): + t.create(self.conn) + insp = inspect(self.conn) + old_col = insp.get_columns("x")[0] + + # TODO: conditional comment support + self.op.alter_column( + "x", + column.name, + existing_type=column.type, + existing_server_default=( + column.server_default + if column.server_default is 
not None + else False + ), + existing_nullable=True if column.nullable else False, + # existing_comment=column.comment, + nullable=to_.get("nullable", None), + # modify_comment=False, + server_default=to_.get("server_default", False), + new_column_name=to_.get("name", None), + type_=to_.get("type", None), + ) + + insp = inspect(self.conn) + new_col = insp.get_columns("x")[0] + + if compare is None: + compare = to_ + + eq_( + new_col["name"], + compare["name"] if "name" in compare else column.name, + ) + self._compare_type( + new_col["type"], compare.get("type", old_col["type"]) + ) + eq_(new_col["nullable"], compare.get("nullable", column.nullable)) + self._compare_server_default( + new_col["type"], + new_col.get("default", None), + compare.get("type", old_col["type"]), + ( + compare["server_default"].text + if "server_default" in compare + else ( + column.server_default.arg.text + if column.server_default is not None + else None + ) + ), + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/plugin/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/plugin/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/plugin/bootstrap.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/plugin/bootstrap.py new file mode 100644 index 000000000..d4a2c5521 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/plugin/bootstrap.py @@ -0,0 +1,4 @@ +""" +Bootstrapper for test framework plugins. + +""" diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/requirements.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/requirements.py new file mode 100644 index 000000000..8b63c16ba --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/requirements.py @@ -0,0 +1,176 @@ +from sqlalchemy.testing.requirements import Requirements + +from alembic import util +from ..testing import exclusions + + +class SuiteRequirements(Requirements): + @property + def schemas(self): + """Target database must support external schemas, and have one + named 'test_schema'.""" + + return exclusions.open() + + @property + def autocommit_isolation(self): + """target database should support 'AUTOCOMMIT' isolation level""" + + return exclusions.closed() + + @property + def materialized_views(self): + """needed for sqlalchemy compat""" + return exclusions.closed() + + @property + def unique_constraint_reflection(self): + def doesnt_have_check_uq_constraints(config): + from sqlalchemy import inspect + + insp = inspect(config.db) + try: + insp.get_unique_constraints("x") + except NotImplementedError: + return True + except TypeError: + return True + except Exception: + pass + return False + + return exclusions.skip_if(doesnt_have_check_uq_constraints) + + @property + def sequences(self): + """Target database must support SEQUENCEs.""" + + return exclusions.only_if( + [lambda config: config.db.dialect.supports_sequences], + "no sequence support", + ) + + @property + def foreign_key_match(self): + return exclusions.open() + + @property + def foreign_key_constraint_reflection(self): + return exclusions.open() + + @property + def check_constraints_w_enforcement(self): + """Target database must support check constraints + and also enforce them.""" + + return exclusions.open() + + @property + def reflects_pk_names(self): + return exclusions.closed() + + @property + def 
reflects_fk_options(self): + return exclusions.closed() + + @property + def sqlalchemy_1x(self): + return exclusions.skip_if( + lambda config: util.sqla_2, + "SQLAlchemy 1.x test", + ) + + @property + def sqlalchemy_2(self): + return exclusions.skip_if( + lambda config: not util.sqla_2, + "SQLAlchemy 2.x test", + ) + + @property + def asyncio(self): + def go(config): + try: + import greenlet # noqa: F401 + except ImportError: + return False + else: + return True + + return exclusions.only_if(go) + + @property + def comments(self): + return exclusions.only_if( + lambda config: config.db.dialect.supports_comments + ) + + @property + def alter_column(self): + return exclusions.open() + + @property + def computed_columns(self): + return exclusions.closed() + + @property + def autoincrement_on_composite_pk(self): + return exclusions.closed() + + @property + def fk_ondelete_is_reflected(self): + return exclusions.closed() + + @property + def fk_onupdate_is_reflected(self): + return exclusions.closed() + + @property + def fk_onupdate(self): + return exclusions.open() + + @property + def fk_ondelete_restrict(self): + return exclusions.open() + + @property + def fk_onupdate_restrict(self): + return exclusions.open() + + @property + def fk_ondelete_noaction(self): + return exclusions.open() + + @property + def fk_initially(self): + return exclusions.closed() + + @property + def fk_deferrable(self): + return exclusions.closed() + + @property + def fk_deferrable_is_reflected(self): + return exclusions.closed() + + @property + def fk_names(self): + return exclusions.open() + + @property + def integer_subtype_comparisons(self): + return exclusions.open() + + @property + def no_name_normalize(self): + return exclusions.skip_if( + lambda config: config.db.dialect.requires_name_normalize + ) + + @property + def identity_columns(self): + return exclusions.closed() + + @property + def identity_columns_alter(self): + return exclusions.closed() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/schemacompare.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/schemacompare.py new file mode 100644 index 000000000..204cc4ddc --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/schemacompare.py @@ -0,0 +1,169 @@ +from itertools import zip_longest + +from sqlalchemy import schema +from sqlalchemy.sql.elements import ClauseList + + +class CompareTable: + def __init__(self, table): + self.table = table + + def __eq__(self, other): + if self.table.name != other.name or self.table.schema != other.schema: + return False + + for c1, c2 in zip_longest(self.table.c, other.c): + if (c1 is None and c2 is not None) or ( + c2 is None and c1 is not None + ): + return False + if CompareColumn(c1) != c2: + return False + + return True + + # TODO: compare constraints, indexes + + def __ne__(self, other): + return not self.__eq__(other) + + +class CompareColumn: + def __init__(self, column): + self.column = column + + def __eq__(self, other): + return ( + self.column.name == other.name + and self.column.nullable == other.nullable + ) + # TODO: datatypes etc + + def __ne__(self, other): + return not self.__eq__(other) + + +class CompareIndex: + def __init__(self, index, name_only=False): + self.index = index + self.name_only = name_only + + def __eq__(self, other): + if self.name_only: + return self.index.name == other.name + else: + return ( + str(schema.CreateIndex(self.index)) + == str(schema.CreateIndex(other)) + and self.index.dialect_kwargs == 
other.dialect_kwargs
+            )
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __repr__(self):
+        expr = ClauseList(*self.index.expressions)
+        try:
+            expr_str = expr.compile().string
+        except Exception:
+            expr_str = str(expr)
+        return f"<CompareIndex {self.index.name}({expr_str})>"
+
+
+class CompareCheckConstraint:
+    def __init__(self, constraint):
+        self.constraint = constraint
+
+    def __eq__(self, other):
+        return (
+            isinstance(other, schema.CheckConstraint)
+            and self.constraint.name == other.name
+            and (str(self.constraint.sqltext) == str(other.sqltext))
+            and (other.table.name == self.constraint.table.name)
+            and other.table.schema == self.constraint.table.schema
+        )
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+
+class CompareForeignKey:
+    def __init__(self, constraint):
+        self.constraint = constraint
+
+    def __eq__(self, other):
+        r1 = (
+            isinstance(other, schema.ForeignKeyConstraint)
+            and self.constraint.name == other.name
+            and (other.table.name == self.constraint.table.name)
+            and other.table.schema == self.constraint.table.schema
+        )
+        if not r1:
+            return False
+        for c1, c2 in zip_longest(self.constraint.columns, other.columns):
+            if (c1 is None and c2 is not None) or (
+                c2 is None and c1 is not None
+            ):
+                return False
+            if CompareColumn(c1) != c2:
+                return False
+        return True
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+
+class ComparePrimaryKey:
+    def __init__(self, constraint):
+        self.constraint = constraint
+
+    def __eq__(self, other):
+        r1 = (
+            isinstance(other, schema.PrimaryKeyConstraint)
+            and self.constraint.name == other.name
+            and (other.table.name == self.constraint.table.name)
+            and other.table.schema == self.constraint.table.schema
+        )
+        if not r1:
+            return False
+
+        for c1, c2 in zip_longest(self.constraint.columns, other.columns):
+            if (c1 is None and c2 is not None) or (
+                c2 is None and c1 is not None
+            ):
+                return False
+            if CompareColumn(c1) != c2:
+                return False
+
+        return True
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+
+class CompareUniqueConstraint:
+    def __init__(self, constraint):
+        self.constraint = constraint
+
+    def __eq__(self, other):
+        r1 = (
+            isinstance(other, schema.UniqueConstraint)
+            and self.constraint.name == other.name
+            and (other.table.name == self.constraint.table.name)
+            and other.table.schema == self.constraint.table.schema
+        )
+        if not r1:
+            return False
+
+        for c1, c2 in zip_longest(self.constraint.columns, other.columns):
+            if (c1 is None and c2 is not None) or (
+                c2 is None and c1 is not None
+            ):
+                return False
+            if CompareColumn(c1) != c2:
+                return False
+
+        return True
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/__init__.py
new file mode 100644
index 000000000..3da498d28
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/__init__.py
@@ -0,0 +1,7 @@
+from .test_autogen_comments import *  # noqa
+from .test_autogen_computed import *  # noqa
+from .test_autogen_diffs import *  # noqa
+from .test_autogen_fks import *  # noqa
+from .test_autogen_identity import *  # noqa
+from .test_environment import *  # noqa
+from .test_op import *  # noqa
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/_autogen_fixtures.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/_autogen_fixtures.py
new file mode 100644
index
000000000..ed4acb26a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/_autogen_fixtures.py @@ -0,0 +1,448 @@ +from __future__ import annotations + +from typing import Any +from typing import Dict +from typing import Set + +from sqlalchemy import CHAR +from sqlalchemy import CheckConstraint +from sqlalchemy import Column +from sqlalchemy import event +from sqlalchemy import ForeignKey +from sqlalchemy import Index +from sqlalchemy import inspect +from sqlalchemy import Integer +from sqlalchemy import MetaData +from sqlalchemy import Numeric +from sqlalchemy import PrimaryKeyConstraint +from sqlalchemy import String +from sqlalchemy import Table +from sqlalchemy import Text +from sqlalchemy import text +from sqlalchemy import UniqueConstraint + +from ... import autogenerate +from ... import util +from ...autogenerate import api +from ...ddl.base import _fk_spec +from ...migration import MigrationContext +from ...operations import ops +from ...testing import config +from ...testing import eq_ +from ...testing.env import clear_staging_env +from ...testing.env import staging_env + +names_in_this_test: Set[Any] = set() + + +@event.listens_for(Table, "after_parent_attach") +def new_table(table, parent): + names_in_this_test.add(table.name) + + +def _default_include_object(obj, name, type_, reflected, compare_to): + if type_ == "table": + return name in names_in_this_test + else: + return True + + +_default_object_filters: Any = _default_include_object + +_default_name_filters: Any = None + + +class ModelOne: + __requires__ = ("unique_constraint_reflection",) + + schema: Any = None + + @classmethod + def _get_db_schema(cls): + schema = cls.schema + + m = MetaData(schema=schema) + + Table( + "user", + m, + Column("id", Integer, primary_key=True), + Column("name", String(50)), + Column("a1", Text), + Column("pw", String(50)), + Index("pw_idx", "pw"), + ) + + Table( + "address", + m, + Column("id", Integer, primary_key=True), + Column("email_address", String(100), nullable=False), + ) + + Table( + "order", + m, + Column("order_id", Integer, primary_key=True), + Column( + "amount", + Numeric(8, 2), + nullable=False, + server_default=text("0"), + ), + CheckConstraint("amount >= 0", name="ck_order_amount"), + ) + + Table( + "extra", + m, + Column("x", CHAR), + Column("uid", Integer, ForeignKey("user.id")), + ) + + return m + + @classmethod + def _get_model_schema(cls): + schema = cls.schema + + m = MetaData(schema=schema) + + Table( + "user", + m, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", Text, server_default="x"), + ) + + Table( + "address", + m, + Column("id", Integer, primary_key=True), + Column("email_address", String(100), nullable=False), + Column("street", String(50)), + UniqueConstraint("email_address", name="uq_email"), + ) + + Table( + "order", + m, + Column("order_id", Integer, primary_key=True), + Column( + "amount", + Numeric(10, 2), + nullable=True, + server_default=text("0"), + ), + Column("user_id", Integer, ForeignKey("user.id")), + CheckConstraint("amount > -1", name="ck_order_amount"), + ) + + Table( + "item", + m, + Column("id", Integer, primary_key=True), + Column("description", String(100)), + Column("order_id", Integer, ForeignKey("order.order_id")), + CheckConstraint("len(description) > 5"), + ) + return m + + +class NamingConvModel: + __requires__ = ("unique_constraint_reflection",) + configure_opts = {"conv_all_constraint_names": True} + naming_convention = { + "ix": 
"ix_%(column_0_label)s", + "uq": "uq_%(table_name)s_%(constraint_name)s", + "ck": "ck_%(table_name)s_%(constraint_name)s", + "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s", + "pk": "pk_%(table_name)s", + } + + @classmethod + def _get_db_schema(cls): + # database side - assume all constraints have a name that + # we would assume here is a "db generated" name. need to make + # sure these all render with op.f(). + m = MetaData() + Table( + "x1", + m, + Column("q", Integer), + Index("db_x1_index_q", "q"), + PrimaryKeyConstraint("q", name="db_x1_primary_q"), + ) + Table( + "x2", + m, + Column("q", Integer), + Column("p", ForeignKey("x1.q", name="db_x2_foreign_q")), + CheckConstraint("q > 5", name="db_x2_check_q"), + ) + Table( + "x3", + m, + Column("q", Integer), + Column("r", Integer), + Column("s", Integer), + UniqueConstraint("q", name="db_x3_unique_q"), + ) + Table( + "x4", + m, + Column("q", Integer), + PrimaryKeyConstraint("q", name="db_x4_primary_q"), + ) + Table( + "x5", + m, + Column("q", Integer), + Column("p", ForeignKey("x4.q", name="db_x5_foreign_q")), + Column("r", Integer), + Column("s", Integer), + PrimaryKeyConstraint("q", name="db_x5_primary_q"), + UniqueConstraint("r", name="db_x5_unique_r"), + CheckConstraint("s > 5", name="db_x5_check_s"), + ) + # SQLite and it's "no names needed" thing. bleh. + # we can't have a name for these so you'll see "None" for the name. + Table( + "unnamed_sqlite", + m, + Column("q", Integer), + Column("r", Integer), + PrimaryKeyConstraint("q"), + UniqueConstraint("r"), + ) + return m + + @classmethod + def _get_model_schema(cls): + from sqlalchemy.sql.naming import conv + + m = MetaData(naming_convention=cls.naming_convention) + Table( + "x1", m, Column("q", Integer, primary_key=True), Index(None, "q") + ) + Table( + "x2", + m, + Column("q", Integer), + Column("p", ForeignKey("x1.q")), + CheckConstraint("q > 5", name="token_x2check1"), + ) + Table( + "x3", + m, + Column("q", Integer), + Column("r", Integer), + Column("s", Integer), + UniqueConstraint("r", name="token_x3r"), + UniqueConstraint("s", name=conv("userdef_x3_unique_s")), + ) + Table( + "x4", + m, + Column("q", Integer, primary_key=True), + Index("userdef_x4_idx_q", "q"), + ) + Table( + "x6", + m, + Column("q", Integer, primary_key=True), + Column("p", ForeignKey("x4.q")), + Column("r", Integer), + Column("s", Integer), + UniqueConstraint("r", name="token_x6r"), + CheckConstraint("s > 5", "token_x6check1"), + CheckConstraint("s < 20", conv("userdef_x6_check_s")), + ) + return m + + +class _ComparesFKs: + def _assert_fk_diff( + self, + diff, + type_, + source_table, + source_columns, + target_table, + target_columns, + name=None, + conditional_name=None, + source_schema=None, + onupdate=None, + ondelete=None, + initially=None, + deferrable=None, + ): + # the public API for ForeignKeyConstraint was not very rich + # in 0.7, 0.8, so here we use the well-known but slightly + # private API to get at its elements + ( + fk_source_schema, + fk_source_table, + fk_source_columns, + fk_target_schema, + fk_target_table, + fk_target_columns, + fk_onupdate, + fk_ondelete, + fk_deferrable, + fk_initially, + ) = _fk_spec(diff[1]) + + eq_(diff[0], type_) + eq_(fk_source_table, source_table) + eq_(fk_source_columns, source_columns) + eq_(fk_target_table, target_table) + eq_(fk_source_schema, source_schema) + eq_(fk_onupdate, onupdate) + eq_(fk_ondelete, ondelete) + eq_(fk_initially, initially) + eq_(fk_deferrable, deferrable) + + eq_([elem.column.name for elem in diff[1].elements], 
target_columns) + if conditional_name is not None: + if conditional_name == "servergenerated": + fks = inspect(self.bind).get_foreign_keys(source_table) + server_fk_name = fks[0]["name"] + eq_(diff[1].name, server_fk_name) + else: + eq_(diff[1].name, conditional_name) + else: + eq_(diff[1].name, name) + + +class AutogenTest(_ComparesFKs): + def _flatten_diffs(self, diffs): + for d in diffs: + if isinstance(d, list): + yield from self._flatten_diffs(d) + else: + yield d + + @classmethod + def _get_bind(cls): + return config.db + + configure_opts: Dict[Any, Any] = {} + + @classmethod + def setup_class(cls): + staging_env() + cls.bind = cls._get_bind() + cls.m1 = cls._get_db_schema() + cls.m1.create_all(cls.bind) + cls.m2 = cls._get_model_schema() + + @classmethod + def teardown_class(cls): + cls.m1.drop_all(cls.bind) + clear_staging_env() + + def setUp(self): + self.conn = conn = self.bind.connect() + ctx_opts = { + "compare_type": True, + "compare_server_default": True, + "target_metadata": self.m2, + "upgrade_token": "upgrades", + "downgrade_token": "downgrades", + "alembic_module_prefix": "op.", + "sqlalchemy_module_prefix": "sa.", + "include_object": _default_object_filters, + "include_name": _default_name_filters, + } + if self.configure_opts: + ctx_opts.update(self.configure_opts) + self.context = context = MigrationContext.configure( + connection=conn, opts=ctx_opts + ) + + self.autogen_context = api.AutogenContext(context, self.m2) + + def tearDown(self): + self.conn.close() + + def _update_context( + self, object_filters=None, name_filters=None, include_schemas=None + ): + if include_schemas is not None: + self.autogen_context.opts["include_schemas"] = include_schemas + if object_filters is not None: + self.autogen_context._object_filters = [object_filters] + if name_filters is not None: + self.autogen_context._name_filters = [name_filters] + return self.autogen_context + + +class AutogenFixtureTest(_ComparesFKs): + def _fixture( + self, + m1, + m2, + include_schemas=False, + opts=None, + object_filters=_default_object_filters, + name_filters=_default_name_filters, + return_ops=False, + max_identifier_length=None, + ): + if max_identifier_length: + dialect = self.bind.dialect + existing_length = dialect.max_identifier_length + dialect.max_identifier_length = ( + dialect._user_defined_max_identifier_length + ) = max_identifier_length + try: + self._alembic_metadata, model_metadata = m1, m2 + for m in util.to_list(self._alembic_metadata): + m.create_all(self.bind) + + with self.bind.connect() as conn: + ctx_opts = { + "compare_type": True, + "compare_server_default": True, + "target_metadata": model_metadata, + "upgrade_token": "upgrades", + "downgrade_token": "downgrades", + "alembic_module_prefix": "op.", + "sqlalchemy_module_prefix": "sa.", + "include_object": object_filters, + "include_name": name_filters, + "include_schemas": include_schemas, + } + if opts: + ctx_opts.update(opts) + self.context = context = MigrationContext.configure( + connection=conn, opts=ctx_opts + ) + + autogen_context = api.AutogenContext(context, model_metadata) + uo = ops.UpgradeOps(ops=[]) + autogenerate._produce_net_changes(autogen_context, uo) + + if return_ops: + return uo + else: + return uo.as_diffs() + finally: + if max_identifier_length: + dialect = self.bind.dialect + dialect.max_identifier_length = ( + dialect._user_defined_max_identifier_length + ) = existing_length + + def setUp(self): + staging_env() + self.bind = config.db + + def tearDown(self): + if hasattr(self, "_alembic_metadata"): + 
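+            # _fixture() stores the metadata it create_all()'d on self, so
+            # teardown only drops tables when a fixture was actually built;
+            # the staging env is cleared unconditionally below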
for m in util.to_list(self._alembic_metadata): + m.drop_all(self.bind) + clear_staging_env() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_comments.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_comments.py new file mode 100644 index 000000000..7ef074f57 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_comments.py @@ -0,0 +1,242 @@ +from sqlalchemy import Column +from sqlalchemy import Float +from sqlalchemy import MetaData +from sqlalchemy import String +from sqlalchemy import Table + +from ._autogen_fixtures import AutogenFixtureTest +from ...testing import eq_ +from ...testing import mock +from ...testing import TestBase + + +class AutogenerateCommentsTest(AutogenFixtureTest, TestBase): + __backend__ = True + + __requires__ = ("comments",) + + def test_existing_table_comment_no_change(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("test", String(10), primary_key=True), + comment="this is some table", + ) + + Table( + "some_table", + m2, + Column("test", String(10), primary_key=True), + comment="this is some table", + ) + + diffs = self._fixture(m1, m2) + + eq_(diffs, []) + + def test_add_table_comment(self): + m1 = MetaData() + m2 = MetaData() + + Table("some_table", m1, Column("test", String(10), primary_key=True)) + + Table( + "some_table", + m2, + Column("test", String(10), primary_key=True), + comment="this is some table", + ) + + diffs = self._fixture(m1, m2) + + eq_(diffs[0][0], "add_table_comment") + eq_(diffs[0][1].comment, "this is some table") + eq_(diffs[0][2], None) + + def test_remove_table_comment(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("test", String(10), primary_key=True), + comment="this is some table", + ) + + Table("some_table", m2, Column("test", String(10), primary_key=True)) + + diffs = self._fixture(m1, m2) + + eq_(diffs[0][0], "remove_table_comment") + eq_(diffs[0][1].comment, None) + + def test_alter_table_comment(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("test", String(10), primary_key=True), + comment="this is some table", + ) + + Table( + "some_table", + m2, + Column("test", String(10), primary_key=True), + comment="this is also some table", + ) + + diffs = self._fixture(m1, m2) + + eq_(diffs[0][0], "add_table_comment") + eq_(diffs[0][1].comment, "this is also some table") + eq_(diffs[0][2], "this is some table") + + def test_existing_column_comment_no_change(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("test", String(10), primary_key=True), + Column("amount", Float, comment="the amount"), + ) + + Table( + "some_table", + m2, + Column("test", String(10), primary_key=True), + Column("amount", Float, comment="the amount"), + ) + + diffs = self._fixture(m1, m2) + + eq_(diffs, []) + + def test_add_column_comment(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("test", String(10), primary_key=True), + Column("amount", Float), + ) + + Table( + "some_table", + m2, + Column("test", String(10), primary_key=True), + Column("amount", Float, comment="the amount"), + ) + + diffs = self._fixture(m1, m2) + eq_( + diffs, + [ + [ + ( + "modify_comment", + None, + "some_table", + "amount", + { + "existing_nullable": True, + "existing_type": mock.ANY, + "existing_server_default": False, + }, + None, + "the amount", + ) + ] + ], 
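+            # each "modify_comment" entry above is a 7-tuple: (op name,
+            # schema, table name, column name, dict of existing column
+            # state, old comment, new comment); here the old comment is None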
+ ) + + def test_remove_column_comment(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("test", String(10), primary_key=True), + Column("amount", Float, comment="the amount"), + ) + + Table( + "some_table", + m2, + Column("test", String(10), primary_key=True), + Column("amount", Float), + ) + + diffs = self._fixture(m1, m2) + eq_( + diffs, + [ + [ + ( + "modify_comment", + None, + "some_table", + "amount", + { + "existing_nullable": True, + "existing_type": mock.ANY, + "existing_server_default": False, + }, + "the amount", + None, + ) + ] + ], + ) + + def test_alter_column_comment(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("test", String(10), primary_key=True), + Column("amount", Float, comment="the amount"), + ) + + Table( + "some_table", + m2, + Column("test", String(10), primary_key=True), + Column("amount", Float, comment="the adjusted amount"), + ) + + diffs = self._fixture(m1, m2) + + eq_( + diffs, + [ + [ + ( + "modify_comment", + None, + "some_table", + "amount", + { + "existing_nullable": True, + "existing_type": mock.ANY, + "existing_server_default": False, + }, + "the amount", + "the adjusted amount", + ) + ] + ], + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_computed.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_computed.py new file mode 100644 index 000000000..fe7eb7a56 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_computed.py @@ -0,0 +1,144 @@ +import sqlalchemy as sa +from sqlalchemy import Column +from sqlalchemy import Integer +from sqlalchemy import MetaData +from sqlalchemy import Table + +from ._autogen_fixtures import AutogenFixtureTest +from ... 
import testing +from ...testing import eq_ +from ...testing import is_ +from ...testing import is_true +from ...testing import mock +from ...testing import TestBase + + +class AutogenerateComputedTest(AutogenFixtureTest, TestBase): + __requires__ = ("computed_columns",) + __backend__ = True + + def test_add_computed_column(self): + m1 = MetaData() + m2 = MetaData() + + Table("user", m1, Column("id", Integer, primary_key=True)) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("foo", Integer, sa.Computed("5")), + ) + + diffs = self._fixture(m1, m2) + + eq_(diffs[0][0], "add_column") + eq_(diffs[0][2], "user") + eq_(diffs[0][3].name, "foo") + c = diffs[0][3].computed + + is_true(isinstance(c, sa.Computed)) + is_(c.persisted, None) + eq_(str(c.sqltext), "5") + + def test_remove_computed_column(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("foo", Integer, sa.Computed("5")), + ) + + Table("user", m2, Column("id", Integer, primary_key=True)) + + diffs = self._fixture(m1, m2) + + eq_(diffs[0][0], "remove_column") + eq_(diffs[0][2], "user") + c = diffs[0][3] + eq_(c.name, "foo") + + is_true(isinstance(c.computed, sa.Computed)) + is_true(isinstance(c.server_default, sa.Computed)) + + @testing.combinations( + lambda: (None, sa.Computed("bar*5")), + (lambda: (sa.Computed("bar*5"), None)), + lambda: ( + sa.Computed("bar*5"), + sa.Computed("bar * 42", persisted=True), + ), + lambda: (sa.Computed("bar*5"), sa.Computed("bar * 42")), + ) + def test_cant_change_computed_warning(self, test_case): + arg_before, arg_after = testing.resolve_lambda(test_case, **locals()) + m1 = MetaData() + m2 = MetaData() + + arg_before = [] if arg_before is None else [arg_before] + arg_after = [] if arg_after is None else [arg_after] + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("bar", Integer), + Column("foo", Integer, *arg_before), + ) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("bar", Integer), + Column("foo", Integer, *arg_after), + ) + + with mock.patch("alembic.util.warn") as mock_warn: + diffs = self._fixture(m1, m2) + + eq_( + mock_warn.mock_calls, + [mock.call("Computed default on user.foo cannot be modified")], + ) + + eq_(list(diffs), []) + + @testing.combinations( + lambda: (None, None), + lambda: (sa.Computed("5"), sa.Computed("5")), + lambda: (sa.Computed("bar*5"), sa.Computed("bar*5")), + lambda: (sa.Computed("bar*5"), sa.Computed("bar * \r\n\t5")), + ) + def test_computed_unchanged(self, test_case): + arg_before, arg_after = testing.resolve_lambda(test_case, **locals()) + m1 = MetaData() + m2 = MetaData() + + arg_before = [] if arg_before is None else [arg_before] + arg_after = [] if arg_after is None else [arg_after] + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("bar", Integer), + Column("foo", Integer, *arg_before), + ) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("bar", Integer), + Column("foo", Integer, *arg_after), + ) + + with mock.patch("alembic.util.warn") as mock_warn: + diffs = self._fixture(m1, m2) + eq_(mock_warn.mock_calls, []) + + eq_(list(diffs), []) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_diffs.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_diffs.py new file mode 100644 index 000000000..75bcd37ae --- /dev/null +++ 
b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_diffs.py @@ -0,0 +1,273 @@ +from sqlalchemy import BigInteger +from sqlalchemy import Column +from sqlalchemy import Integer +from sqlalchemy import MetaData +from sqlalchemy import Table +from sqlalchemy.testing import in_ + +from ._autogen_fixtures import AutogenFixtureTest +from ... import testing +from ...testing import config +from ...testing import eq_ +from ...testing import is_ +from ...testing import TestBase + + +class AlterColumnTest(AutogenFixtureTest, TestBase): + __backend__ = True + + @testing.combinations((True,), (False,)) + @config.requirements.comments + def test_all_existings_filled(self, pk): + m1 = MetaData() + m2 = MetaData() + + Table("a", m1, Column("x", Integer, primary_key=pk)) + Table("a", m2, Column("x", Integer, comment="x", primary_key=pk)) + + alter_col = self._assert_alter_col(m1, m2, pk) + eq_(alter_col.modify_comment, "x") + + @testing.combinations((True,), (False,)) + @config.requirements.comments + def test_all_existings_filled_in_notnull(self, pk): + m1 = MetaData() + m2 = MetaData() + + Table("a", m1, Column("x", Integer, nullable=False, primary_key=pk)) + Table( + "a", + m2, + Column("x", Integer, nullable=False, comment="x", primary_key=pk), + ) + + self._assert_alter_col(m1, m2, pk, nullable=False) + + @testing.combinations((True,), (False,)) + @config.requirements.comments + def test_all_existings_filled_in_comment(self, pk): + m1 = MetaData() + m2 = MetaData() + + Table("a", m1, Column("x", Integer, comment="old", primary_key=pk)) + Table("a", m2, Column("x", Integer, comment="new", primary_key=pk)) + + alter_col = self._assert_alter_col(m1, m2, pk) + eq_(alter_col.existing_comment, "old") + + @testing.combinations((True,), (False,)) + @config.requirements.comments + def test_all_existings_filled_in_server_default(self, pk): + m1 = MetaData() + m2 = MetaData() + + Table( + "a", m1, Column("x", Integer, server_default="5", primary_key=pk) + ) + Table( + "a", + m2, + Column( + "x", Integer, server_default="5", comment="new", primary_key=pk + ), + ) + + alter_col = self._assert_alter_col(m1, m2, pk) + in_("5", alter_col.existing_server_default.arg.text) + + def _assert_alter_col(self, m1, m2, pk, nullable=None): + ops = self._fixture(m1, m2, return_ops=True) + modify_table = ops.ops[-1] + alter_col = modify_table.ops[0] + + if nullable is None: + eq_(alter_col.existing_nullable, not pk) + else: + eq_(alter_col.existing_nullable, nullable) + assert alter_col.existing_type._compare_type_affinity(Integer()) + return alter_col + + +class AutoincrementTest(AutogenFixtureTest, TestBase): + __backend__ = True + __requires__ = ("integer_subtype_comparisons",) + + def test_alter_column_autoincrement_none(self): + m1 = MetaData() + m2 = MetaData() + + Table("a", m1, Column("x", Integer, nullable=False)) + Table("a", m2, Column("x", Integer, nullable=True)) + + ops = self._fixture(m1, m2, return_ops=True) + assert "autoincrement" not in ops.ops[0].ops[0].kw + + def test_alter_column_autoincrement_pk_false(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "a", + m1, + Column("x", Integer, primary_key=True, autoincrement=False), + ) + Table( + "a", + m2, + Column("x", BigInteger, primary_key=True, autoincrement=False), + ) + + ops = self._fixture(m1, m2, return_ops=True) + is_(ops.ops[0].ops[0].kw["autoincrement"], False) + + def test_alter_column_autoincrement_pk_implicit_true(self): + m1 = MetaData() + m2 = MetaData() + + Table("a", m1, Column("x", Integer, 
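+                # no explicit autoincrement= here: a lone Integer primary
+                # key is implicitly autoincrementing in SQLAlchemy, so the
+                # assertion below expects autoincrement=True on the op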
primary_key=True)) + Table("a", m2, Column("x", BigInteger, primary_key=True)) + + ops = self._fixture(m1, m2, return_ops=True) + is_(ops.ops[0].ops[0].kw["autoincrement"], True) + + def test_alter_column_autoincrement_pk_explicit_true(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "a", m1, Column("x", Integer, primary_key=True, autoincrement=True) + ) + Table( + "a", + m2, + Column("x", BigInteger, primary_key=True, autoincrement=True), + ) + + ops = self._fixture(m1, m2, return_ops=True) + is_(ops.ops[0].ops[0].kw["autoincrement"], True) + + def test_alter_column_autoincrement_nonpk_false(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "a", + m1, + Column("id", Integer, primary_key=True), + Column("x", Integer, autoincrement=False), + ) + Table( + "a", + m2, + Column("id", Integer, primary_key=True), + Column("x", BigInteger, autoincrement=False), + ) + + ops = self._fixture(m1, m2, return_ops=True) + is_(ops.ops[0].ops[0].kw["autoincrement"], False) + + def test_alter_column_autoincrement_nonpk_implicit_false(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "a", + m1, + Column("id", Integer, primary_key=True), + Column("x", Integer), + ) + Table( + "a", + m2, + Column("id", Integer, primary_key=True), + Column("x", BigInteger), + ) + + ops = self._fixture(m1, m2, return_ops=True) + assert "autoincrement" not in ops.ops[0].ops[0].kw + + def test_alter_column_autoincrement_nonpk_explicit_true(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "a", + m1, + Column("id", Integer, primary_key=True, autoincrement=False), + Column("x", Integer, autoincrement=True), + ) + Table( + "a", + m2, + Column("id", Integer, primary_key=True, autoincrement=False), + Column("x", BigInteger, autoincrement=True), + ) + + ops = self._fixture(m1, m2, return_ops=True) + is_(ops.ops[0].ops[0].kw["autoincrement"], True) + + def test_alter_column_autoincrement_compositepk_false(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "a", + m1, + Column("id", Integer, primary_key=True), + Column("x", Integer, primary_key=True, autoincrement=False), + ) + Table( + "a", + m2, + Column("id", Integer, primary_key=True), + Column("x", BigInteger, primary_key=True, autoincrement=False), + ) + + ops = self._fixture(m1, m2, return_ops=True) + is_(ops.ops[0].ops[0].kw["autoincrement"], False) + + def test_alter_column_autoincrement_compositepk_implicit_false(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "a", + m1, + Column("id", Integer, primary_key=True), + Column("x", Integer, primary_key=True), + ) + Table( + "a", + m2, + Column("id", Integer, primary_key=True), + Column("x", BigInteger, primary_key=True), + ) + + ops = self._fixture(m1, m2, return_ops=True) + assert "autoincrement" not in ops.ops[0].ops[0].kw + + @config.requirements.autoincrement_on_composite_pk + def test_alter_column_autoincrement_compositepk_explicit_true(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "a", + m1, + Column("id", Integer, primary_key=True, autoincrement=False), + Column("x", Integer, primary_key=True, autoincrement=True), + # on SQLA 1.0 and earlier, this being present + # trips the "add KEY for the primary key" so that the + # AUTO_INCREMENT keyword is accepted by MySQL. SQLA 1.1 and + # greater the columns are just reorganized. 
+ mysql_engine="InnoDB", + ) + Table( + "a", + m2, + Column("id", Integer, primary_key=True, autoincrement=False), + Column("x", BigInteger, primary_key=True, autoincrement=True), + ) + + ops = self._fixture(m1, m2, return_ops=True) + is_(ops.ops[0].ops[0].kw["autoincrement"], True) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_fks.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_fks.py new file mode 100644 index 000000000..0240b98d3 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_fks.py @@ -0,0 +1,1190 @@ +from sqlalchemy import Column +from sqlalchemy import ForeignKeyConstraint +from sqlalchemy import Integer +from sqlalchemy import MetaData +from sqlalchemy import String +from sqlalchemy import Table + +from ._autogen_fixtures import AutogenFixtureTest +from ...testing import combinations +from ...testing import config +from ...testing import eq_ +from ...testing import mock +from ...testing import TestBase + + +class AutogenerateForeignKeysTest(AutogenFixtureTest, TestBase): + __backend__ = True + __requires__ = ("foreign_key_constraint_reflection",) + + def test_remove_fk(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("test", String(10), primary_key=True), + ) + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("test2", String(10)), + ForeignKeyConstraint(["test2"], ["some_table.test"]), + ) + + Table( + "some_table", + m2, + Column("test", String(10), primary_key=True), + ) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("test2", String(10)), + ) + + diffs = self._fixture(m1, m2) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["test2"], + "some_table", + ["test"], + conditional_name="servergenerated", + ) + + def test_add_fk(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("id", Integer, primary_key=True), + Column("test", String(10)), + ) + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("test2", String(10)), + ) + + Table( + "some_table", + m2, + Column("id", Integer, primary_key=True), + Column("test", String(10)), + ) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("test2", String(10)), + ForeignKeyConstraint(["test2"], ["some_table.test"]), + ) + + diffs = self._fixture(m1, m2) + + self._assert_fk_diff( + diffs[0], "add_fk", "user", ["test2"], "some_table", ["test"] + ) + + def test_no_change(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("id", Integer, primary_key=True), + Column("test", String(10)), + ) + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("test2", Integer), + ForeignKeyConstraint(["test2"], ["some_table.id"]), + ) + + Table( + "some_table", + m2, + Column("id", Integer, primary_key=True), + Column("test", String(10)), + ) + + Table( + "user", + m2, + Column("id", Integer, 
primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("test2", Integer), + ForeignKeyConstraint(["test2"], ["some_table.id"]), + ) + + diffs = self._fixture(m1, m2) + + eq_(diffs, []) + + def test_no_change_composite_fk(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("id_1", String(10), primary_key=True), + Column("id_2", String(10), primary_key=True), + ) + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("other_id_1", String(10)), + Column("other_id_2", String(10)), + ForeignKeyConstraint( + ["other_id_1", "other_id_2"], + ["some_table.id_1", "some_table.id_2"], + ), + ) + + Table( + "some_table", + m2, + Column("id_1", String(10), primary_key=True), + Column("id_2", String(10), primary_key=True), + ) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("other_id_1", String(10)), + Column("other_id_2", String(10)), + ForeignKeyConstraint( + ["other_id_1", "other_id_2"], + ["some_table.id_1", "some_table.id_2"], + ), + ) + + diffs = self._fixture(m1, m2) + + eq_(diffs, []) + + def test_casing_convention_changed_so_put_drops_first(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("test", String(10), primary_key=True), + ) + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("test2", String(10)), + ForeignKeyConstraint(["test2"], ["some_table.test"], name="MyFK"), + ) + + Table( + "some_table", + m2, + Column("test", String(10), primary_key=True), + ) + + # foreign key autogen currently does not take "name" into account, + # so change the def just for the purposes of testing the + # add/drop order for now. 
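+        # the expected ordering below is drop-then-add: emitting remove_fk
+        # first means the old constraint is gone before its replacement
+        # (which spans the same tables) is created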
+ Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("test2", String(10)), + ForeignKeyConstraint(["a1"], ["some_table.test"], name="myfk"), + ) + + diffs = self._fixture(m1, m2) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["test2"], + "some_table", + ["test"], + name="MyFK" if config.requirements.fk_names.enabled else None, + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["a1"], + "some_table", + ["test"], + name="myfk", + ) + + def test_add_composite_fk_with_name(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("id_1", String(10), primary_key=True), + Column("id_2", String(10), primary_key=True), + ) + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("other_id_1", String(10)), + Column("other_id_2", String(10)), + ) + + Table( + "some_table", + m2, + Column("id_1", String(10), primary_key=True), + Column("id_2", String(10), primary_key=True), + ) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("other_id_1", String(10)), + Column("other_id_2", String(10)), + ForeignKeyConstraint( + ["other_id_1", "other_id_2"], + ["some_table.id_1", "some_table.id_2"], + name="fk_test_name", + ), + ) + + diffs = self._fixture(m1, m2) + self._assert_fk_diff( + diffs[0], + "add_fk", + "user", + ["other_id_1", "other_id_2"], + "some_table", + ["id_1", "id_2"], + name="fk_test_name", + ) + + @config.requirements.no_name_normalize + def test_remove_composite_fk(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("id_1", String(10), primary_key=True), + Column("id_2", String(10), primary_key=True), + ) + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("other_id_1", String(10)), + Column("other_id_2", String(10)), + ForeignKeyConstraint( + ["other_id_1", "other_id_2"], + ["some_table.id_1", "some_table.id_2"], + name="fk_test_name", + ), + ) + + Table( + "some_table", + m2, + Column("id_1", String(10), primary_key=True), + Column("id_2", String(10), primary_key=True), + ) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("a1", String(10), server_default="x"), + Column("other_id_1", String(10)), + Column("other_id_2", String(10)), + ) + + diffs = self._fixture(m1, m2) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["other_id_1", "other_id_2"], + "some_table", + ["id_1", "id_2"], + conditional_name="fk_test_name", + ) + + def test_add_fk_colkeys(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("id_1", String(10), primary_key=True), + Column("id_2", String(10), primary_key=True), + ) + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("other_id_1", String(10)), + Column("other_id_2", String(10)), + ) + + Table( + "some_table", + m2, + Column("id_1", String(10), key="tid1", primary_key=True), + Column("id_2", String(10), key="tid2", primary_key=True), + ) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("other_id_1", String(10), key="oid1"), + 
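+            # the constraint below is declared against the Python-side
+            # .key names (oid1/oid2), but the expected diff still reports
+            # the real database column names other_id_1/other_id_2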
Column("other_id_2", String(10), key="oid2"), + ForeignKeyConstraint( + ["oid1", "oid2"], + ["some_table.tid1", "some_table.tid2"], + name="fk_test_name", + ), + ) + + diffs = self._fixture(m1, m2) + + self._assert_fk_diff( + diffs[0], + "add_fk", + "user", + ["other_id_1", "other_id_2"], + "some_table", + ["id_1", "id_2"], + name="fk_test_name", + ) + + def test_no_change_colkeys(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("id_1", String(10), primary_key=True), + Column("id_2", String(10), primary_key=True), + ) + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("other_id_1", String(10)), + Column("other_id_2", String(10)), + ForeignKeyConstraint( + ["other_id_1", "other_id_2"], + ["some_table.id_1", "some_table.id_2"], + ), + ) + + Table( + "some_table", + m2, + Column("id_1", String(10), key="tid1", primary_key=True), + Column("id_2", String(10), key="tid2", primary_key=True), + ) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("other_id_1", String(10), key="oid1"), + Column("other_id_2", String(10), key="oid2"), + ForeignKeyConstraint( + ["oid1", "oid2"], ["some_table.tid1", "some_table.tid2"] + ), + ) + + diffs = self._fixture(m1, m2) + + eq_(diffs, []) + + +class IncludeHooksTest(AutogenFixtureTest, TestBase): + __backend__ = True + __requires__ = ("fk_names",) + + @combinations(("object",), ("name",)) + @config.requirements.no_name_normalize + def test_remove_connection_fk(self, hook_type): + m1 = MetaData() + m2 = MetaData() + + ref = Table( + "ref", + m1, + Column("id", Integer, primary_key=True), + ) + t1 = Table( + "t", + m1, + Column("x", Integer), + Column("y", Integer), + ) + t1.append_constraint( + ForeignKeyConstraint([t1.c.x], [ref.c.id], name="fk1") + ) + t1.append_constraint( + ForeignKeyConstraint([t1.c.y], [ref.c.id], name="fk2") + ) + + ref = Table( + "ref", + m2, + Column("id", Integer, primary_key=True), + ) + Table( + "t", + m2, + Column("x", Integer), + Column("y", Integer), + ) + + if hook_type == "object": + + def include_object(object_, name, type_, reflected, compare_to): + return not ( + isinstance(object_, ForeignKeyConstraint) + and type_ == "foreign_key_constraint" + and reflected + and name == "fk1" + ) + + diffs = self._fixture(m1, m2, object_filters=include_object) + elif hook_type == "name": + + def include_name(name, type_, parent_names): + if name == "fk1": + if type_ == "index": # MariaDB thing + return True + eq_(type_, "foreign_key_constraint") + eq_( + parent_names, + { + "schema_name": None, + "table_name": "t", + "schema_qualified_table_name": "t", + }, + ) + return False + else: + return True + + diffs = self._fixture(m1, m2, name_filters=include_name) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "t", + ["y"], + "ref", + ["id"], + conditional_name="fk2", + ) + eq_(len(diffs), 1) + + def test_add_metadata_fk(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "ref", + m1, + Column("id", Integer, primary_key=True), + ) + Table( + "t", + m1, + Column("x", Integer), + Column("y", Integer), + ) + + ref = Table( + "ref", + m2, + Column("id", Integer, primary_key=True), + ) + t2 = Table( + "t", + m2, + Column("x", Integer), + Column("y", Integer), + ) + t2.append_constraint( + ForeignKeyConstraint([t2.c.x], [ref.c.id], name="fk1") + ) + t2.append_constraint( + ForeignKeyConstraint([t2.c.y], [ref.c.id], name="fk2") + ) + + def include_object(object_, name, type_, reflected, compare_to): + return not ( + isinstance(object_, 
ForeignKeyConstraint) + and type_ == "foreign_key_constraint" + and not reflected + and name == "fk1" + ) + + diffs = self._fixture(m1, m2, object_filters=include_object) + + self._assert_fk_diff( + diffs[0], "add_fk", "t", ["y"], "ref", ["id"], name="fk2" + ) + eq_(len(diffs), 1) + + @combinations(("object",), ("name",)) + @config.requirements.no_name_normalize + def test_change_fk(self, hook_type): + m1 = MetaData() + m2 = MetaData() + + r1a = Table( + "ref_a", + m1, + Column("a", Integer, primary_key=True), + ) + Table( + "ref_b", + m1, + Column("a", Integer, primary_key=True), + Column("b", Integer, primary_key=True), + ) + t1 = Table( + "t", + m1, + Column("x", Integer), + Column("y", Integer), + Column("z", Integer), + ) + t1.append_constraint( + ForeignKeyConstraint([t1.c.x], [r1a.c.a], name="fk1") + ) + t1.append_constraint( + ForeignKeyConstraint([t1.c.y], [r1a.c.a], name="fk2") + ) + + Table( + "ref_a", + m2, + Column("a", Integer, primary_key=True), + ) + r2b = Table( + "ref_b", + m2, + Column("a", Integer, primary_key=True), + Column("b", Integer, primary_key=True), + ) + t2 = Table( + "t", + m2, + Column("x", Integer), + Column("y", Integer), + Column("z", Integer), + ) + t2.append_constraint( + ForeignKeyConstraint( + [t2.c.x, t2.c.z], [r2b.c.a, r2b.c.b], name="fk1" + ) + ) + t2.append_constraint( + ForeignKeyConstraint( + [t2.c.y, t2.c.z], [r2b.c.a, r2b.c.b], name="fk2" + ) + ) + + if hook_type == "object": + + def include_object(object_, name, type_, reflected, compare_to): + return not ( + isinstance(object_, ForeignKeyConstraint) + and type_ == "foreign_key_constraint" + and name == "fk1" + ) + + diffs = self._fixture(m1, m2, object_filters=include_object) + elif hook_type == "name": + + def include_name(name, type_, parent_names): + if type_ == "index": + return True # MariaDB thing + + if name == "fk1": + eq_(type_, "foreign_key_constraint") + eq_( + parent_names, + { + "schema_name": None, + "table_name": "t", + "schema_qualified_table_name": "t", + }, + ) + return False + else: + return True + + diffs = self._fixture(m1, m2, name_filters=include_name) + + if hook_type == "object": + self._assert_fk_diff( + diffs[0], "remove_fk", "t", ["y"], "ref_a", ["a"], name="fk2" + ) + self._assert_fk_diff( + diffs[1], + "add_fk", + "t", + ["y", "z"], + "ref_b", + ["a", "b"], + name="fk2", + ) + eq_(len(diffs), 2) + elif hook_type == "name": + eq_( + {(d[0], d[1].name) for d in diffs}, + {("add_fk", "fk2"), ("add_fk", "fk1"), ("remove_fk", "fk2")}, + ) + + +class AutogenerateFKOptionsTest(AutogenFixtureTest, TestBase): + __backend__ = True + + def _fk_opts_fixture(self, old_opts, new_opts): + m1 = MetaData() + m2 = MetaData() + + Table( + "some_table", + m1, + Column("id", Integer, primary_key=True), + Column("test", String(10)), + ) + + Table( + "user", + m1, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("tid", Integer), + ForeignKeyConstraint(["tid"], ["some_table.id"], **old_opts), + ) + + Table( + "some_table", + m2, + Column("id", Integer, primary_key=True), + Column("test", String(10)), + ) + + Table( + "user", + m2, + Column("id", Integer, primary_key=True), + Column("name", String(50), nullable=False), + Column("tid", Integer), + ForeignKeyConstraint(["tid"], ["some_table.id"], **new_opts), + ) + + return self._fixture(m1, m2) + + @config.requirements.fk_ondelete_is_reflected + def test_add_ondelete(self): + diffs = self._fk_opts_fixture({}, {"ondelete": "cascade"}) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + 
"user", + ["tid"], + "some_table", + ["id"], + ondelete=None, + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + ondelete="cascade", + ) + + @config.requirements.fk_ondelete_is_reflected + def test_remove_ondelete(self): + diffs = self._fk_opts_fixture({"ondelete": "CASCADE"}, {}) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + ondelete="CASCADE", + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + ondelete=None, + ) + + def test_nochange_ondelete(self): + """test case sensitivity""" + diffs = self._fk_opts_fixture( + {"ondelete": "caSCAde"}, {"ondelete": "CasCade"} + ) + eq_(diffs, []) + + @config.requirements.fk_onupdate_is_reflected + def test_add_onupdate(self): + diffs = self._fk_opts_fixture({}, {"onupdate": "cascade"}) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + onupdate=None, + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + onupdate="cascade", + ) + + @config.requirements.fk_onupdate_is_reflected + def test_remove_onupdate(self): + diffs = self._fk_opts_fixture({"onupdate": "CASCADE"}, {}) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + onupdate="CASCADE", + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + onupdate=None, + ) + + @config.requirements.fk_onupdate + def test_nochange_onupdate(self): + """test case sensitivity""" + diffs = self._fk_opts_fixture( + {"onupdate": "caSCAde"}, {"onupdate": "CasCade"} + ) + eq_(diffs, []) + + @config.requirements.fk_ondelete_restrict + def test_nochange_ondelete_restrict(self): + """test the RESTRICT option which MySQL doesn't report on""" + + diffs = self._fk_opts_fixture( + {"ondelete": "restrict"}, {"ondelete": "restrict"} + ) + eq_(diffs, []) + + @config.requirements.fk_onupdate_restrict + def test_nochange_onupdate_restrict(self): + """test the RESTRICT option which MySQL doesn't report on""" + + diffs = self._fk_opts_fixture( + {"onupdate": "restrict"}, {"onupdate": "restrict"} + ) + eq_(diffs, []) + + @config.requirements.fk_ondelete_noaction + def test_nochange_ondelete_noaction(self): + """test the NO ACTION option which generally comes back as None""" + + diffs = self._fk_opts_fixture( + {"ondelete": "no action"}, {"ondelete": "no action"} + ) + eq_(diffs, []) + + @config.requirements.fk_onupdate + def test_nochange_onupdate_noaction(self): + """test the NO ACTION option which generally comes back as None""" + + diffs = self._fk_opts_fixture( + {"onupdate": "no action"}, {"onupdate": "no action"} + ) + eq_(diffs, []) + + @config.requirements.fk_ondelete_restrict + def test_change_ondelete_from_restrict(self): + """test the RESTRICT option which MySQL doesn't report on""" + + # note that this is impossible to detect if we change + # from RESTRICT to NO ACTION on MySQL. 
+ diffs = self._fk_opts_fixture( + {"ondelete": "restrict"}, {"ondelete": "cascade"} + ) + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + onupdate=None, + ondelete=mock.ANY, # MySQL reports None, PG reports RESTRICT + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + onupdate=None, + ondelete="cascade", + ) + + @config.requirements.fk_ondelete_restrict + def test_change_onupdate_from_restrict(self): + """test the RESTRICT option which MySQL doesn't report on""" + + # note that this is impossible to detect if we change + # from RESTRICT to NO ACTION on MySQL. + diffs = self._fk_opts_fixture( + {"onupdate": "restrict"}, {"onupdate": "cascade"} + ) + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + onupdate=mock.ANY, # MySQL reports None, PG reports RESTRICT + ondelete=None, + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + onupdate="cascade", + ondelete=None, + ) + + @config.requirements.fk_ondelete_is_reflected + @config.requirements.fk_onupdate_is_reflected + def test_ondelete_onupdate_combo(self): + diffs = self._fk_opts_fixture( + {"onupdate": "CASCADE", "ondelete": "SET NULL"}, + {"onupdate": "RESTRICT", "ondelete": "RESTRICT"}, + ) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + onupdate="CASCADE", + ondelete="SET NULL", + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + onupdate="RESTRICT", + ondelete="RESTRICT", + ) + + @config.requirements.fk_initially + def test_add_initially_deferred(self): + diffs = self._fk_opts_fixture({}, {"initially": "deferred"}) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + initially=None, + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + initially="deferred", + ) + + @config.requirements.fk_initially + def test_remove_initially_deferred(self): + diffs = self._fk_opts_fixture({"initially": "deferred"}, {}) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + initially="DEFERRED", + deferrable=True, + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + initially=None, + ) + + @config.requirements.fk_deferrable + @config.requirements.fk_initially + def test_add_initially_immediate_plus_deferrable(self): + diffs = self._fk_opts_fixture( + {}, {"initially": "immediate", "deferrable": True} + ) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + initially=None, + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + initially="immediate", + deferrable=True, + ) + + @config.requirements.fk_deferrable + @config.requirements.fk_initially + def test_remove_initially_immediate_plus_deferrable(self): + diffs = self._fk_opts_fixture( + {"initially": "immediate", "deferrable": True}, {} + ) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + initially=None, # immediate is the default + 
deferrable=True, + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + initially=None, + deferrable=None, + ) + + @config.requirements.fk_initially + @config.requirements.fk_deferrable + def test_add_initially_deferrable_nochange_one(self): + diffs = self._fk_opts_fixture( + {"deferrable": True, "initially": "immediate"}, + {"deferrable": True, "initially": "immediate"}, + ) + + eq_(diffs, []) + + @config.requirements.fk_initially + @config.requirements.fk_deferrable + def test_add_initially_deferrable_nochange_two(self): + diffs = self._fk_opts_fixture( + {"deferrable": True, "initially": "deferred"}, + {"deferrable": True, "initially": "deferred"}, + ) + + eq_(diffs, []) + + @config.requirements.fk_initially + @config.requirements.fk_deferrable + def test_add_initially_deferrable_nochange_three(self): + diffs = self._fk_opts_fixture( + {"deferrable": None, "initially": "deferred"}, + {"deferrable": None, "initially": "deferred"}, + ) + + eq_(diffs, []) + + @config.requirements.fk_deferrable + def test_add_deferrable(self): + diffs = self._fk_opts_fixture({}, {"deferrable": True}) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + deferrable=None, + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + deferrable=True, + ) + + @config.requirements.fk_deferrable_is_reflected + def test_remove_deferrable(self): + diffs = self._fk_opts_fixture({"deferrable": True}, {}) + + self._assert_fk_diff( + diffs[0], + "remove_fk", + "user", + ["tid"], + "some_table", + ["id"], + deferrable=True, + conditional_name="servergenerated", + ) + + self._assert_fk_diff( + diffs[1], + "add_fk", + "user", + ["tid"], + "some_table", + ["id"], + deferrable=None, + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_identity.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_identity.py new file mode 100644 index 000000000..3dee9fc99 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_autogen_identity.py @@ -0,0 +1,226 @@ +import sqlalchemy as sa +from sqlalchemy import Column +from sqlalchemy import Integer +from sqlalchemy import MetaData +from sqlalchemy import Table + +from alembic.util import sqla_compat +from ._autogen_fixtures import AutogenFixtureTest +from ... 
import testing +from ...testing import config +from ...testing import eq_ +from ...testing import is_true +from ...testing import TestBase + + +class AutogenerateIdentityTest(AutogenFixtureTest, TestBase): + __requires__ = ("identity_columns",) + __backend__ = True + + def test_add_identity_column(self): + m1 = MetaData() + m2 = MetaData() + + Table("user", m1, Column("other", sa.Text)) + + Table( + "user", + m2, + Column("other", sa.Text), + Column( + "id", + Integer, + sa.Identity(start=5, increment=7), + primary_key=True, + ), + ) + + diffs = self._fixture(m1, m2) + + eq_(diffs[0][0], "add_column") + eq_(diffs[0][2], "user") + eq_(diffs[0][3].name, "id") + i = diffs[0][3].identity + + is_true(isinstance(i, sa.Identity)) + eq_(i.start, 5) + eq_(i.increment, 7) + + def test_remove_identity_column(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "user", + m1, + Column( + "id", + Integer, + sa.Identity(start=2, increment=3), + primary_key=True, + ), + ) + + Table("user", m2) + + diffs = self._fixture(m1, m2) + + eq_(diffs[0][0], "remove_column") + eq_(diffs[0][2], "user") + c = diffs[0][3] + eq_(c.name, "id") + + is_true(isinstance(c.identity, sa.Identity)) + eq_(c.identity.start, 2) + eq_(c.identity.increment, 3) + + def test_no_change_identity_column(self): + m1 = MetaData() + m2 = MetaData() + + for m in (m1, m2): + id_ = sa.Identity(start=2) + Table("user", m, Column("id", Integer, id_)) + + diffs = self._fixture(m1, m2) + + eq_(diffs, []) + + def test_dialect_kwargs_changes(self): + m1 = MetaData() + m2 = MetaData() + + if sqla_compat.identity_has_dialect_kwargs: + args = {"oracle_on_null": True, "oracle_order": True} + else: + args = {"on_null": True, "order": True} + + Table("user", m1, Column("id", Integer, sa.Identity(start=2))) + id_ = sa.Identity(start=2, **args) + Table("user", m2, Column("id", Integer, id_)) + + diffs = self._fixture(m1, m2) + if config.db.name == "oracle": + is_true(len(diffs), 1) + eq_(diffs[0][0][0], "modify_default") + else: + eq_(diffs, []) + + @testing.combinations( + (None, dict(start=2)), + (dict(start=2), None), + (dict(start=2), dict(start=2, increment=7)), + (dict(always=False), dict(always=True)), + ( + dict(start=1, minvalue=0, maxvalue=100, cycle=True), + dict(start=1, minvalue=0, maxvalue=100, cycle=False), + ), + ( + dict(start=10, increment=3, maxvalue=9999), + dict(start=10, increment=1, maxvalue=3333), + ), + ) + @config.requirements.identity_columns_alter + def test_change_identity(self, before, after): + arg_before = (sa.Identity(**before),) if before else () + arg_after = (sa.Identity(**after),) if after else () + + m1 = MetaData() + m2 = MetaData() + + Table( + "user", + m1, + Column("id", Integer, *arg_before), + Column("other", sa.Text), + ) + + Table( + "user", + m2, + Column("id", Integer, *arg_after), + Column("other", sa.Text), + ) + + diffs = self._fixture(m1, m2) + + eq_(len(diffs[0]), 1) + diffs = diffs[0][0] + eq_(diffs[0], "modify_default") + eq_(diffs[2], "user") + eq_(diffs[3], "id") + old = diffs[5] + new = diffs[6] + + def check(kw, idt): + if kw: + is_true(isinstance(idt, sa.Identity)) + for k, v in kw.items(): + eq_(getattr(idt, k), v) + else: + is_true(idt in (None, False)) + + check(before, old) + check(after, new) + + def test_add_identity_to_column(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "user", + m1, + Column("id", Integer), + Column("other", sa.Text), + ) + + Table( + "user", + m2, + Column("id", Integer, sa.Identity(start=2, maxvalue=1000)), + Column("other", sa.Text), + ) + + diffs = 
self._fixture(m1, m2) + + eq_(len(diffs[0]), 1) + diffs = diffs[0][0] + eq_(diffs[0], "modify_default") + eq_(diffs[2], "user") + eq_(diffs[3], "id") + eq_(diffs[5], None) + added = diffs[6] + + is_true(isinstance(added, sa.Identity)) + eq_(added.start, 2) + eq_(added.maxvalue, 1000) + + def test_remove_identity_from_column(self): + m1 = MetaData() + m2 = MetaData() + + Table( + "user", + m1, + Column("id", Integer, sa.Identity(start=2, maxvalue=1000)), + Column("other", sa.Text), + ) + + Table( + "user", + m2, + Column("id", Integer), + Column("other", sa.Text), + ) + + diffs = self._fixture(m1, m2) + + eq_(len(diffs[0]), 1) + diffs = diffs[0][0] + eq_(diffs[0], "modify_default") + eq_(diffs[2], "user") + eq_(diffs[3], "id") + eq_(diffs[6], None) + removed = diffs[5] + + is_true(isinstance(removed, sa.Identity)) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_environment.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_environment.py new file mode 100644 index 000000000..df2d9afbd --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_environment.py @@ -0,0 +1,364 @@ +import io + +from ...migration import MigrationContext +from ...testing import assert_raises +from ...testing import config +from ...testing import eq_ +from ...testing import is_ +from ...testing import is_false +from ...testing import is_not_ +from ...testing import is_true +from ...testing import ne_ +from ...testing.fixtures import TestBase + + +class MigrationTransactionTest(TestBase): + __backend__ = True + + conn = None + + def _fixture(self, opts): + self.conn = conn = config.db.connect() + + if opts.get("as_sql", False): + self.context = MigrationContext.configure( + dialect=conn.dialect, opts=opts + ) + self.context.output_buffer = self.context.impl.output_buffer = ( + io.StringIO() + ) + else: + self.context = MigrationContext.configure( + connection=conn, opts=opts + ) + return self.context + + def teardown_method(self): + if self.conn: + self.conn.close() + + def test_proxy_transaction_rollback(self): + context = self._fixture( + {"transaction_per_migration": True, "transactional_ddl": True} + ) + + is_false(self.conn.in_transaction()) + proxy = context.begin_transaction(_per_migration=True) + is_true(self.conn.in_transaction()) + proxy.rollback() + is_false(self.conn.in_transaction()) + + def test_proxy_transaction_commit(self): + context = self._fixture( + {"transaction_per_migration": True, "transactional_ddl": True} + ) + proxy = context.begin_transaction(_per_migration=True) + is_true(self.conn.in_transaction()) + proxy.commit() + is_false(self.conn.in_transaction()) + + def test_proxy_transaction_contextmanager_commit(self): + context = self._fixture( + {"transaction_per_migration": True, "transactional_ddl": True} + ) + proxy = context.begin_transaction(_per_migration=True) + is_true(self.conn.in_transaction()) + with proxy: + pass + is_false(self.conn.in_transaction()) + + def test_proxy_transaction_contextmanager_rollback(self): + context = self._fixture( + {"transaction_per_migration": True, "transactional_ddl": True} + ) + proxy = context.begin_transaction(_per_migration=True) + is_true(self.conn.in_transaction()) + + def go(): + with proxy: + raise Exception("hi") + + assert_raises(Exception, go) + is_false(self.conn.in_transaction()) + + def test_proxy_transaction_contextmanager_explicit_rollback(self): + context = self._fixture( + {"transaction_per_migration": True, 
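+                # transaction_per_migration scopes the transaction to each
+                # migration rather than the whole run; transactional_ddl
+                # indicates the dialect runs DDL inside transactions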
"transactional_ddl": True} + ) + proxy = context.begin_transaction(_per_migration=True) + is_true(self.conn.in_transaction()) + + with proxy: + is_true(self.conn.in_transaction()) + proxy.rollback() + is_false(self.conn.in_transaction()) + + is_false(self.conn.in_transaction()) + + def test_proxy_transaction_contextmanager_explicit_commit(self): + context = self._fixture( + {"transaction_per_migration": True, "transactional_ddl": True} + ) + proxy = context.begin_transaction(_per_migration=True) + is_true(self.conn.in_transaction()) + + with proxy: + is_true(self.conn.in_transaction()) + proxy.commit() + is_false(self.conn.in_transaction()) + + is_false(self.conn.in_transaction()) + + def test_transaction_per_migration_transactional_ddl(self): + context = self._fixture( + {"transaction_per_migration": True, "transactional_ddl": True} + ) + + is_false(self.conn.in_transaction()) + + with context.begin_transaction(): + is_false(self.conn.in_transaction()) + with context.begin_transaction(_per_migration=True): + is_true(self.conn.in_transaction()) + + is_false(self.conn.in_transaction()) + is_false(self.conn.in_transaction()) + + def test_transaction_per_migration_non_transactional_ddl(self): + context = self._fixture( + {"transaction_per_migration": True, "transactional_ddl": False} + ) + + is_false(self.conn.in_transaction()) + + with context.begin_transaction(): + is_false(self.conn.in_transaction()) + with context.begin_transaction(_per_migration=True): + is_true(self.conn.in_transaction()) + + is_false(self.conn.in_transaction()) + is_false(self.conn.in_transaction()) + + def test_transaction_per_all_transactional_ddl(self): + context = self._fixture({"transactional_ddl": True}) + + is_false(self.conn.in_transaction()) + + with context.begin_transaction(): + is_true(self.conn.in_transaction()) + with context.begin_transaction(_per_migration=True): + is_true(self.conn.in_transaction()) + + is_true(self.conn.in_transaction()) + is_false(self.conn.in_transaction()) + + def test_transaction_per_all_non_transactional_ddl(self): + context = self._fixture({"transactional_ddl": False}) + + is_false(self.conn.in_transaction()) + + with context.begin_transaction(): + is_false(self.conn.in_transaction()) + with context.begin_transaction(_per_migration=True): + is_true(self.conn.in_transaction()) + + is_false(self.conn.in_transaction()) + is_false(self.conn.in_transaction()) + + def test_transaction_per_all_sqlmode(self): + context = self._fixture({"as_sql": True}) + + context.execute("step 1") + with context.begin_transaction(): + context.execute("step 2") + with context.begin_transaction(_per_migration=True): + context.execute("step 3") + + context.execute("step 4") + context.execute("step 5") + + if context.impl.transactional_ddl: + self._assert_impl_steps( + "step 1", + "BEGIN", + "step 2", + "step 3", + "step 4", + "COMMIT", + "step 5", + ) + else: + self._assert_impl_steps( + "step 1", "step 2", "step 3", "step 4", "step 5" + ) + + def test_transaction_per_migration_sqlmode(self): + context = self._fixture( + {"as_sql": True, "transaction_per_migration": True} + ) + + context.execute("step 1") + with context.begin_transaction(): + context.execute("step 2") + with context.begin_transaction(_per_migration=True): + context.execute("step 3") + + context.execute("step 4") + context.execute("step 5") + + if context.impl.transactional_ddl: + self._assert_impl_steps( + "step 1", + "step 2", + "BEGIN", + "step 3", + "COMMIT", + "step 4", + "step 5", + ) + else: + self._assert_impl_steps( + "step 1", 
"step 2", "step 3", "step 4", "step 5" + ) + + @config.requirements.autocommit_isolation + def test_autocommit_block(self): + context = self._fixture({"transaction_per_migration": True}) + + is_false(self.conn.in_transaction()) + + with context.begin_transaction(): + is_false(self.conn.in_transaction()) + with context.begin_transaction(_per_migration=True): + is_true(self.conn.in_transaction()) + + with context.autocommit_block(): + # in 1.x, self.conn is separate due to the + # execution_options call. however for future they are the + # same connection and there is a "transaction" block + # despite autocommit + if self.is_sqlalchemy_future: + is_(context.connection, self.conn) + else: + is_not_(context.connection, self.conn) + is_false(self.conn.in_transaction()) + + eq_( + context.connection._execution_options[ + "isolation_level" + ], + "AUTOCOMMIT", + ) + + ne_( + context.connection._execution_options.get( + "isolation_level", None + ), + "AUTOCOMMIT", + ) + is_true(self.conn.in_transaction()) + + is_false(self.conn.in_transaction()) + is_false(self.conn.in_transaction()) + + @config.requirements.autocommit_isolation + def test_autocommit_block_no_transaction(self): + context = self._fixture({"transaction_per_migration": True}) + + is_false(self.conn.in_transaction()) + + with context.autocommit_block(): + is_true(context.connection.in_transaction()) + + # in 1.x, self.conn is separate due to the execution_options + # call. however for future they are the same connection and there + # is a "transaction" block despite autocommit + if self.is_sqlalchemy_future: + is_(context.connection, self.conn) + else: + is_not_(context.connection, self.conn) + is_false(self.conn.in_transaction()) + + eq_( + context.connection._execution_options["isolation_level"], + "AUTOCOMMIT", + ) + + ne_( + context.connection._execution_options.get("isolation_level", None), + "AUTOCOMMIT", + ) + + is_false(self.conn.in_transaction()) + + def test_autocommit_block_transactional_ddl_sqlmode(self): + context = self._fixture( + { + "transaction_per_migration": True, + "transactional_ddl": True, + "as_sql": True, + } + ) + + with context.begin_transaction(): + context.execute("step 1") + with context.begin_transaction(_per_migration=True): + context.execute("step 2") + + with context.autocommit_block(): + context.execute("step 3") + + context.execute("step 4") + + context.execute("step 5") + + self._assert_impl_steps( + "step 1", + "BEGIN", + "step 2", + "COMMIT", + "step 3", + "BEGIN", + "step 4", + "COMMIT", + "step 5", + ) + + def test_autocommit_block_nontransactional_ddl_sqlmode(self): + context = self._fixture( + { + "transaction_per_migration": True, + "transactional_ddl": False, + "as_sql": True, + } + ) + + with context.begin_transaction(): + context.execute("step 1") + with context.begin_transaction(_per_migration=True): + context.execute("step 2") + + with context.autocommit_block(): + context.execute("step 3") + + context.execute("step 4") + + context.execute("step 5") + + self._assert_impl_steps( + "step 1", "step 2", "step 3", "step 4", "step 5" + ) + + def _assert_impl_steps(self, *steps): + to_check = self.context.output_buffer.getvalue() + + self.context.impl.output_buffer = buf = io.StringIO() + for step in steps: + if step == "BEGIN": + self.context.impl.emit_begin() + elif step == "COMMIT": + self.context.impl.emit_commit() + else: + self.context.impl._exec(step) + + eq_(to_check, buf.getvalue()) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_op.py 
b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_op.py new file mode 100644 index 000000000..a63b3f2f9 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/suite/test_op.py @@ -0,0 +1,42 @@ +"""Test against the builders in the op.* module.""" + +from sqlalchemy import Column +from sqlalchemy import event +from sqlalchemy import Integer +from sqlalchemy import String +from sqlalchemy import Table +from sqlalchemy.sql import text + +from ...testing.fixtures import AlterColRoundTripFixture +from ...testing.fixtures import TestBase + + +@event.listens_for(Table, "after_parent_attach") +def _add_cols(table, metadata): + if table.name == "tbl_with_auto_appended_column": + table.append_column(Column("bat", Integer)) + + +class BackendAlterColumnTest(AlterColRoundTripFixture, TestBase): + __backend__ = True + + def test_rename_column(self): + self._run_alter_col({}, {"name": "newname"}) + + def test_modify_type_int_str(self): + self._run_alter_col({"type": Integer()}, {"type": String(50)}) + + def test_add_server_default_int(self): + self._run_alter_col({"type": Integer}, {"server_default": text("5")}) + + def test_modify_server_default_int(self): + self._run_alter_col( + {"type": Integer, "server_default": text("2")}, + {"server_default": text("5")}, + ) + + def test_modify_nullable_to_non(self): + self._run_alter_col({}, {"nullable": False}) + + def test_modify_non_nullable_to_nullable(self): + self._run_alter_col({"nullable": False}, {"nullable": True}) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/util.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/util.py new file mode 100644 index 000000000..4517a69f6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/util.py @@ -0,0 +1,126 @@ +# testing/util.py +# Copyright (C) 2005-2019 the SQLAlchemy authors and contributors +# +# +# This module is part of SQLAlchemy and is released under +# the MIT License: http://www.opensource.org/licenses/mit-license.php +from __future__ import annotations + +import types +from typing import Union + +from sqlalchemy.util import inspect_getfullargspec + +from ..util import sqla_2 + + +def flag_combinations(*combinations): + """A facade around @testing.combinations() oriented towards boolean + keyword-based arguments. + + Basically generates a nice looking identifier based on the keywords + and also sets up the argument names. + + E.g.:: + + @testing.flag_combinations( + dict(lazy=False, passive=False), + dict(lazy=True, passive=False), + dict(lazy=False, passive=True), + dict(lazy=False, passive=True, raiseload=True), + ) + + + would result in:: + + @testing.combinations( + ('', False, False, False), + ('lazy', True, False, False), + ('lazy_passive', True, True, False), + ('lazy_passive', True, True, True), + id_='iaaa', + argnames='lazy,passive,raiseload' + ) + + """ + from sqlalchemy.testing import config + + keys = set() + + for d in combinations: + keys.update(d) + + keys = sorted(keys) + + return config.combinations( + *[ + ("_".join(k for k in keys if d.get(k, False)),) + + tuple(d.get(k, False) for k in keys) + for d in combinations + ], + id_="i" + ("a" * len(keys)), + argnames=",".join(keys), + ) + + +def resolve_lambda(__fn, **kw): + """Given a no-arg lambda and a namespace, return a new lambda that + has all the values filled in. + + This is used so that we can have module-level fixtures that + refer to instance-level variables using lambdas. 
+ + """ + + pos_args = inspect_getfullargspec(__fn)[0] + pass_pos_args = {arg: kw.pop(arg) for arg in pos_args} + glb = dict(__fn.__globals__) + glb.update(kw) + new_fn = types.FunctionType(__fn.__code__, glb) + return new_fn(**pass_pos_args) + + +def metadata_fixture(ddl="function"): + """Provide MetaData for a pytest fixture.""" + + from sqlalchemy.testing import config + from . import fixture_functions + + def decorate(fn): + def run_ddl(self): + from sqlalchemy import schema + + metadata = self.metadata = schema.MetaData() + try: + result = fn(self, metadata) + metadata.create_all(config.db) + # TODO: + # somehow get a per-function dml erase fixture here + yield result + finally: + metadata.drop_all(config.db) + + return fixture_functions.fixture(scope=ddl)(run_ddl) + + return decorate + + +def _safe_int(value: str) -> Union[int, str]: + try: + return int(value) + except: + return value + + +def testing_engine(url=None, options=None, future=False): + from sqlalchemy.testing import config + from sqlalchemy.testing.engines import testing_engine + + if not future: + future = getattr(config._current.options, "future_engine", False) + + if not sqla_2: + kw = {"future": future} if future else {} + else: + kw = {} + return testing_engine(url, options, **kw) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/warnings.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/warnings.py new file mode 100644 index 000000000..86d45a0dd --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/testing/warnings.py @@ -0,0 +1,31 @@ +# testing/warnings.py +# Copyright (C) 2005-2021 the SQLAlchemy authors and contributors +# +# +# This module is part of SQLAlchemy and is released under +# the MIT License: http://www.opensource.org/licenses/mit-license.php + + +import warnings + +from sqlalchemy import exc as sa_exc + + +def setup_filters(): + """Set global warning behavior for the test suite.""" + + warnings.resetwarnings() + + warnings.filterwarnings("error", category=sa_exc.SADeprecationWarning) + warnings.filterwarnings("error", category=sa_exc.SAWarning) + + # some selected deprecations... 
+ warnings.filterwarnings("error", category=DeprecationWarning) + try: + import pytest + except ImportError: + pass + else: + warnings.filterwarnings( + "once", category=pytest.PytestDeprecationWarning + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/__init__.py new file mode 100644 index 000000000..1d3a21796 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/__init__.py @@ -0,0 +1,29 @@ +from .editor import open_in_editor as open_in_editor +from .exc import AutogenerateDiffsDetected as AutogenerateDiffsDetected +from .exc import CommandError as CommandError +from .langhelpers import _with_legacy_names as _with_legacy_names +from .langhelpers import asbool as asbool +from .langhelpers import dedupe_tuple as dedupe_tuple +from .langhelpers import Dispatcher as Dispatcher +from .langhelpers import EMPTY_DICT as EMPTY_DICT +from .langhelpers import immutabledict as immutabledict +from .langhelpers import memoized_property as memoized_property +from .langhelpers import ModuleClsProxy as ModuleClsProxy +from .langhelpers import not_none as not_none +from .langhelpers import rev_id as rev_id +from .langhelpers import to_list as to_list +from .langhelpers import to_tuple as to_tuple +from .langhelpers import unique_list as unique_list +from .messaging import err as err +from .messaging import format_as_comma as format_as_comma +from .messaging import msg as msg +from .messaging import obfuscate_url_pw as obfuscate_url_pw +from .messaging import status as status +from .messaging import warn as warn +from .messaging import warn_deprecated as warn_deprecated +from .messaging import write_outstream as write_outstream +from .pyfiles import coerce_resource_to_filename as coerce_resource_to_filename +from .pyfiles import load_python_file as load_python_file +from .pyfiles import pyc_file_from_path as pyc_file_from_path +from .pyfiles import template_to_file as template_to_file +from .sqla_compat import sqla_2 as sqla_2 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/compat.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/compat.py new file mode 100644 index 000000000..131f16a00 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/compat.py @@ -0,0 +1,146 @@ +# mypy: no-warn-unused-ignores + +from __future__ import annotations + +from configparser import ConfigParser +import io +import os +from pathlib import Path +import sys +import typing +from typing import Any +from typing import Iterator +from typing import List +from typing import Optional +from typing import Sequence +from typing import Union + +if True: + # zimports hack for too-long names + from sqlalchemy.util import ( # noqa: F401 + inspect_getfullargspec as inspect_getfullargspec, + ) + from sqlalchemy.util.compat import ( # noqa: F401 + inspect_formatargspec as inspect_formatargspec, + ) + +is_posix = os.name == "posix" + +py314 = sys.version_info >= (3, 14) +py313 = sys.version_info >= (3, 13) +py312 = sys.version_info >= (3, 12) +py311 = sys.version_info >= (3, 11) +py310 = sys.version_info >= (3, 10) +py39 = sys.version_info >= (3, 9) + + +# produce a wrapper that allows encoded text to stream +# into a given buffer, but doesn't close it. +# not sure of a more idiomatic approach to this. 
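+# (TextIOWrapper.close() would also close the buffer it wraps; the no-op
+# override below leaves the caller's buffer open after writing.)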
+class EncodedIO(io.TextIOWrapper):
+    def close(self) -> None:
+        pass
+
+
+if py39:
+    from importlib import resources as _resources
+
+    importlib_resources = _resources
+    from importlib import metadata as _metadata
+
+    importlib_metadata = _metadata
+    from importlib.metadata import EntryPoint as EntryPoint
+else:
+    import importlib_resources  # type:ignore # noqa
+    import importlib_metadata  # type:ignore # noqa
+    from importlib_metadata import EntryPoint  # type:ignore # noqa
+
+if py311:
+    import tomllib as tomllib
+else:
+    import tomli as tomllib  # type: ignore # noqa
+
+
+if py312:
+
+    def path_walk(
+        path: Path, *, top_down: bool = True
+    ) -> Iterator[tuple[Path, list[str], list[str]]]:
+        return path.walk(top_down=top_down)
+
+    def path_relative_to(
+        path: Path, other: Path, *, walk_up: bool = False
+    ) -> Path:
+        return path.relative_to(other, walk_up=walk_up)
+
+else:
+
+    def path_walk(
+        path: Path, *, top_down: bool = True
+    ) -> Iterator[tuple[Path, list[str], list[str]]]:
+        for root, dirs, files in os.walk(path, topdown=top_down):
+            yield Path(root), dirs, files
+
+    def path_relative_to(
+        path: Path, other: Path, *, walk_up: bool = False
+    ) -> Path:
+        """
+        Calculate the relative path of 'path' with respect to 'other',
+        optionally allowing 'path' to be outside the subtree of 'other'.
+
+        OK I used AI for this, sorry
+
+        """
+        try:
+            return path.relative_to(other)
+        except ValueError:
+            if walk_up:
+                other_ancestors = list(other.parents) + [other]
+                for ancestor in other_ancestors:
+                    try:
+                        return path.relative_to(ancestor)
+                    except ValueError:
+                        continue
+                raise ValueError(
+                    f"{path} is not in the same subtree as {other}"
+                )
+            else:
+                raise
+
+
+def importlib_metadata_get(group: str) -> Sequence[EntryPoint]:
+    ep = importlib_metadata.entry_points()
+    if hasattr(ep, "select"):
+        return ep.select(group=group)
+    else:
+        return ep.get(group, ())  # type: ignore
+
+
+def formatannotation_fwdref(
+    annotation: Any, base_module: Optional[Any] = None
+) -> str:
+    """vendored from python 3.7"""
+    # copied over _formatannotation from sqlalchemy 2.0
+
+    if isinstance(annotation, str):
+        return annotation
+
+    if getattr(annotation, "__module__", None) == "typing":
+        return repr(annotation).replace("typing.", "").replace("~", "")
+    if isinstance(annotation, type):
+        if annotation.__module__ in ("builtins", base_module):
+            return repr(annotation.__qualname__)
+        return annotation.__module__ + "." + annotation.__qualname__
+    elif isinstance(annotation, typing.TypeVar):
+        return repr(annotation).replace("~", "")
+    return repr(annotation).replace("~", "")
+
+
+def read_config_parser(
+    file_config: ConfigParser,
+    file_argument: Sequence[Union[str, os.PathLike[str]]],
+) -> List[str]:
+    if py310:
+        return file_config.read(file_argument, encoding="locale")
+    else:
+        return file_config.read(file_argument)
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/editor.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/editor.py
new file mode 100644
index 000000000..f1d1557f7
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/editor.py
@@ -0,0 +1,81 @@
+from __future__ import annotations
+
+import os
+from os.path import exists
+from os.path import join
+from os.path import splitext
+from subprocess import check_call
+from typing import Dict
+from typing import List
+from typing import Mapping
+from typing import Optional
+
+from .compat import is_posix
+from .exc import CommandError
+
+
+def open_in_editor(
+    filename: str, environ: Optional[Dict[str, str]] = None
+) -> None:
+    """
+    Opens the given file in a text editor. If the environment variable
+    ``EDITOR`` is set, this is taken as preference.
+
+    Otherwise, a list of commonly installed editors is tried.
+
+    If no editor matches, an :py:exc:`OSError` is raised.
+
+    :param filename: The filename to open. Will be passed verbatim to the
+        editor command.
+    :param environ: An optional drop-in replacement for ``os.environ``. Used
+        mainly for testing.
+    """
+    env = os.environ if environ is None else environ
+    try:
+        editor = _find_editor(env)
+        check_call([editor, filename])
+    except Exception as exc:
+        raise CommandError("Error executing editor (%s)" % (exc,)) from exc
+
+
+def _find_editor(environ: Mapping[str, str]) -> str:
+    candidates = _default_editors()
+    for i, var in enumerate(("EDITOR", "VISUAL")):
+        if var in environ:
+            user_choice = environ[var]
+            if exists(user_choice):
+                return user_choice
+            if os.sep not in user_choice:
+                candidates.insert(i, user_choice)
+
+    for candidate in candidates:
+        path = _find_executable(candidate, environ)
+        if path is not None:
+            return path
+    raise OSError(
+        "No suitable editor found. Please set the "
+        '"EDITOR" or "VISUAL" environment variables'
+    )
+
+
+def _find_executable(
+    candidate: str, environ: Mapping[str, str]
+) -> Optional[str]:
+    # Assuming this is on the PATH, we need to determine its absolute
+    # location. Otherwise, ``check_call`` will fail
+    if not is_posix and splitext(candidate)[1] != ".exe":
+        candidate += ".exe"
+    for path in environ.get("PATH", "").split(os.pathsep):
+        value = join(path, candidate)
+        if exists(value):
+            return value
+    return None
+
+
+def _default_editors() -> List[str]:
+    # Look for an editor.
Prefer the user's choice by env-var, fall back to + # most commonly installed editor (nano/vim) + if is_posix: + return ["sensible-editor", "editor", "nano", "vim", "code"] + else: + return ["code.exe", "notepad++.exe", "notepad.exe"] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/exc.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/exc.py new file mode 100644 index 000000000..c790e18a7 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/exc.py @@ -0,0 +1,25 @@ +from __future__ import annotations + +from typing import Any +from typing import List +from typing import Tuple +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from alembic.autogenerate import RevisionContext + + +class CommandError(Exception): + pass + + +class AutogenerateDiffsDetected(CommandError): + def __init__( + self, + message: str, + revision_context: RevisionContext, + diffs: List[Tuple[Any, ...]], + ) -> None: + super().__init__(message) + self.revision_context = revision_context + self.diffs = diffs diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/langhelpers.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/langhelpers.py new file mode 100644 index 000000000..80d88cbce --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/langhelpers.py @@ -0,0 +1,332 @@ +from __future__ import annotations + +import collections +from collections.abc import Iterable +import textwrap +from typing import Any +from typing import Callable +from typing import cast +from typing import Dict +from typing import List +from typing import Mapping +from typing import MutableMapping +from typing import NoReturn +from typing import Optional +from typing import overload +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union +import uuid +import warnings + +from sqlalchemy.util import asbool as asbool # noqa: F401 +from sqlalchemy.util import immutabledict as immutabledict # noqa: F401 +from sqlalchemy.util import to_list as to_list # noqa: F401 +from sqlalchemy.util import unique_list as unique_list + +from .compat import inspect_getfullargspec + +if True: + # zimports workaround :( + from sqlalchemy.util import ( # noqa: F401 + memoized_property as memoized_property, + ) + + +EMPTY_DICT: Mapping[Any, Any] = immutabledict() +_T = TypeVar("_T", bound=Any) + +_C = TypeVar("_C", bound=Callable[..., Any]) + + +class _ModuleClsMeta(type): + def __setattr__(cls, key: str, value: Callable[..., Any]) -> None: + super().__setattr__(key, value) + cls._update_module_proxies(key) # type: ignore + + +class ModuleClsProxy(metaclass=_ModuleClsMeta): + """Create module level proxy functions for the + methods on a given class. + + The functions will have a compatible signature + as the methods. 
+ + """ + + _setups: Dict[ + Type[Any], + Tuple[ + Set[str], + List[Tuple[MutableMapping[str, Any], MutableMapping[str, Any]]], + ], + ] = collections.defaultdict(lambda: (set(), [])) + + @classmethod + def _update_module_proxies(cls, name: str) -> None: + attr_names, modules = cls._setups[cls] + for globals_, locals_ in modules: + cls._add_proxied_attribute(name, globals_, locals_, attr_names) + + def _install_proxy(self) -> None: + attr_names, modules = self._setups[self.__class__] + for globals_, locals_ in modules: + globals_["_proxy"] = self + for attr_name in attr_names: + globals_[attr_name] = getattr(self, attr_name) + + def _remove_proxy(self) -> None: + attr_names, modules = self._setups[self.__class__] + for globals_, locals_ in modules: + globals_["_proxy"] = None + for attr_name in attr_names: + del globals_[attr_name] + + @classmethod + def create_module_class_proxy( + cls, + globals_: MutableMapping[str, Any], + locals_: MutableMapping[str, Any], + ) -> None: + attr_names, modules = cls._setups[cls] + modules.append((globals_, locals_)) + cls._setup_proxy(globals_, locals_, attr_names) + + @classmethod + def _setup_proxy( + cls, + globals_: MutableMapping[str, Any], + locals_: MutableMapping[str, Any], + attr_names: Set[str], + ) -> None: + for methname in dir(cls): + cls._add_proxied_attribute(methname, globals_, locals_, attr_names) + + @classmethod + def _add_proxied_attribute( + cls, + methname: str, + globals_: MutableMapping[str, Any], + locals_: MutableMapping[str, Any], + attr_names: Set[str], + ) -> None: + if not methname.startswith("_"): + meth = getattr(cls, methname) + if callable(meth): + locals_[methname] = cls._create_method_proxy( + methname, globals_, locals_ + ) + else: + attr_names.add(methname) + + @classmethod + def _create_method_proxy( + cls, + name: str, + globals_: MutableMapping[str, Any], + locals_: MutableMapping[str, Any], + ) -> Callable[..., Any]: + fn = getattr(cls, name) + + def _name_error(name: str, from_: Exception) -> NoReturn: + raise NameError( + "Can't invoke function '%s', as the proxy object has " + "not yet been " + "established for the Alembic '%s' class. " + "Try placing this code inside a callable." + % (name, cls.__name__) + ) from from_ + + globals_["_name_error"] = _name_error + + translations = getattr(fn, "_legacy_translations", []) + if translations: + spec = inspect_getfullargspec(fn) + if spec[0] and spec[0][0] == "self": + spec[0].pop(0) + + outer_args = inner_args = "*args, **kw" + translate_str = "args, kw = _translate(%r, %r, %r, args, kw)" % ( + fn.__name__, + tuple(spec), + translations, + ) + + def translate( + fn_name: str, spec: Any, translations: Any, args: Any, kw: Any + ) -> Any: + return_kw = {} + return_args = [] + + for oldname, newname in translations: + if oldname in kw: + warnings.warn( + "Argument %r is now named %r " + "for method %s()." 
% (oldname, newname, fn_name)
+                        )
+                        return_kw[newname] = kw.pop(oldname)
+                return_kw.update(kw)
+
+                args = list(args)
+                if spec[3]:
+                    pos_only = spec[0][: -len(spec[3])]
+                else:
+                    pos_only = spec[0]
+                for arg in pos_only:
+                    if arg not in return_kw:
+                        try:
+                            return_args.append(args.pop(0))
+                        except IndexError:
+                            raise TypeError(
+                                "missing required positional argument: %s"
+                                % arg
+                            )
+                return_args.extend(args)
+
+                return return_args, return_kw
+
+            globals_["_translate"] = translate
+        else:
+            outer_args = "*args, **kw"
+            inner_args = "*args, **kw"
+            translate_str = ""
+
+        func_text = textwrap.dedent(
+            """\
+        def %(name)s(%(args)s):
+            %(doc)r
+            %(translate)s
+            try:
+                p = _proxy
+            except NameError as ne:
+                _name_error('%(name)s', ne)
+            return _proxy.%(name)s(%(apply_kw)s)
+        """
+            % {
+                "name": name,
+                "translate": translate_str,
+                "args": outer_args,
+                "apply_kw": inner_args,
+                "doc": fn.__doc__,
+            }
+        )
+        lcl: MutableMapping[str, Any] = {}
+
+        exec(func_text, cast("Dict[str, Any]", globals_), lcl)
+        return cast("Callable[..., Any]", lcl[name])
+
+
+def _with_legacy_names(translations: Any) -> Any:
+    def decorate(fn: _C) -> _C:
+        fn._legacy_translations = translations  # type: ignore[attr-defined]
+        return fn
+
+    return decorate
+
+
+def rev_id() -> str:
+    return uuid.uuid4().hex[-12:]
+
+
+@overload
+def to_tuple(x: Any, default: Tuple[Any, ...]) -> Tuple[Any, ...]: ...
+
+
+@overload
+def to_tuple(x: None, default: Optional[_T] = ...) -> _T: ...
+
+
+@overload
+def to_tuple(
+    x: Any, default: Optional[Tuple[Any, ...]] = None
+) -> Tuple[Any, ...]: ...
+
+
+def to_tuple(
+    x: Any, default: Optional[Tuple[Any, ...]] = None
+) -> Optional[Tuple[Any, ...]]:
+    if x is None:
+        return default
+    elif isinstance(x, str):
+        return (x,)
+    elif isinstance(x, Iterable):
+        return tuple(x)
+    else:
+        return (x,)
+
+
+def dedupe_tuple(tup: Tuple[str, ...]) -> Tuple[str, ...]:
+    return tuple(unique_list(tup))
+
+
+class Dispatcher:
+    def __init__(self, uselist: bool = False) -> None:
+        self._registry: Dict[Tuple[Any, ...], Any] = {}
+        self.uselist = uselist
+
+    def dispatch_for(
+        self, target: Any, qualifier: str = "default"
+    ) -> Callable[[_C], _C]:
+        def decorate(fn: _C) -> _C:
+            if self.uselist:
+                self._registry.setdefault((target, qualifier), []).append(fn)
+            else:
+                assert (target, qualifier) not in self._registry
+                self._registry[(target, qualifier)] = fn
+            return fn
+
+        return decorate
+
+    def dispatch(self, obj: Any, qualifier: str = "default") -> Any:
+        if isinstance(obj, str):
+            targets: Sequence[Any] = [obj]
+        elif isinstance(obj, type):
+            targets = obj.__mro__
+        else:
+            targets = type(obj).__mro__
+
+        for spcls in targets:
+            if qualifier != "default" and (spcls, qualifier) in self._registry:
+                return self._fn_or_list(self._registry[(spcls, qualifier)])
+            elif (spcls, "default") in self._registry:
+                return self._fn_or_list(self._registry[(spcls, "default")])
+        else:
+            raise ValueError("no dispatch function for object: %s" % obj)
+
+    def _fn_or_list(
+        self, fn_or_list: Union[List[Callable[..., Any]], Callable[..., Any]]
+    ) -> Callable[..., Any]:
+        if self.uselist:
+
+            def go(*arg: Any, **kw: Any) -> None:
+                if TYPE_CHECKING:
+                    assert isinstance(fn_or_list, Sequence)
+                for fn in fn_or_list:
+                    fn(*arg, **kw)
+
+            return go
+        else:
+            return fn_or_list  # type: ignore
+
+    def branch(self) -> Dispatcher:
+        """Return a copy of this dispatcher that is independently
+        writable."""
+
+        d = Dispatcher()
+        if self.uselist:
+            d._registry.update(
+                (k, [fn for fn in self._registry[k]]) for k in
self._registry + ) + else: + d._registry.update(self._registry) + return d + + +def not_none(value: Optional[_T]) -> _T: + assert value is not None + return value diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/messaging.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/messaging.py new file mode 100644 index 000000000..4c08f16e7 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/messaging.py @@ -0,0 +1,122 @@ +from __future__ import annotations + +from collections.abc import Iterable +from contextlib import contextmanager +import logging +import sys +import textwrap +from typing import Iterator +from typing import Optional +from typing import TextIO +from typing import Union +import warnings + +from sqlalchemy.engine import url + +log = logging.getLogger(__name__) + +# disable "no handler found" errors +logging.getLogger("alembic").addHandler(logging.NullHandler()) + + +try: + import fcntl + import termios + import struct + + ioctl = fcntl.ioctl(0, termios.TIOCGWINSZ, struct.pack("HHHH", 0, 0, 0, 0)) + _h, TERMWIDTH, _hp, _wp = struct.unpack("HHHH", ioctl) + if TERMWIDTH <= 0: # can occur if running in emacs pseudo-tty + TERMWIDTH = None +except (ImportError, OSError): + TERMWIDTH = None + + +def write_outstream( + stream: TextIO, *text: Union[str, bytes], quiet: bool = False +) -> None: + if quiet: + return + encoding = getattr(stream, "encoding", "ascii") or "ascii" + for t in text: + if not isinstance(t, bytes): + t = t.encode(encoding, "replace") + t = t.decode(encoding) + try: + stream.write(t) + except OSError: + # suppress "broken pipe" errors. + # no known way to handle this on Python 3 however + # as the exception is "ignored" (noisily) in TextIOWrapper. + break + + +@contextmanager +def status( + status_msg: str, newline: bool = False, quiet: bool = False +) -> Iterator[None]: + msg(status_msg + " ...", newline, flush=True, quiet=quiet) + try: + yield + except: + if not quiet: + write_outstream(sys.stdout, " FAILED\n") + raise + else: + if not quiet: + write_outstream(sys.stdout, " done\n") + + +def err(message: str, quiet: bool = False) -> None: + log.error(message) + msg(f"FAILED: {message}", quiet=quiet) + sys.exit(-1) + + +def obfuscate_url_pw(input_url: str) -> str: + return url.make_url(input_url).render_as_string(hide_password=True) + + +def warn(msg: str, stacklevel: int = 2) -> None: + warnings.warn(msg, UserWarning, stacklevel=stacklevel) + + +def warn_deprecated(msg: str, stacklevel: int = 2) -> None: + warnings.warn(msg, DeprecationWarning, stacklevel=stacklevel) + + +def msg( + msg: str, newline: bool = True, flush: bool = False, quiet: bool = False +) -> None: + if quiet: + return + if TERMWIDTH is None: + write_outstream(sys.stdout, msg) + if newline: + write_outstream(sys.stdout, "\n") + else: + # left indent output lines + indent = " " + lines = textwrap.wrap( + msg, + TERMWIDTH, + initial_indent=indent, + subsequent_indent=indent, + ) + if len(lines) > 1: + for line in lines[0:-1]: + write_outstream(sys.stdout, line, "\n") + write_outstream(sys.stdout, lines[-1], ("\n" if newline else "")) + if flush: + sys.stdout.flush() + + +def format_as_comma(value: Optional[Union[str, Iterable[str]]]) -> str: + if value is None: + return "" + elif isinstance(value, str): + return value + elif isinstance(value, Iterable): + return ", ".join(value) + else: + raise ValueError("Don't know how to comma-format %r" % value) diff --git 
a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/pyfiles.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/pyfiles.py new file mode 100644 index 000000000..6b75d5779 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/pyfiles.py @@ -0,0 +1,153 @@ +from __future__ import annotations + +import atexit +from contextlib import ExitStack +import importlib +import importlib.machinery +import importlib.util +import os +import pathlib +import re +import tempfile +from types import ModuleType +from typing import Any +from typing import Optional +from typing import Union + +from mako import exceptions +from mako.template import Template + +from . import compat +from .exc import CommandError + + +def template_to_file( + template_file: Union[str, os.PathLike[str]], + dest: Union[str, os.PathLike[str]], + output_encoding: str, + *, + append_with_newlines: bool = False, + **kw: Any, +) -> None: + template = Template(filename=_preserving_path_as_str(template_file)) + try: + output = template.render_unicode(**kw).encode(output_encoding) + except: + with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as ntf: + ntf.write( + exceptions.text_error_template() + .render_unicode() + .encode(output_encoding) + ) + fname = ntf.name + raise CommandError( + "Template rendering failed; see %s for a " + "template-oriented traceback." % fname + ) + else: + with open(dest, "ab" if append_with_newlines else "wb") as f: + if append_with_newlines: + f.write("\n\n".encode(output_encoding)) + f.write(output) + + +def coerce_resource_to_filename(fname_or_resource: str) -> pathlib.Path: + """Interpret a filename as either a filesystem location or as a package + resource. + + Names that are non absolute paths and contain a colon + are interpreted as resources and coerced to a file location. 
+ + """ + # TODO: there seem to be zero tests for the package resource codepath + if not os.path.isabs(fname_or_resource) and ":" in fname_or_resource: + tokens = fname_or_resource.split(":") + + # from https://importlib-resources.readthedocs.io/en/latest/migration.html#pkg-resources-resource-filename # noqa E501 + + file_manager = ExitStack() + atexit.register(file_manager.close) + + ref = compat.importlib_resources.files(tokens[0]) + for tok in tokens[1:]: + ref = ref / tok + fname_or_resource = file_manager.enter_context( # type: ignore[assignment] # noqa: E501 + compat.importlib_resources.as_file(ref) + ) + return pathlib.Path(fname_or_resource) + + +def pyc_file_from_path( + path: Union[str, os.PathLike[str]], +) -> Optional[pathlib.Path]: + """Given a python source path, locate the .pyc.""" + + pathpath = pathlib.Path(path) + candidate = pathlib.Path( + importlib.util.cache_from_source(pathpath.as_posix()) + ) + if candidate.exists(): + return candidate + + # even for pep3147, fall back to the old way of finding .pyc files, + # to support sourceless operation + ext = pathpath.suffix + for ext in importlib.machinery.BYTECODE_SUFFIXES: + if pathpath.with_suffix(ext).exists(): + return pathpath.with_suffix(ext) + else: + return None + + +def load_python_file( + dir_: Union[str, os.PathLike[str]], filename: Union[str, os.PathLike[str]] +) -> ModuleType: + """Load a file from the given path as a Python module.""" + + dir_ = pathlib.Path(dir_) + filename_as_path = pathlib.Path(filename) + filename = filename_as_path.name + + module_id = re.sub(r"\W", "_", filename) + path = dir_ / filename + ext = path.suffix + if ext == ".py": + if path.exists(): + module = load_module_py(module_id, path) + else: + pyc_path = pyc_file_from_path(path) + if pyc_path is None: + raise ImportError("Can't find Python file %s" % path) + else: + module = load_module_py(module_id, pyc_path) + elif ext in (".pyc", ".pyo"): + module = load_module_py(module_id, path) + else: + assert False + return module + + +def load_module_py( + module_id: str, path: Union[str, os.PathLike[str]] +) -> ModuleType: + spec = importlib.util.spec_from_file_location(module_id, path) + assert spec + module = importlib.util.module_from_spec(spec) + spec.loader.exec_module(module) # type: ignore + return module + + +def _preserving_path_as_str(path: Union[str, os.PathLike[str]]) -> str: + """receive str/pathlike and return a string. + + Does not convert an incoming string path to a Path first, to help with + unit tests that are doing string path round trips without OS-specific + processing if not necessary. 
+ + """ + if isinstance(path, str): + return path + elif isinstance(path, pathlib.PurePath): + return str(path) + else: + return str(pathlib.Path(path)) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/sqla_compat.py b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/sqla_compat.py new file mode 100644 index 000000000..a427d3c8c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/alembic/util/sqla_compat.py @@ -0,0 +1,497 @@ +# mypy: allow-untyped-defs, allow-incomplete-defs, allow-untyped-calls +# mypy: no-warn-return-any, allow-any-generics + +from __future__ import annotations + +import contextlib +import re +from typing import Any +from typing import Callable +from typing import Dict +from typing import Iterable +from typing import Iterator +from typing import Optional +from typing import Protocol +from typing import Set +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +from sqlalchemy import __version__ +from sqlalchemy import schema +from sqlalchemy import sql +from sqlalchemy import types as sqltypes +from sqlalchemy.schema import CheckConstraint +from sqlalchemy.schema import Column +from sqlalchemy.schema import ForeignKeyConstraint +from sqlalchemy.sql import visitors +from sqlalchemy.sql.base import DialectKWArgs +from sqlalchemy.sql.elements import BindParameter +from sqlalchemy.sql.elements import ColumnClause +from sqlalchemy.sql.elements import TextClause +from sqlalchemy.sql.elements import UnaryExpression +from sqlalchemy.sql.visitors import traverse +from typing_extensions import TypeGuard + +if True: + from sqlalchemy.sql.naming import _NONE_NAME as _NONE_NAME # type: ignore[attr-defined] # noqa: E501 + +if TYPE_CHECKING: + from sqlalchemy import ClauseElement + from sqlalchemy import Identity + from sqlalchemy import Index + from sqlalchemy import Table + from sqlalchemy.engine import Connection + from sqlalchemy.engine import Dialect + from sqlalchemy.engine import Transaction + from sqlalchemy.sql.base import ColumnCollection + from sqlalchemy.sql.compiler import SQLCompiler + from sqlalchemy.sql.elements import ColumnElement + from sqlalchemy.sql.schema import Constraint + from sqlalchemy.sql.schema import SchemaItem + +_CE = TypeVar("_CE", bound=Union["ColumnElement[Any]", "SchemaItem"]) + + +class _CompilerProtocol(Protocol): + def __call__(self, element: Any, compiler: Any, **kw: Any) -> str: ... + + +def _safe_int(value: str) -> Union[int, str]: + try: + return int(value) + except: + return value + + +_vers = tuple( + [_safe_int(x) for x in re.findall(r"(\d+|[abc]\d)", __version__)] +) +# https://docs.sqlalchemy.org/en/latest/changelog/changelog_14.html#change-0c6e0cc67dfe6fac5164720e57ef307d +sqla_14_18 = _vers >= (1, 4, 18) +sqla_14_26 = _vers >= (1, 4, 26) +sqla_2 = _vers >= (2,) +sqlalchemy_version = __version__ + +if TYPE_CHECKING: + + def compiles( + element: Type[ClauseElement], *dialects: str + ) -> Callable[[_CompilerProtocol], _CompilerProtocol]: ... 
+ +else: + from sqlalchemy.ext.compiler import compiles + + +identity_has_dialect_kwargs = issubclass(schema.Identity, DialectKWArgs) + + +def _get_identity_options_dict( + identity: Union[Identity, schema.Sequence, None], + dialect_kwargs: bool = False, +) -> Dict[str, Any]: + if identity is None: + return {} + elif identity_has_dialect_kwargs: + assert hasattr(identity, "_as_dict") + as_dict = identity._as_dict() + if dialect_kwargs: + assert isinstance(identity, DialectKWArgs) + as_dict.update(identity.dialect_kwargs) + else: + as_dict = {} + if isinstance(identity, schema.Identity): + # always=None means something different than always=False + as_dict["always"] = identity.always + if identity.on_null is not None: + as_dict["on_null"] = identity.on_null + # attributes common to Identity and Sequence + attrs = ( + "start", + "increment", + "minvalue", + "maxvalue", + "nominvalue", + "nomaxvalue", + "cycle", + "cache", + "order", + ) + as_dict.update( + { + key: getattr(identity, key, None) + for key in attrs + if getattr(identity, key, None) is not None + } + ) + return as_dict + + +if sqla_2: + from sqlalchemy.sql.base import _NoneName +else: + from sqlalchemy.util import symbol as _NoneName # type: ignore[assignment] + + +_ConstraintName = Union[None, str, _NoneName] +_ConstraintNameDefined = Union[str, _NoneName] + + +def constraint_name_defined( + name: _ConstraintName, +) -> TypeGuard[_ConstraintNameDefined]: + return name is _NONE_NAME or isinstance(name, (str, _NoneName)) + + +def constraint_name_string(name: _ConstraintName) -> TypeGuard[str]: + return isinstance(name, str) + + +def constraint_name_or_none(name: _ConstraintName) -> Optional[str]: + return name if constraint_name_string(name) else None + + +AUTOINCREMENT_DEFAULT = "auto" + + +@contextlib.contextmanager +def _ensure_scope_for_ddl( + connection: Optional[Connection], +) -> Iterator[None]: + try: + in_transaction = connection.in_transaction # type: ignore[union-attr] + except AttributeError: + # catch for MockConnection, None + in_transaction = None + pass + + # yield outside the catch + if in_transaction is None: + yield + else: + if not in_transaction(): + assert connection is not None + with connection.begin(): + yield + else: + yield + + +def _safe_begin_connection_transaction( + connection: Connection, +) -> Transaction: + transaction = connection.get_transaction() + if transaction: + return transaction + else: + return connection.begin() + + +def _safe_commit_connection_transaction( + connection: Connection, +) -> None: + transaction = connection.get_transaction() + if transaction: + transaction.commit() + + +def _safe_rollback_connection_transaction( + connection: Connection, +) -> None: + transaction = connection.get_transaction() + if transaction: + transaction.rollback() + + +def _get_connection_in_transaction(connection: Optional[Connection]) -> bool: + try: + in_transaction = connection.in_transaction # type: ignore + except AttributeError: + # catch for MockConnection + return False + else: + return in_transaction() + + +def _idx_table_bound_expressions(idx: Index) -> Iterable[ColumnElement[Any]]: + return idx.expressions # type: ignore + + +def _copy(schema_item: _CE, **kw) -> _CE: + if hasattr(schema_item, "_copy"): + return schema_item._copy(**kw) + else: + return schema_item.copy(**kw) # type: ignore[union-attr] + + +def _connectable_has_table( + connectable: Connection, tablename: str, schemaname: Union[str, None] +) -> bool: + return connectable.dialect.has_table(connectable, tablename, schemaname) 
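The helpers above (`_ensure_scope_for_ddl`, `_safe_begin_connection_transaction`) all follow one pattern: reuse a transaction the caller has already opened, and begin (and later commit) one only when the connection has none. A minimal standalone sketch of that pattern against a plain SQLAlchemy engine — the in-memory SQLite URL and the `ensure_transaction` name are illustrative, not part of alembic's API:

```python
from contextlib import contextmanager

from sqlalchemy import create_engine, text


@contextmanager
def ensure_transaction(connection):
    # hypothetical helper mirroring _ensure_scope_for_ddl: begin a
    # transaction only if the caller has not already opened one
    if connection.in_transaction():
        yield  # caller owns the transaction; leave commit/rollback to them
    else:
        with connection.begin():  # commits on success, rolls back on error
            yield


engine = create_engine("sqlite://")
with engine.connect() as conn:
    with ensure_transaction(conn):
        conn.execute(text("CREATE TABLE t (x INTEGER)"))
    # the helper opened and committed the transaction itself
    print(conn.in_transaction())  # False
```

This lets migration DDL run correctly whether env.py hands alembic a raw connection or one with a transaction already begun.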
+ + +def _exec_on_inspector(inspector, statement, **params): + with inspector._operation_context() as conn: + return conn.execute(statement, params) + + +def _nullability_might_be_unset(metadata_column): + from sqlalchemy.sql import schema + + return metadata_column._user_defined_nullable is schema.NULL_UNSPECIFIED + + +def _server_default_is_computed(*server_default) -> bool: + return any(isinstance(sd, schema.Computed) for sd in server_default) + + +def _server_default_is_identity(*server_default) -> bool: + return any(isinstance(sd, schema.Identity) for sd in server_default) + + +def _table_for_constraint(constraint: Constraint) -> Table: + if isinstance(constraint, ForeignKeyConstraint): + table = constraint.parent + assert table is not None + return table # type: ignore[return-value] + else: + return constraint.table + + +def _columns_for_constraint(constraint): + if isinstance(constraint, ForeignKeyConstraint): + return [fk.parent for fk in constraint.elements] + elif isinstance(constraint, CheckConstraint): + return _find_columns(constraint.sqltext) + else: + return list(constraint.columns) + + +def _resolve_for_variant(type_, dialect): + if _type_has_variants(type_): + base_type, mapping = _get_variant_mapping(type_) + return mapping.get(dialect.name, base_type) + else: + return type_ + + +if hasattr(sqltypes.TypeEngine, "_variant_mapping"): # 2.0 + + def _type_has_variants(type_): + return bool(type_._variant_mapping) + + def _get_variant_mapping(type_): + return type_, type_._variant_mapping + +else: + + def _type_has_variants(type_): + return type(type_) is sqltypes.Variant + + def _get_variant_mapping(type_): + return type_.impl, type_.mapping + + +def _fk_spec(constraint: ForeignKeyConstraint) -> Any: + if TYPE_CHECKING: + assert constraint.columns is not None + assert constraint.elements is not None + assert isinstance(constraint.parent, Table) + + source_columns = [ + constraint.columns[key].name for key in constraint.column_keys + ] + + source_table = constraint.parent.name + source_schema = constraint.parent.schema + target_schema = constraint.elements[0].column.table.schema + target_table = constraint.elements[0].column.table.name + target_columns = [element.column.name for element in constraint.elements] + ondelete = constraint.ondelete + onupdate = constraint.onupdate + deferrable = constraint.deferrable + initially = constraint.initially + return ( + source_schema, + source_table, + source_columns, + target_schema, + target_table, + target_columns, + onupdate, + ondelete, + deferrable, + initially, + ) + + +def _fk_is_self_referential(constraint: ForeignKeyConstraint) -> bool: + spec = constraint.elements[0]._get_colspec() + tokens = spec.split(".") + tokens.pop(-1) # colname + tablekey = ".".join(tokens) + assert constraint.parent is not None + return tablekey == constraint.parent.key + + +def _is_type_bound(constraint: Constraint) -> bool: + # this deals with SQLAlchemy #3260, don't copy CHECK constraints + # that will be generated by the type. 
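+    # (e.g. Boolean or Enum on a backend without a native type generate
+    # their own CHECK constraint, so copying it would duplicate the DDL)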
+ # new feature added for #3260 + return constraint._type_bound + + +def _find_columns(clause): + """locate Column objects within the given expression.""" + + cols: Set[ColumnElement[Any]] = set() + traverse(clause, {}, {"column": cols.add}) + return cols + + +def _remove_column_from_collection( + collection: ColumnCollection, column: Union[Column[Any], ColumnClause[Any]] +) -> None: + """remove a column from a ColumnCollection.""" + + # workaround for older SQLAlchemy, remove the + # same object that's present + assert column.key is not None + to_remove = collection[column.key] + + # SQLAlchemy 2.0 will use more ReadOnlyColumnCollection + # (renamed from ImmutableColumnCollection) + if hasattr(collection, "_immutable") or hasattr(collection, "_readonly"): + collection._parent.remove(to_remove) + else: + collection.remove(to_remove) + + +def _textual_index_column( + table: Table, text_: Union[str, TextClause, ColumnElement[Any]] +) -> Union[ColumnElement[Any], Column[Any]]: + """a workaround for the Index construct's severe lack of flexibility""" + if isinstance(text_, str): + c = Column(text_, sqltypes.NULLTYPE) + table.append_column(c) + return c + elif isinstance(text_, TextClause): + return _textual_index_element(table, text_) + elif isinstance(text_, _textual_index_element): + return _textual_index_column(table, text_.text) + elif isinstance(text_, sql.ColumnElement): + return _copy_expression(text_, table) + else: + raise ValueError("String or text() construct expected") + + +def _copy_expression(expression: _CE, target_table: Table) -> _CE: + def replace(col): + if ( + isinstance(col, Column) + and col.table is not None + and col.table is not target_table + ): + if col.name in target_table.c: + return target_table.c[col.name] + else: + c = _copy(col) + target_table.append_column(c) + return c + else: + return None + + return visitors.replacement_traverse( # type: ignore[call-overload] + expression, {}, replace + ) + + +class _textual_index_element(sql.ColumnElement): + """Wrap around a sqlalchemy text() construct in such a way that + we appear like a column-oriented SQL expression to an Index + construct. + + The issue here is that currently the Postgresql dialect, the biggest + recipient of functional indexes, keys all the index expressions to + the corresponding column expressions when rendering CREATE INDEX, + so the Index we create here needs to have a .columns collection that + is the same length as the .expressions collection. Ultimately + SQLAlchemy should support text() expressions in indexes. + + See SQLAlchemy issue 3174. 
+ + """ + + __visit_name__ = "_textual_idx_element" + + def __init__(self, table: Table, text: TextClause) -> None: + self.table = table + self.text = text + self.key = text.text + self.fake_column = schema.Column(self.text.text, sqltypes.NULLTYPE) + table.append_column(self.fake_column) + + def get_children(self, **kw): + return [self.fake_column] + + +@compiles(_textual_index_element) +def _render_textual_index_column( + element: _textual_index_element, compiler: SQLCompiler, **kw +) -> str: + return compiler.process(element.text, **kw) + + +class _literal_bindparam(BindParameter): + pass + + +@compiles(_literal_bindparam) +def _render_literal_bindparam( + element: _literal_bindparam, compiler: SQLCompiler, **kw +) -> str: + return compiler.render_literal_bindparam(element, **kw) + + +def _get_constraint_final_name( + constraint: Union[Index, Constraint], dialect: Optional[Dialect] +) -> Optional[str]: + if constraint.name is None: + return None + assert dialect is not None + # for SQLAlchemy 1.4 we would like to have the option to expand + # the use of "deferred" names for constraints as well as to have + # some flexibility with "None" name and similar; make use of new + # SQLAlchemy API to return what would be the final compiled form of + # the name for this dialect. + return dialect.identifier_preparer.format_constraint( + constraint, _alembic_quote=False + ) + + +def _constraint_is_named( + constraint: Union[Constraint, Index], dialect: Optional[Dialect] +) -> bool: + if constraint.name is None: + return False + assert dialect is not None + name = dialect.identifier_preparer.format_constraint( + constraint, _alembic_quote=False + ) + return name is not None + + +def is_expression_index(index: Index) -> bool: + for expr in index.expressions: + if is_expression(expr): + return True + return False + + +def is_expression(expr: Any) -> bool: + while isinstance(expr, UnaryExpression): + expr = expr.element + if not isinstance(expr, ColumnClause) or expr.is_literal: + return True + return False diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/LICENSE b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/LICENSE new file mode 100644 index 000000000..11069edd7 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/METADATA new file mode 100644 index 000000000..96f0bdf7d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/METADATA @@ -0,0 +1,330 @@ +Metadata-Version: 2.2 +Name: bcrypt +Version: 4.3.0 +Summary: Modern password hashing for your software and your servers +Author-email: The Python Cryptographic Authority developers +License: Apache-2.0 +Project-URL: homepage, https://github.com/pyca/bcrypt/ +Classifier: Development Status :: 5 - Production/Stable +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Classifier: Programming Language :: Python :: 3.13 +Requires-Python: >=3.8 +Description-Content-Type: text/x-rst +License-File: LICENSE +Provides-Extra: tests +Requires-Dist: pytest!=3.3.0,>=3.2.1; extra == "tests" +Provides-Extra: typecheck +Requires-Dist: mypy; extra == "typecheck" + +bcrypt +====== + +.. image:: https://img.shields.io/pypi/v/bcrypt.svg + :target: https://pypi.org/project/bcrypt/ + :alt: Latest Version + +.. image:: https://github.com/pyca/bcrypt/workflows/CI/badge.svg?branch=main + :target: https://github.com/pyca/bcrypt/actions?query=workflow%3ACI+branch%3Amain + +Acceptable password hashing for your software and your servers (but you should +really use argon2id or scrypt) + + +Installation +============ + +To install bcrypt, simply: + +.. code:: console + + $ pip install bcrypt + +Note that bcrypt should build very easily on Linux provided you have a C +compiler and a Rust compiler (the minimum supported Rust version is 1.56.0). + +For Debian and Ubuntu, the following command will ensure that the required dependencies are installed: + +.. code:: console + + $ sudo apt-get install build-essential cargo + +For Fedora and RHEL-derivatives, the following command will ensure that the required dependencies are installed: + +.. code:: console + + $ sudo yum install gcc cargo + +For Alpine, the following command will ensure that the required dependencies are installed: + +.. code:: console + + $ apk add --update musl-dev gcc cargo + + +Alternatives +============ + +While bcrypt remains an acceptable choice for password storage, depending on your specific use case you may also want to consider using scrypt (either via `standard library`_ or `cryptography`_) or argon2id via `argon2_cffi`_. + +Changelog +========= + +Unreleased +---------- + +* Dropped support for Python 3.7. +* We now support free-threaded Python 3.13. +* We now support PyPy 3.11. +* We now publish wheels for free-threaded Python 3.13, for PyPy 3.11 on + ``manylinux``, and for ARMv7l on ``manylinux``. + +4.2.1 +----- + +* Bump Rust dependency versions - this should resolve crashes on Python 3.13 + free-threaded builds. +* We no longer build ``manylinux`` wheels for PyPy 3.9. + +4.2.0 +----- + +* Bump Rust dependency versions +* Removed the ``BCRYPT_ALLOW_RUST_163`` environment variable. 
+
+4.1.3
+-----
+
+* Bump Rust dependency versions
+
+4.1.2
+-----
+
+* Publish both ``py37`` and ``py39`` wheels. This should resolve some errors
+  relating to initializing a module multiple times per process.
+
+4.1.1
+-----
+
+* Fixed the type signature on the ``kdf`` method.
+* Fixed packaging bug on Windows.
+* Fixed incompatibility with passlib package detection assumptions.
+
+4.1.0
+-----
+
+* Dropped support for Python 3.6.
+* Bumped MSRV to 1.64. (Note: Rust 1.63 can be used by setting the ``BCRYPT_ALLOW_RUST_163`` environment variable)
+
+4.0.1
+-----
+
+* We now build PyPy ``manylinux`` wheels.
+* Fixed a bug where passing an invalid ``salt`` to ``checkpw`` could result in
+  a ``pyo3_runtime.PanicException``. It now correctly raises a ``ValueError``.
+
+4.0.0
+-----
+
+* ``bcrypt`` is now implemented in Rust. Users building from source will need
+  to have a Rust compiler available. Nothing will change for users downloading
+  wheels.
+* We no longer ship ``manylinux2010`` wheels. Users should upgrade to the latest
+  ``pip`` to ensure this doesn’t cause issues downloading wheels on their
+  platform. We now ship ``manylinux_2_28`` wheels for users on new enough platforms.
+* ``NUL`` bytes are now allowed in inputs.
+
+
+3.2.2
+-----
+
+* Fixed packaging of ``py.typed`` files in wheels so that ``mypy`` works.
+
+3.2.1
+-----
+
+* Added support for compilation on z/OS
+* The next release of ``bcrypt`` will be 4.0 and it will require Rust at
+  compile time for users building from source. There will be no additional
+  requirement for users who are installing from wheels. Users on most
+  platforms will be able to obtain a wheel by making sure they have an up to
+  date ``pip``. The minimum supported Rust version will be 1.56.0.
+* This will be the final release for which we ship ``manylinux2010`` wheels.
+  Going forward the minimum supported manylinux ABI for our wheels will be
+  ``manylinux2014``. The vast majority of users will continue to receive
+  ``manylinux`` wheels provided they have an up to date ``pip``.
+
+
+3.2.0
+-----
+
+* Added typehints for library functions.
+* Dropped support for Python versions less than 3.6 (2.7, 3.4, 3.5).
+* Shipped ``abi3`` Windows wheels (requires pip >= 20).
+
+3.1.7
+-----
+
+* Set a ``setuptools`` lower bound for PEP517 wheel building.
+* We no longer distribute 32-bit ``manylinux1`` wheels. Continuing to produce
+  them was a maintenance burden.
+
+3.1.6
+-----
+
+* Added support for compilation on Haiku.
+
+3.1.5
+-----
+
+* Added support for compilation on AIX.
+* Dropped Python 2.6 and 3.3 support.
+* Switched to using ``abi3`` wheels for Python 3. If you are not getting a
+  wheel on a compatible platform please upgrade your ``pip`` version.
+
+3.1.4
+-----
+
+* Fixed compilation with mingw and on illumos.
+
+3.1.3
+-----
+* Fixed a compilation issue on Solaris.
+* Added a warning when using too few rounds with ``kdf``.
+
+3.1.2
+-----
+* Fixed a compile issue affecting big endian platforms.
+* Fixed invalid escape sequence warnings on Python 3.6.
+* Fixed building in non-UTF8 environments on Python 2.
+
+3.1.1
+-----
+* Resolved a ``UserWarning`` when used with ``cffi`` 1.8.3.
+
+3.1.0
+-----
+* Added support for ``checkpw``, a convenience method for verifying a password.
+* Ensure that you get a ``$2y$`` hash when you input a ``$2y$`` salt.
+* Fixed a regression where ``$2a`` hashes were vulnerable to a wraparound bug.
+* Fixed compilation under Alpine Linux.
+
+3.0.0
+-----
+* Switched the C backend to code obtained from the OpenBSD project rather than
+  openwall.
+* Added support for ``bcrypt_pbkdf`` via the ``kdf`` function.
+
+2.0.0
+-----
+* Added support for an adjustable prefix when calling ``gensalt``.
+* Switched to CFFI 1.0+
+
+Usage
+-----
+
+Password Hashing
+~~~~~~~~~~~~~~~~
+
+Hashing and then later checking that a password matches the previous hashed
+password is very simple:
+
+.. code:: pycon
+
+    >>> import bcrypt
+    >>> password = b"super secret password"
+    >>> # Hash a password for the first time, with a randomly-generated salt
+    >>> hashed = bcrypt.hashpw(password, bcrypt.gensalt())
+    >>> # Check that an unhashed password matches one that has previously been
+    >>> # hashed
+    >>> if bcrypt.checkpw(password, hashed):
+    ...     print("It Matches!")
+    ... else:
+    ...     print("It Does not Match :(")
+
+KDF
+~~~
+
+As of 3.0.0 ``bcrypt`` now offers a ``kdf`` function which does ``bcrypt_pbkdf``.
+This KDF is used in OpenSSH's newer encrypted private key format.
+
+.. code:: pycon
+
+    >>> import bcrypt
+    >>> key = bcrypt.kdf(
+    ...     password=b'password',
+    ...     salt=b'salt',
+    ...     desired_key_bytes=32,
+    ...     rounds=100)
+
+
+Adjustable Work Factor
+~~~~~~~~~~~~~~~~~~~~~~
+
+One of bcrypt's features is an adjustable logarithmic work factor. To adjust
+the work factor, pass the desired number of rounds to ``bcrypt.gensalt``
+(its ``rounds`` parameter defaults to 12):
+
+.. code:: pycon
+
+    >>> import bcrypt
+    >>> password = b"super secret password"
+    >>> # Hash a password for the first time, with a certain number of rounds
+    >>> hashed = bcrypt.hashpw(password, bcrypt.gensalt(14))
+    >>> # Check that an unhashed password matches one that has previously been
+    >>> # hashed
+    >>> if bcrypt.checkpw(password, hashed):
+    ...     print("It Matches!")
+    ... else:
+    ...     print("It Does not Match :(")
+
+
+Adjustable Prefix
+~~~~~~~~~~~~~~~~~
+
+Another one of bcrypt's features is an adjustable prefix to let you define what
+libraries you'll remain compatible with. To adjust this, pass either ``2a`` or
+``2b`` (the default) to ``bcrypt.gensalt(prefix=b"2b")`` as a bytes object.
+
+As of 3.0.0 the ``$2y$`` prefix is still supported in ``hashpw`` but deprecated.
+
+Maximum Password Length
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The bcrypt algorithm only handles passwords up to 72 characters; any characters
+beyond that are ignored. To work around this, a common approach is to hash a
+password with a cryptographic hash (such as ``sha256``) and then base64
+encode it to prevent NULL byte problems before hashing the result with
+``bcrypt``:
+
+.. code:: pycon
+
+    >>> import base64
+    >>> import hashlib
+    >>> import bcrypt
+    >>> password = b"an incredibly long password" * 10
+    >>> hashed = bcrypt.hashpw(
+    ...     base64.b64encode(hashlib.sha256(password).digest()),
+    ...     bcrypt.gensalt()
+    ... )
+
+Compatibility
+-------------
+
+This library should be compatible with py-bcrypt and will run on Python
+3.8+ (including free-threaded builds) and PyPy 3.
+
+Security
+--------
+
+``bcrypt`` follows the `same security policy as cryptography`_; if you
+identify a vulnerability, we ask you to contact us privately.
+
+.. _`same security policy as cryptography`: https://cryptography.io/en/latest/security.html
+.. _`standard library`: https://docs.python.org/3/library/hashlib.html#hashlib.scrypt
+.. _`argon2_cffi`: https://argon2-cffi.readthedocs.io
+..
_`cryptography`: https://cryptography.io/en/latest/hazmat/primitives/key-derivation-functions/#cryptography.hazmat.primitives.kdf.scrypt.Scrypt diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/RECORD new file mode 100644 index 000000000..aa6b8122c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/RECORD @@ -0,0 +1,11 @@ +bcrypt-4.3.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +bcrypt-4.3.0.dist-info/LICENSE,sha256=gXPVwptPlW1TJ4HSuG5OMPg-a3h43OGMkZRR1rpwfJA,10850 +bcrypt-4.3.0.dist-info/METADATA,sha256=95qX7ziIfmOF0kNM95YZuWhLVfFy-6EtssVvf1ZgeWg,10042 +bcrypt-4.3.0.dist-info/RECORD,, +bcrypt-4.3.0.dist-info/WHEEL,sha256=XlovOtcAZFqrc4OSNBtc5R3yDeRHyhWP24RdDnylFpY,111 +bcrypt-4.3.0.dist-info/top_level.txt,sha256=BkR_qBzDbSuycMzHWE1vzXrfYecAzUVmQs6G2CukqNI,7 +bcrypt/__init__.py,sha256=cv-NupIX6P7o6A4PK_F0ur6IZoDr3GnvyzFO9k16wKQ,1000 +bcrypt/__init__.pyi,sha256=ITUCB9mPVU8sKUbJQMDUH5YfQXZb1O55F9qvKZR_o8I,333 +bcrypt/__pycache__/__init__.cpython-310.pyc,, +bcrypt/_bcrypt.abi3.so,sha256=oMArVCuY_atg2H4SGNfM-zbfEgUOkd4qSiWn2nPqmXc,644928 +bcrypt/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/WHEEL new file mode 100644 index 000000000..dd95e91e5 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: setuptools (75.8.2) +Root-Is-Purelib: false +Tag: cp39-abi3-manylinux_2_34_x86_64 + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/top_level.txt b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/top_level.txt new file mode 100644 index 000000000..7f0b6e759 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt-4.3.0.dist-info/top_level.txt @@ -0,0 +1 @@ +bcrypt diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/__init__.py new file mode 100644 index 000000000..81a92fd42 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/__init__.py @@ -0,0 +1,43 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from ._bcrypt import ( + __author__, + __copyright__, + __email__, + __license__, + __summary__, + __title__, + __uri__, + checkpw, + gensalt, + hashpw, + kdf, +) +from ._bcrypt import ( + __version_ex__ as __version__, +) + +__all__ = [ + "__author__", + "__copyright__", + "__email__", + "__license__", + "__summary__", + "__title__", + "__uri__", + "__version__", + "checkpw", + "gensalt", + "hashpw", + "kdf", +] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/__init__.pyi b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/__init__.pyi new file mode 100644 index 000000000..12e4a2ef4 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/__init__.pyi @@ -0,0 +1,10 @@ +def gensalt(rounds: int = 12, prefix: bytes = b"2b") -> bytes: ... +def hashpw(password: bytes, salt: bytes) -> bytes: ... +def checkpw(password: bytes, hashed_password: bytes) -> bool: ... +def kdf( + password: bytes, + salt: bytes, + desired_key_bytes: int, + rounds: int, + ignore_few_rounds: bool = False, +) -> bytes: ... diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/_bcrypt.abi3.so b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/_bcrypt.abi3.so new file mode 100755 index 000000000..ae9f55b6d Binary files /dev/null and b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/_bcrypt.abi3.so differ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/py.typed b/staybuddy/server/venv/lib/python3.10/site-packages/bcrypt/py.typed new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/LICENSE.txt b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/LICENSE.txt new file mode 100644 index 000000000..79c9825ad --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/LICENSE.txt @@ -0,0 +1,20 @@ +Copyright 2010 Jason Kirtland + +Permission is hereby granted, free of charge, to any person obtaining a +copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be included +in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
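Before the blinker files below, a note on the bcrypt stubs just added: the `__init__.pyi` file is the package's entire public surface. A minimal sketch exercising those four functions (the password, salt, and round counts here are illustrative values, not recommendations):

```python
import bcrypt

password = b"correct horse battery staple"

# gensalt(rounds=12, prefix=b"2b") -> bytes; hashpw(password, salt) -> bytes
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# checkpw(password, hashed_password) -> bool
assert bcrypt.checkpw(password, hashed)
assert not bcrypt.checkpw(b"wrong guess", hashed)

# kdf(...) -> bytes: bcrypt_pbkdf key derivation, per the METADATA above
key = bcrypt.kdf(password=password, salt=b"some-salt", desired_key_bytes=32, rounds=100)
assert len(key) == 32
```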
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/METADATA
new file mode 100644
index 000000000..efa45f598
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/METADATA
@@ -0,0 +1,60 @@
+Metadata-Version: 2.1
+Name: blinker
+Version: 1.8.2
+Summary: Fast, simple object-to-object and broadcast signaling
+Author: Jason Kirtland
+Maintainer-email: Pallets Ecosystem
+Requires-Python: >=3.8
+Description-Content-Type: text/markdown
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Programming Language :: Python
+Classifier: Typing :: Typed
+Project-URL: Chat, https://discord.gg/pallets
+Project-URL: Documentation, https://blinker.readthedocs.io
+Project-URL: Source, https://github.com/pallets-eco/blinker/
+
+# Blinker
+
+Blinker provides a fast dispatching system that allows any number of
+interested parties to subscribe to events, or "signals".
+
+
+## Pallets Community Ecosystem
+
+> [!IMPORTANT]\
+> This project is part of the Pallets Community Ecosystem. Pallets is the open
+> source organization that maintains Flask; Pallets-Eco enables community
+> maintenance of related projects. If you are interested in helping maintain
+> this project, please reach out on [the Pallets Discord server][discord].
+>
+> [discord]: https://discord.gg/pallets
+
+
+## Example
+
+Signal receivers can subscribe to specific senders or receive signals
+sent by any sender.
+
+```pycon
+>>> from blinker import signal
+>>> started = signal('round-started')
+>>> def each(round):
+...     print(f"Round {round}!")
+...
+>>> started.connect(each)
+
+>>> def round_two(round):
+...     print("This is round two.")
+...
+>>> started.connect(round_two, sender=2)
+
+>>> for round in range(1, 4):
+...     started.send(round)
+...
+Round 1!
+Round 2!
+This is round two.
+Round 3!
+``` + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/RECORD new file mode 100644 index 000000000..b202054fa --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/RECORD @@ -0,0 +1,13 @@ +blinker-1.8.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +blinker-1.8.2.dist-info/LICENSE.txt,sha256=nrc6HzhZekqhcCXSrhvjg5Ykx5XphdTw6Xac4p-spGc,1054 +blinker-1.8.2.dist-info/METADATA,sha256=3tEx40hm9IEofyFqDPJsDPE9MAIEhtifapoSp7FqzuA,1633 +blinker-1.8.2.dist-info/RECORD,, +blinker-1.8.2.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +blinker-1.8.2.dist-info/WHEEL,sha256=EZbGkh7Ie4PoZfRQ8I0ZuP9VklN_TvcZ6DSE5Uar4z4,81 +blinker/__init__.py,sha256=ymyJY_PoTgBzaPgdr4dq-RRsGh7D-sYQIGMNp8Rx4qc,1577 +blinker/__pycache__/__init__.cpython-310.pyc,, +blinker/__pycache__/_utilities.cpython-310.pyc,, +blinker/__pycache__/base.cpython-310.pyc,, +blinker/_utilities.py,sha256=0J7eeXXTUx0Ivf8asfpx0ycVkp0Eqfqnj117x2mYX9E,1675 +blinker/base.py,sha256=nIZJEtXQ8LLZZJrwVp2wQcdfCzDixvAHR9VpSWiyVcQ,22574 +blinker/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/REQUESTED b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/REQUESTED new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/WHEEL new file mode 100644 index 000000000..3b5e64b5e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/blinker-1.8.2.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: flit 3.9.0 +Root-Is-Purelib: true +Tag: py3-none-any diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/blinker/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/blinker/__init__.py new file mode 100644 index 000000000..c93527eca --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/blinker/__init__.py @@ -0,0 +1,60 @@ +from __future__ import annotations + +import typing as t + +from .base import ANY +from .base import default_namespace +from .base import NamedSignal +from .base import Namespace +from .base import Signal +from .base import signal + +__all__ = [ + "ANY", + "default_namespace", + "NamedSignal", + "Namespace", + "Signal", + "signal", +] + + +def __getattr__(name: str) -> t.Any: + import warnings + + if name == "__version__": + import importlib.metadata + + warnings.warn( + "The '__version__' attribute is deprecated and will be removed in" + " Blinker 1.9.0. Use feature detection or" + " 'importlib.metadata.version(\"blinker\")' instead.", + DeprecationWarning, + stacklevel=2, + ) + return importlib.metadata.version("blinker") + + if name == "receiver_connected": + from .base import _receiver_connected + + warnings.warn( + "The global 'receiver_connected' signal is deprecated and will be" + " removed in Blinker 1.9. Use 'Signal.receiver_connected' and" + " 'Signal.receiver_disconnected' instead.", + DeprecationWarning, + stacklevel=2, + ) + return _receiver_connected + + if name == "WeakNamespace": + from .base import _WeakNamespace + + warnings.warn( + "'WeakNamespace' is deprecated and will be removed in Blinker 1.9." 
+ " Use 'Namespace' instead.", + DeprecationWarning, + stacklevel=2, + ) + return _WeakNamespace + + raise AttributeError(name) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/blinker/_utilities.py b/staybuddy/server/venv/lib/python3.10/site-packages/blinker/_utilities.py new file mode 100644 index 000000000..000c902a2 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/blinker/_utilities.py @@ -0,0 +1,64 @@ +from __future__ import annotations + +import collections.abc as c +import inspect +import typing as t +from weakref import ref +from weakref import WeakMethod + +T = t.TypeVar("T") + + +class Symbol: + """A constant symbol, nicer than ``object()``. Repeated calls return the + same instance. + + >>> Symbol('foo') is Symbol('foo') + True + >>> Symbol('foo') + foo + """ + + symbols: t.ClassVar[dict[str, Symbol]] = {} + + def __new__(cls, name: str) -> Symbol: + if name in cls.symbols: + return cls.symbols[name] + + obj = super().__new__(cls) + cls.symbols[name] = obj + return obj + + def __init__(self, name: str) -> None: + self.name = name + + def __repr__(self) -> str: + return self.name + + def __getnewargs__(self) -> tuple[t.Any, ...]: + return (self.name,) + + +def make_id(obj: object) -> c.Hashable: + """Get a stable identifier for a receiver or sender, to be used as a dict + key or in a set. + """ + if inspect.ismethod(obj): + # The id of a bound method is not stable, but the id of the unbound + # function and instance are. + return id(obj.__func__), id(obj.__self__) + + if isinstance(obj, (str, int)): + # Instances with the same value always compare equal and have the same + # hash, even if the id may change. + return obj + + # Assume other types are not hashable but will always be the same instance. + return id(obj) + + +def make_ref(obj: T, callback: c.Callable[[ref[T]], None] | None = None) -> ref[T]: + if inspect.ismethod(obj): + return WeakMethod(obj, callback) # type: ignore[arg-type, return-value] + + return ref(obj, callback) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/blinker/base.py b/staybuddy/server/venv/lib/python3.10/site-packages/blinker/base.py new file mode 100644 index 000000000..ec494b141 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/blinker/base.py @@ -0,0 +1,621 @@ +from __future__ import annotations + +import collections.abc as c +import typing as t +import warnings +import weakref +from collections import defaultdict +from contextlib import AbstractContextManager +from contextlib import contextmanager +from functools import cached_property +from inspect import iscoroutinefunction +from weakref import WeakValueDictionary + +from ._utilities import make_id +from ._utilities import make_ref +from ._utilities import Symbol + +if t.TYPE_CHECKING: + F = t.TypeVar("F", bound=c.Callable[..., t.Any]) + +ANY = Symbol("ANY") +"""Symbol for "any sender".""" + +ANY_ID = 0 + + +class Signal: + """A notification emitter. + + :param doc: The docstring for the signal. + """ + + ANY = ANY + """An alias for the :data:`~blinker.ANY` sender symbol.""" + + set_class: type[set[t.Any]] = set + """The set class to use for tracking connected receivers and senders. + Python's ``set`` is unordered. If receivers must be dispatched in the order + they were connected, an ordered set implementation can be used. + + .. versionadded:: 1.7 + """ + + @cached_property + def receiver_connected(self) -> Signal: + """Emitted at the end of each :meth:`connect` call. 
+
+        The signal sender is the signal instance, and the :meth:`connect`
+        arguments are passed through: ``receiver``, ``sender``, and ``weak``.
+
+        .. versionadded:: 1.2
+        """
+        return Signal(doc="Emitted after a receiver connects.")
+
+    @cached_property
+    def receiver_disconnected(self) -> Signal:
+        """Emitted at the end of each :meth:`disconnect` call.
+
+        The sender is the signal instance, and the :meth:`disconnect` arguments
+        are passed through: ``receiver`` and ``sender``.
+
+        This signal is emitted **only** when :meth:`disconnect` is called
+        explicitly. This signal cannot be emitted by an automatic disconnect
+        when a weakly referenced receiver or sender goes out of scope, as the
+        instance is no longer available to be used as the sender for this
+        signal.
+
+        An alternative approach is available by subscribing to
+        :attr:`receiver_connected` and setting up a custom weakref cleanup
+        callback on weak receivers and senders.
+
+        .. versionadded:: 1.2
+        """
+        return Signal(doc="Emitted after a receiver disconnects.")
+
+    def __init__(self, doc: str | None = None) -> None:
+        if doc:
+            self.__doc__ = doc
+
+        self.receivers: dict[
+            t.Any, weakref.ref[c.Callable[..., t.Any]] | c.Callable[..., t.Any]
+        ] = {}
+        """The map of connected receivers. Useful to quickly check if any
+        receivers are connected to the signal: ``if s.receivers:``. The
+        structure and data are not part of the public API, but checking its
+        boolean value is.
+        """
+
+        self.is_muted: bool = False
+        self._by_receiver: dict[t.Any, set[t.Any]] = defaultdict(self.set_class)
+        self._by_sender: dict[t.Any, set[t.Any]] = defaultdict(self.set_class)
+        self._weak_senders: dict[t.Any, weakref.ref[t.Any]] = {}
+
+    def connect(self, receiver: F, sender: t.Any = ANY, weak: bool = True) -> F:
+        """Connect ``receiver`` to be called when the signal is sent by
+        ``sender``.
+
+        :param receiver: The callable to call when :meth:`send` is called with
+            the given ``sender``, passing ``sender`` as a positional argument
+            along with any extra keyword arguments.
+        :param sender: Any object or :data:`ANY`. ``receiver`` will only be
+            called when :meth:`send` is called with this sender. If ``ANY``, the
+            receiver will be called for any sender. A receiver may be connected
+            to multiple senders by calling :meth:`connect` multiple times.
+        :param weak: Track the receiver with a :mod:`weakref`. The receiver will
+            be automatically disconnected when it is garbage collected. When
+            connecting a receiver defined within a function, set to ``False``,
+            otherwise it will be disconnected when the function scope ends.
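+
+        A minimal sketch (``on_ready`` and the sender string are illustrative
+        names, not part of the API)::
+
+            ready = Signal()
+
+            def on_ready(sender, **extra):
+                print(f"ready: {sender}")
+
+            ready.connect(on_ready)  # default sender=ANY: called for any sender
+            ready.send("app")        # -> on_ready("app")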
+        """
+        receiver_id = make_id(receiver)
+        sender_id = ANY_ID if sender is ANY else make_id(sender)
+
+        if weak:
+            self.receivers[receiver_id] = make_ref(
+                receiver, self._make_cleanup_receiver(receiver_id)
+            )
+        else:
+            self.receivers[receiver_id] = receiver
+
+        self._by_sender[sender_id].add(receiver_id)
+        self._by_receiver[receiver_id].add(sender_id)
+
+        if sender is not ANY and sender_id not in self._weak_senders:
+            # store a cleanup for weakref-able senders
+            try:
+                self._weak_senders[sender_id] = make_ref(
+                    sender, self._make_cleanup_sender(sender_id)
+                )
+            except TypeError:
+                pass
+
+        if "receiver_connected" in self.__dict__ and self.receiver_connected.receivers:
+            try:
+                self.receiver_connected.send(
+                    self, receiver=receiver, sender=sender, weak=weak
+                )
+            except TypeError:
+                # TODO no explanation or test for this
+                self.disconnect(receiver, sender)
+                raise
+
+        if _receiver_connected.receivers and self is not _receiver_connected:
+            try:
+                _receiver_connected.send(
+                    self, receiver_arg=receiver, sender_arg=sender, weak_arg=weak
+                )
+            except TypeError:
+                self.disconnect(receiver, sender)
+                raise
+
+        return receiver
+
+    def connect_via(self, sender: t.Any, weak: bool = False) -> c.Callable[[F], F]:
+        """Connect the decorated function to be called when the signal is sent
+        by ``sender``.
+
+        The decorated function will be called when :meth:`send` is called with
+        the given ``sender``, passing ``sender`` as a positional argument along
+        with any extra keyword arguments.
+
+        :param sender: Any object or :data:`ANY`. ``receiver`` will only be
+            called when :meth:`send` is called with this sender. If ``ANY``, the
+            receiver will be called for any sender. A receiver may be connected
+            to multiple senders by calling :meth:`connect` multiple times.
+        :param weak: Track the receiver with a :mod:`weakref`. The receiver will
+            be automatically disconnected when it is garbage collected. When
+            connecting a receiver defined within a function, set to ``False``,
+            otherwise it will be disconnected when the function scope ends.
+
+        .. versionadded:: 1.1
+        """
+
+        def decorator(fn: F) -> F:
+            self.connect(fn, sender, weak)
+            return fn
+
+        return decorator
+
+    @contextmanager
+    def connected_to(
+        self, receiver: c.Callable[..., t.Any], sender: t.Any = ANY
+    ) -> c.Generator[None, None, None]:
+        """A context manager that temporarily connects ``receiver`` to the
+        signal while a ``with`` block executes. When the block exits, the
+        receiver is disconnected. Useful for tests.
+
+        :param receiver: The callable to call when :meth:`send` is called with
+            the given ``sender``, passing ``sender`` as a positional argument
+            along with any extra keyword arguments.
+        :param sender: Any object or :data:`ANY`. ``receiver`` will only be
+            called when :meth:`send` is called with this sender. If ``ANY``, the
+            receiver will be called for any sender.
+
+        .. versionadded:: 1.1
+        """
+        self.connect(receiver, sender=sender, weak=False)
+
+        try:
+            yield None
+        finally:
+            self.disconnect(receiver)
+
+    @contextmanager
+    def muted(self) -> c.Generator[None, None, None]:
+        """A context manager that temporarily disables the signal. No receivers
+        will be called if the signal is sent, until the ``with`` block exits.
+        Useful for tests.
+        """
+        self.is_muted = True
+
+        try:
+            yield None
+        finally:
+            self.is_muted = False
+
+    def temporarily_connected_to(
+        self, receiver: c.Callable[..., t.Any], sender: t.Any = ANY
+    ) -> AbstractContextManager[None]:
+        """Deprecated alias for :meth:`connected_to`.
+
+        ..
deprecated:: 1.1 + Renamed to ``connected_to``. Will be removed in Blinker 1.9. + + .. versionadded:: 0.9 + """ + warnings.warn( + "'temporarily_connected_to' is renamed to 'connected_to'. The old name is" + " deprecated and will be removed in Blinker 1.9.", + DeprecationWarning, + stacklevel=2, + ) + return self.connected_to(receiver, sender) + + def send( + self, + sender: t.Any | None = None, + /, + *, + _async_wrapper: c.Callable[ + [c.Callable[..., c.Coroutine[t.Any, t.Any, t.Any]]], c.Callable[..., t.Any] + ] + | None = None, + **kwargs: t.Any, + ) -> list[tuple[c.Callable[..., t.Any], t.Any]]: + """Call all receivers that are connected to the given ``sender`` + or :data:`ANY`. Each receiver is called with ``sender`` as a positional + argument along with any extra keyword arguments. Return a list of + ``(receiver, return value)`` tuples. + + The order receivers are called is undefined, but can be influenced by + setting :attr:`set_class`. + + If a receiver raises an exception, that exception will propagate up. + This makes debugging straightforward, with an assumption that correctly + implemented receivers will not raise. + + :param sender: Call receivers connected to this sender, in addition to + those connected to :data:`ANY`. + :param _async_wrapper: Will be called on any receivers that are async + coroutines to turn them into sync callables. For example, could run + the receiver with an event loop. + :param kwargs: Extra keyword arguments to pass to each receiver. + + .. versionchanged:: 1.7 + Added the ``_async_wrapper`` argument. + """ + if self.is_muted: + return [] + + results = [] + + for receiver in self.receivers_for(sender): + if iscoroutinefunction(receiver): + if _async_wrapper is None: + raise RuntimeError("Cannot send to a coroutine function.") + + result = _async_wrapper(receiver)(sender, **kwargs) + else: + result = receiver(sender, **kwargs) + + results.append((receiver, result)) + + return results + + async def send_async( + self, + sender: t.Any | None = None, + /, + *, + _sync_wrapper: c.Callable[ + [c.Callable[..., t.Any]], c.Callable[..., c.Coroutine[t.Any, t.Any, t.Any]] + ] + | None = None, + **kwargs: t.Any, + ) -> list[tuple[c.Callable[..., t.Any], t.Any]]: + """Await all receivers that are connected to the given ``sender`` + or :data:`ANY`. Each receiver is called with ``sender`` as a positional + argument along with any extra keyword arguments. Return a list of + ``(receiver, return value)`` tuples. + + The order receivers are called is undefined, but can be influenced by + setting :attr:`set_class`. + + If a receiver raises an exception, that exception will propagate up. + This makes debugging straightforward, with an assumption that correctly + implemented receivers will not raise. + + :param sender: Call receivers connected to this sender, in addition to + those connected to :data:`ANY`. + :param _sync_wrapper: Will be called on any receivers that are sync + callables to turn them into async coroutines. For example, + could call the receiver in a thread. + :param kwargs: Extra keyword arguments to pass to each receiver. + + .. 
versionadded:: 1.7 + """ + if self.is_muted: + return [] + + results = [] + + for receiver in self.receivers_for(sender): + if not iscoroutinefunction(receiver): + if _sync_wrapper is None: + raise RuntimeError("Cannot send to a non-coroutine function.") + + result = await _sync_wrapper(receiver)(sender, **kwargs) + else: + result = await receiver(sender, **kwargs) + + results.append((receiver, result)) + + return results + + def has_receivers_for(self, sender: t.Any) -> bool: + """Check if there is at least one receiver that will be called with the + given ``sender``. A receiver connected to :data:`ANY` will always be + called, regardless of sender. Does not check if weakly referenced + receivers are still live. See :meth:`receivers_for` for a stronger + search. + + :param sender: Check for receivers connected to this sender, in addition + to those connected to :data:`ANY`. + """ + if not self.receivers: + return False + + if self._by_sender[ANY_ID]: + return True + + if sender is ANY: + return False + + return make_id(sender) in self._by_sender + + def receivers_for( + self, sender: t.Any + ) -> c.Generator[c.Callable[..., t.Any], None, None]: + """Yield each receiver to be called for ``sender``, in addition to those + to be called for :data:`ANY`. Weakly referenced receivers that are not + live will be disconnected and skipped. + + :param sender: Yield receivers connected to this sender, in addition + to those connected to :data:`ANY`. + """ + # TODO: test receivers_for(ANY) + if not self.receivers: + return + + sender_id = make_id(sender) + + if sender_id in self._by_sender: + ids = self._by_sender[ANY_ID] | self._by_sender[sender_id] + else: + ids = self._by_sender[ANY_ID].copy() + + for receiver_id in ids: + receiver = self.receivers.get(receiver_id) + + if receiver is None: + continue + + if isinstance(receiver, weakref.ref): + strong = receiver() + + if strong is None: + self._disconnect(receiver_id, ANY_ID) + continue + + yield strong + else: + yield receiver + + def disconnect(self, receiver: c.Callable[..., t.Any], sender: t.Any = ANY) -> None: + """Disconnect ``receiver`` from being called when the signal is sent by + ``sender``. + + :param receiver: A connected receiver callable. + :param sender: Disconnect from only this sender. By default, disconnect + from all senders. + """ + sender_id: c.Hashable + + if sender is ANY: + sender_id = ANY_ID + else: + sender_id = make_id(sender) + + receiver_id = make_id(receiver) + self._disconnect(receiver_id, sender_id) + + if ( + "receiver_disconnected" in self.__dict__ + and self.receiver_disconnected.receivers + ): + self.receiver_disconnected.send(self, receiver=receiver, sender=sender) + + def _disconnect(self, receiver_id: c.Hashable, sender_id: c.Hashable) -> None: + if sender_id == ANY_ID: + if self._by_receiver.pop(receiver_id, None) is not None: + for bucket in self._by_sender.values(): + bucket.discard(receiver_id) + + self.receivers.pop(receiver_id, None) + else: + self._by_sender[sender_id].discard(receiver_id) + self._by_receiver[receiver_id].discard(sender_id) + + def _make_cleanup_receiver( + self, receiver_id: c.Hashable + ) -> c.Callable[[weakref.ref[c.Callable[..., t.Any]]], None]: + """Create a callback function to disconnect a weakly referenced + receiver when it is garbage collected. 
+        """
+
+        def cleanup(ref: weakref.ref[c.Callable[..., t.Any]]) -> None:
+            self._disconnect(receiver_id, ANY_ID)
+
+        return cleanup
+
+    def _make_cleanup_sender(
+        self, sender_id: c.Hashable
+    ) -> c.Callable[[weakref.ref[t.Any]], None]:
+        """Create a callback function to disconnect all receivers for a weakly
+        referenced sender when it is garbage collected.
+        """
+        assert sender_id != ANY_ID
+
+        def cleanup(ref: weakref.ref[t.Any]) -> None:
+            self._weak_senders.pop(sender_id, None)
+
+            for receiver_id in self._by_sender.pop(sender_id, ()):
+                self._by_receiver[receiver_id].discard(sender_id)
+
+        return cleanup
+
+    def _cleanup_bookkeeping(self) -> None:
+        """Prune unused sender/receiver bookkeeping. Not threadsafe.
+
+        Connecting & disconnecting leaves behind a small amount of bookkeeping
+        data. Typical workloads using Blinker, for example in most web apps,
+        Flask, CLI scripts, etc., are not adversely affected by this
+        bookkeeping.
+
+        With a long-running process performing dynamic signal routing with high
+        volume, e.g. connecting to function closures, senders are all unique
+        object instances. Doing all of this over and over may cause memory usage
+        to grow due to extraneous bookkeeping. (An empty ``set`` for each stale
+        sender/receiver pair.)
+
+        This method will prune that bookkeeping away, with the caveat that such
+        pruning is not threadsafe. The risk is that cleanup of a fully
+        disconnected receiver/sender pair occurs while another thread is
+        connecting that same pair. If you are in the highly dynamic, unique
+        receiver/sender situation that has led you to this method, that failure
+        mode is perhaps not a big deal for you.
+        """
+        for mapping in (self._by_sender, self._by_receiver):
+            for ident, bucket in list(mapping.items()):
+                if not bucket:
+                    mapping.pop(ident, None)
+
+    def _clear_state(self) -> None:
+        """Disconnect all receivers and senders. Useful for tests."""
+        self._weak_senders.clear()
+        self.receivers.clear()
+        self._by_sender.clear()
+        self._by_receiver.clear()
+
+
+_receiver_connected = Signal(
+    """\
+Sent by a :class:`Signal` after a receiver connects.
+
+:argument: the Signal that was connected to
+:keyword receiver_arg: the connected receiver
+:keyword sender_arg: the sender to connect to
+:keyword weak_arg: true if the connection to receiver_arg is a weak reference
+
+.. deprecated:: 1.2
+    Individual signals have their own :attr:`~Signal.receiver_connected` and
+    :attr:`~Signal.receiver_disconnected` signals with a slightly simplified
+    call signature. This global signal will be removed in Blinker 1.9.
+"""
+)
+
+
+class NamedSignal(Signal):
+    """A named generic notification emitter. The name is not used by the signal
+    itself, but matches the key in the :class:`Namespace` that it belongs to.
+
+    :param name: The name of the signal within the namespace.
+    :param doc: The docstring for the signal.
+    """
+
+    def __init__(self, name: str, doc: str | None = None) -> None:
+        super().__init__(doc)
+
+        #: The name of this signal.
+        self.name: str = name
+
+    def __repr__(self) -> str:
+        base = super().__repr__()
+        return f"{base[:-1]}; {self.name!r}>"  # noqa: E702
+
+
+if t.TYPE_CHECKING:
+
+    class PNamespaceSignal(t.Protocol):
+        def __call__(self, name: str, doc: str | None = None) -> NamedSignal: ...
+ + # Python < 3.9 + _NamespaceBase = dict[str, NamedSignal] # type: ignore[misc] +else: + _NamespaceBase = dict + + +class Namespace(_NamespaceBase): + """A dict mapping names to signals.""" + + def signal(self, name: str, doc: str | None = None) -> NamedSignal: + """Return the :class:`NamedSignal` for the given ``name``, creating it + if required. Repeated calls with the same name return the same signal. + + :param name: The name of the signal. + :param doc: The docstring of the signal. + """ + if name not in self: + self[name] = NamedSignal(name, doc) + + return self[name] + + +class _WeakNamespace(WeakValueDictionary): # type: ignore[type-arg] + """A weak mapping of names to signals. + + Automatically cleans up unused signals when the last reference goes out + of scope. This namespace implementation provides similar behavior to Blinker + <= 1.2. + + .. deprecated:: 1.3 + Will be removed in Blinker 1.9. + + .. versionadded:: 1.3 + """ + + def __init__(self) -> None: + warnings.warn( + "'WeakNamespace' is deprecated and will be removed in Blinker 1.9." + " Use 'Namespace' instead.", + DeprecationWarning, + stacklevel=2, + ) + super().__init__() + + def signal(self, name: str, doc: str | None = None) -> NamedSignal: + """Return the :class:`NamedSignal` for the given ``name``, creating it + if required. Repeated calls with the same name return the same signal. + + :param name: The name of the signal. + :param doc: The docstring of the signal. + """ + if name not in self: + self[name] = NamedSignal(name, doc) + + return self[name] # type: ignore[no-any-return] + + +default_namespace: Namespace = Namespace() +"""A default :class:`Namespace` for creating named signals. :func:`signal` +creates a :class:`NamedSignal` in this namespace. +""" + +signal: PNamespaceSignal = default_namespace.signal +"""Return a :class:`NamedSignal` in :data:`default_namespace` with the given +``name``, creating it if required. Repeated calls with the same name return the +same signal. +""" + + +def __getattr__(name: str) -> t.Any: + if name == "receiver_connected": + warnings.warn( + "The global 'receiver_connected' signal is deprecated and will be" + " removed in Blinker 1.9. Use 'Signal.receiver_connected' and" + " 'Signal.receiver_disconnected' instead.", + DeprecationWarning, + stacklevel=2, + ) + return _receiver_connected + + if name == "WeakNamespace": + warnings.warn( + "'WeakNamespace' is deprecated and will be removed in Blinker 1.9." 
+ " Use 'Namespace' instead.", + DeprecationWarning, + stacklevel=2, + ) + return _WeakNamespace + + raise AttributeError(name) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/blinker/py.typed b/staybuddy/server/venv/lib/python3.10/site-packages/blinker/py.typed new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/LICENSE.txt b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/LICENSE.txt new file mode 100644 index 000000000..d12a84918 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/LICENSE.txt @@ -0,0 +1,28 @@ +Copyright 2014 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
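To close out the blinker sources above before click's metadata: `Namespace.signal` and the module-level `signal` both promise that repeated calls with the same name return the same object. A short sketch of that contract (the signal names here are illustrative):

```python
from blinker import Namespace, signal

ns = Namespace()
ready = ns.signal("round-ready")

# Repeated lookups return the very same NamedSignal instance.
assert ns.signal("round-ready") is ready

# The module-level signal() behaves identically, via default_namespace.
assert signal("app-ready") is signal("app-ready")
```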
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/METADATA new file mode 100644 index 000000000..366d1a7e4 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/METADATA @@ -0,0 +1,74 @@ +Metadata-Version: 2.3 +Name: click +Version: 8.1.8 +Summary: Composable command line interface toolkit +Maintainer-email: Pallets +Requires-Python: >=3.7 +Description-Content-Type: text/markdown +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Typing :: Typed +Requires-Dist: colorama; platform_system == 'Windows' +Requires-Dist: importlib-metadata; python_version < '3.8' +Project-URL: Changes, https://click.palletsprojects.com/changes/ +Project-URL: Chat, https://discord.gg/pallets +Project-URL: Documentation, https://click.palletsprojects.com/ +Project-URL: Donate, https://palletsprojects.com/donate +Project-URL: Source, https://github.com/pallets/click/ + +# $ click_ + +Click is a Python package for creating beautiful command line interfaces +in a composable way with as little code as necessary. It's the "Command +Line Interface Creation Kit". It's highly configurable but comes with +sensible defaults out of the box. + +It aims to make the process of writing command line tools quick and fun +while also preventing any frustration caused by the inability to +implement an intended CLI API. + +Click in three points: + +- Arbitrary nesting of commands +- Automatic help page generation +- Supports lazy loading of subcommands at runtime + + +## A Simple Example + +```python +import click + +@click.command() +@click.option("--count", default=1, help="Number of greetings.") +@click.option("--name", prompt="Your name", help="The person to greet.") +def hello(count, name): + """Simple program that greets NAME for a total of COUNT times.""" + for _ in range(count): + click.echo(f"Hello, {name}!") + +if __name__ == '__main__': + hello() +``` + +``` +$ python hello.py --count=3 +Your name: Click +Hello, Click! +Hello, Click! +Hello, Click! +``` + + +## Donate + +The Pallets organization develops and supports Click and other popular +packages. In order to grow the community of contributors and users, and +allow the maintainers to devote more time to the projects, [please +donate today][]. 
+ +[please donate today]: https://palletsprojects.com/donate + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/RECORD new file mode 100644 index 000000000..c0a5d1650 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/RECORD @@ -0,0 +1,39 @@ +click-8.1.8.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +click-8.1.8.dist-info/LICENSE.txt,sha256=morRBqOU6FO_4h9C9OctWSgZoigF2ZG18ydQKSkrZY0,1475 +click-8.1.8.dist-info/METADATA,sha256=WJtQ6uGS2ybLfvUE4vC0XIhIBr4yFGwjrMBR2fiCQ-Q,2263 +click-8.1.8.dist-info/RECORD,, +click-8.1.8.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +click-8.1.8.dist-info/WHEEL,sha256=CpUCUxeHQbRN5UGRQHYRJorO5Af-Qy_fHMctcQ8DSGI,82 +click/__init__.py,sha256=j1DJeCbga4ribkv5uyvIAzI0oFN13fW9mevDKShFelo,3188 +click/__pycache__/__init__.cpython-310.pyc,, +click/__pycache__/_compat.cpython-310.pyc,, +click/__pycache__/_termui_impl.cpython-310.pyc,, +click/__pycache__/_textwrap.cpython-310.pyc,, +click/__pycache__/_winconsole.cpython-310.pyc,, +click/__pycache__/core.cpython-310.pyc,, +click/__pycache__/decorators.cpython-310.pyc,, +click/__pycache__/exceptions.cpython-310.pyc,, +click/__pycache__/formatting.cpython-310.pyc,, +click/__pycache__/globals.cpython-310.pyc,, +click/__pycache__/parser.cpython-310.pyc,, +click/__pycache__/shell_completion.cpython-310.pyc,, +click/__pycache__/termui.cpython-310.pyc,, +click/__pycache__/testing.cpython-310.pyc,, +click/__pycache__/types.cpython-310.pyc,, +click/__pycache__/utils.cpython-310.pyc,, +click/_compat.py,sha256=IGKh_J5QdfKELitnRfTGHneejWxoCw_NX9tfMbdcg3w,18730 +click/_termui_impl.py,sha256=a5z7I9gOFeMmu7Gb6_RPyQ8GPuVP1EeblixcWSPSQPk,24783 +click/_textwrap.py,sha256=10fQ64OcBUMuK7mFvh8363_uoOxPlRItZBmKzRJDgoY,1353 +click/_winconsole.py,sha256=5ju3jQkcZD0W27WEMGqmEP4y_crUVzPCqsX_FYb7BO0,7860 +click/core.py,sha256=Q1nEVdctZwvIPOlt4vfHko0TYnHCeE40UEEul8Wpyvs,114748 +click/decorators.py,sha256=7t6F-QWowtLh6F_6l-4YV4Y4yNTcqFQEu9i37zIz68s,18925 +click/exceptions.py,sha256=V7zDT6emqJ8iNl0kF1P5kpFmLMWQ1T1L7aNNKM4YR0w,9600 +click/formatting.py,sha256=Frf0-5W33-loyY_i9qrwXR8-STnW3m5gvyxLVUdyxyk,9706 +click/globals.py,sha256=cuJ6Bbo073lgEEmhjr394PeM-QFmXM-Ci-wmfsd7H5g,1954 +click/parser.py,sha256=h4sndcpF5OHrZQN8vD8IWb5OByvW7ABbhRToxovrqS8,19067 +click/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +click/shell_completion.py,sha256=TR0dXEGcvWb9Eo3aaQEXGhnvNS3FF4H4QcuLnvAvYo4,18636 +click/termui.py,sha256=dLxiS70UOvIYBda_nEEZaPAFOVDVmRs1sEPMuLDowQo,28310 +click/testing.py,sha256=3RA8anCf7TZ8-5RAF5it2Te-aWXBAL5VLasQnMiC2ZQ,16282 +click/types.py,sha256=BD5Qqq4h-8kawBmOIzJlmq4xzThAf4wCvaOLZSBDNx0,36422 +click/utils.py,sha256=ce-IrO9ilII76LGkU354pOdHbepM8UftfNH7SfMU_28,20330 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/REQUESTED b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/REQUESTED new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/WHEEL new file mode 100644 index 000000000..e3c6feefa --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click-8.1.8.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: flit 3.10.1 +Root-Is-Purelib: true +Tag: py3-none-any diff 
--git a/staybuddy/server/venv/lib/python3.10/site-packages/click/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/__init__.py new file mode 100644 index 000000000..2610d0e14 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/__init__.py @@ -0,0 +1,75 @@ +""" +Click is a simple Python module inspired by the stdlib optparse to make +writing command line scripts fun. Unlike other modules, it's based +around a simple API that does not come with too much magic and is +composable. +""" + +from .core import Argument as Argument +from .core import BaseCommand as BaseCommand +from .core import Command as Command +from .core import CommandCollection as CommandCollection +from .core import Context as Context +from .core import Group as Group +from .core import MultiCommand as MultiCommand +from .core import Option as Option +from .core import Parameter as Parameter +from .decorators import argument as argument +from .decorators import command as command +from .decorators import confirmation_option as confirmation_option +from .decorators import group as group +from .decorators import help_option as help_option +from .decorators import HelpOption as HelpOption +from .decorators import make_pass_decorator as make_pass_decorator +from .decorators import option as option +from .decorators import pass_context as pass_context +from .decorators import pass_obj as pass_obj +from .decorators import password_option as password_option +from .decorators import version_option as version_option +from .exceptions import Abort as Abort +from .exceptions import BadArgumentUsage as BadArgumentUsage +from .exceptions import BadOptionUsage as BadOptionUsage +from .exceptions import BadParameter as BadParameter +from .exceptions import ClickException as ClickException +from .exceptions import FileError as FileError +from .exceptions import MissingParameter as MissingParameter +from .exceptions import NoSuchOption as NoSuchOption +from .exceptions import UsageError as UsageError +from .formatting import HelpFormatter as HelpFormatter +from .formatting import wrap_text as wrap_text +from .globals import get_current_context as get_current_context +from .parser import OptionParser as OptionParser +from .termui import clear as clear +from .termui import confirm as confirm +from .termui import echo_via_pager as echo_via_pager +from .termui import edit as edit +from .termui import getchar as getchar +from .termui import launch as launch +from .termui import pause as pause +from .termui import progressbar as progressbar +from .termui import prompt as prompt +from .termui import secho as secho +from .termui import style as style +from .termui import unstyle as unstyle +from .types import BOOL as BOOL +from .types import Choice as Choice +from .types import DateTime as DateTime +from .types import File as File +from .types import FLOAT as FLOAT +from .types import FloatRange as FloatRange +from .types import INT as INT +from .types import IntRange as IntRange +from .types import ParamType as ParamType +from .types import Path as Path +from .types import STRING as STRING +from .types import Tuple as Tuple +from .types import UNPROCESSED as UNPROCESSED +from .types import UUID as UUID +from .utils import echo as echo +from .utils import format_filename as format_filename +from .utils import get_app_dir as get_app_dir +from .utils import get_binary_stream as get_binary_stream +from .utils import get_text_stream as get_text_stream +from .utils import open_file as open_file + +__version__ 
= "8.1.8" diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/_compat.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/_compat.py new file mode 100644 index 000000000..9153d150c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/_compat.py @@ -0,0 +1,623 @@ +import codecs +import io +import os +import re +import sys +import typing as t +from weakref import WeakKeyDictionary + +CYGWIN = sys.platform.startswith("cygwin") +WIN = sys.platform.startswith("win") +auto_wrap_for_ansi: t.Optional[t.Callable[[t.TextIO], t.TextIO]] = None +_ansi_re = re.compile(r"\033\[[;?0-9]*[a-zA-Z]") + + +def _make_text_stream( + stream: t.BinaryIO, + encoding: t.Optional[str], + errors: t.Optional[str], + force_readable: bool = False, + force_writable: bool = False, +) -> t.TextIO: + if encoding is None: + encoding = get_best_encoding(stream) + if errors is None: + errors = "replace" + return _NonClosingTextIOWrapper( + stream, + encoding, + errors, + line_buffering=True, + force_readable=force_readable, + force_writable=force_writable, + ) + + +def is_ascii_encoding(encoding: str) -> bool: + """Checks if a given encoding is ascii.""" + try: + return codecs.lookup(encoding).name == "ascii" + except LookupError: + return False + + +def get_best_encoding(stream: t.IO[t.Any]) -> str: + """Returns the default stream encoding if not found.""" + rv = getattr(stream, "encoding", None) or sys.getdefaultencoding() + if is_ascii_encoding(rv): + return "utf-8" + return rv + + +class _NonClosingTextIOWrapper(io.TextIOWrapper): + def __init__( + self, + stream: t.BinaryIO, + encoding: t.Optional[str], + errors: t.Optional[str], + force_readable: bool = False, + force_writable: bool = False, + **extra: t.Any, + ) -> None: + self._stream = stream = t.cast( + t.BinaryIO, _FixupStream(stream, force_readable, force_writable) + ) + super().__init__(stream, encoding, errors, **extra) + + def __del__(self) -> None: + try: + self.detach() + except Exception: + pass + + def isatty(self) -> bool: + # https://bitbucket.org/pypy/pypy/issue/1803 + return self._stream.isatty() + + +class _FixupStream: + """The new io interface needs more from streams than streams + traditionally implement. As such, this fix-up code is necessary in + some circumstances. + + The forcing of readable and writable flags are there because some tools + put badly patched objects on sys (one such offender are certain version + of jupyter notebook). 
+ """ + + def __init__( + self, + stream: t.BinaryIO, + force_readable: bool = False, + force_writable: bool = False, + ): + self._stream = stream + self._force_readable = force_readable + self._force_writable = force_writable + + def __getattr__(self, name: str) -> t.Any: + return getattr(self._stream, name) + + def read1(self, size: int) -> bytes: + f = getattr(self._stream, "read1", None) + + if f is not None: + return t.cast(bytes, f(size)) + + return self._stream.read(size) + + def readable(self) -> bool: + if self._force_readable: + return True + x = getattr(self._stream, "readable", None) + if x is not None: + return t.cast(bool, x()) + try: + self._stream.read(0) + except Exception: + return False + return True + + def writable(self) -> bool: + if self._force_writable: + return True + x = getattr(self._stream, "writable", None) + if x is not None: + return t.cast(bool, x()) + try: + self._stream.write("") # type: ignore + except Exception: + try: + self._stream.write(b"") + except Exception: + return False + return True + + def seekable(self) -> bool: + x = getattr(self._stream, "seekable", None) + if x is not None: + return t.cast(bool, x()) + try: + self._stream.seek(self._stream.tell()) + except Exception: + return False + return True + + +def _is_binary_reader(stream: t.IO[t.Any], default: bool = False) -> bool: + try: + return isinstance(stream.read(0), bytes) + except Exception: + return default + # This happens in some cases where the stream was already + # closed. In this case, we assume the default. + + +def _is_binary_writer(stream: t.IO[t.Any], default: bool = False) -> bool: + try: + stream.write(b"") + except Exception: + try: + stream.write("") + return False + except Exception: + pass + return default + return True + + +def _find_binary_reader(stream: t.IO[t.Any]) -> t.Optional[t.BinaryIO]: + # We need to figure out if the given stream is already binary. + # This can happen because the official docs recommend detaching + # the streams to get binary streams. Some code might do this, so + # we need to deal with this case explicitly. + if _is_binary_reader(stream, False): + return t.cast(t.BinaryIO, stream) + + buf = getattr(stream, "buffer", None) + + # Same situation here; this time we assume that the buffer is + # actually binary in case it's closed. + if buf is not None and _is_binary_reader(buf, True): + return t.cast(t.BinaryIO, buf) + + return None + + +def _find_binary_writer(stream: t.IO[t.Any]) -> t.Optional[t.BinaryIO]: + # We need to figure out if the given stream is already binary. + # This can happen because the official docs recommend detaching + # the streams to get binary streams. Some code might do this, so + # we need to deal with this case explicitly. + if _is_binary_writer(stream, False): + return t.cast(t.BinaryIO, stream) + + buf = getattr(stream, "buffer", None) + + # Same situation here; this time we assume that the buffer is + # actually binary in case it's closed. + if buf is not None and _is_binary_writer(buf, True): + return t.cast(t.BinaryIO, buf) + + return None + + +def _stream_is_misconfigured(stream: t.TextIO) -> bool: + """A stream is misconfigured if its encoding is ASCII.""" + # If the stream does not have an encoding set, we assume it's set + # to ASCII. This appears to happen in certain unittest + # environments. It's not quite clear what the correct behavior is + # but this at least will force Click to recover somehow. 
+ return is_ascii_encoding(getattr(stream, "encoding", None) or "ascii") + + +def _is_compat_stream_attr(stream: t.TextIO, attr: str, value: t.Optional[str]) -> bool: + """A stream attribute is compatible if it is equal to the + desired value or the desired value is unset and the attribute + has a value. + """ + stream_value = getattr(stream, attr, None) + return stream_value == value or (value is None and stream_value is not None) + + +def _is_compatible_text_stream( + stream: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str] +) -> bool: + """Check if a stream's encoding and errors attributes are + compatible with the desired values. + """ + return _is_compat_stream_attr( + stream, "encoding", encoding + ) and _is_compat_stream_attr(stream, "errors", errors) + + +def _force_correct_text_stream( + text_stream: t.IO[t.Any], + encoding: t.Optional[str], + errors: t.Optional[str], + is_binary: t.Callable[[t.IO[t.Any], bool], bool], + find_binary: t.Callable[[t.IO[t.Any]], t.Optional[t.BinaryIO]], + force_readable: bool = False, + force_writable: bool = False, +) -> t.TextIO: + if is_binary(text_stream, False): + binary_reader = t.cast(t.BinaryIO, text_stream) + else: + text_stream = t.cast(t.TextIO, text_stream) + # If the stream looks compatible, and won't default to a + # misconfigured ascii encoding, return it as-is. + if _is_compatible_text_stream(text_stream, encoding, errors) and not ( + encoding is None and _stream_is_misconfigured(text_stream) + ): + return text_stream + + # Otherwise, get the underlying binary reader. + possible_binary_reader = find_binary(text_stream) + + # If that's not possible, silently use the original reader + # and get mojibake instead of exceptions. + if possible_binary_reader is None: + return text_stream + + binary_reader = possible_binary_reader + + # Default errors to replace instead of strict in order to get + # something that works. + if errors is None: + errors = "replace" + + # Wrap the binary stream in a text stream with the correct + # encoding parameters. 
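+    # For example, an ascii-configured sys.stdout gets re-wrapped here
+    # around sys.stdout.buffer; with encoding=None, _make_text_stream
+    # falls back to UTF-8 via get_best_encoding().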
+ return _make_text_stream( + binary_reader, + encoding, + errors, + force_readable=force_readable, + force_writable=force_writable, + ) + + +def _force_correct_text_reader( + text_reader: t.IO[t.Any], + encoding: t.Optional[str], + errors: t.Optional[str], + force_readable: bool = False, +) -> t.TextIO: + return _force_correct_text_stream( + text_reader, + encoding, + errors, + _is_binary_reader, + _find_binary_reader, + force_readable=force_readable, + ) + + +def _force_correct_text_writer( + text_writer: t.IO[t.Any], + encoding: t.Optional[str], + errors: t.Optional[str], + force_writable: bool = False, +) -> t.TextIO: + return _force_correct_text_stream( + text_writer, + encoding, + errors, + _is_binary_writer, + _find_binary_writer, + force_writable=force_writable, + ) + + +def get_binary_stdin() -> t.BinaryIO: + reader = _find_binary_reader(sys.stdin) + if reader is None: + raise RuntimeError("Was not able to determine binary stream for sys.stdin.") + return reader + + +def get_binary_stdout() -> t.BinaryIO: + writer = _find_binary_writer(sys.stdout) + if writer is None: + raise RuntimeError("Was not able to determine binary stream for sys.stdout.") + return writer + + +def get_binary_stderr() -> t.BinaryIO: + writer = _find_binary_writer(sys.stderr) + if writer is None: + raise RuntimeError("Was not able to determine binary stream for sys.stderr.") + return writer + + +def get_text_stdin( + encoding: t.Optional[str] = None, errors: t.Optional[str] = None +) -> t.TextIO: + rv = _get_windows_console_stream(sys.stdin, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_reader(sys.stdin, encoding, errors, force_readable=True) + + +def get_text_stdout( + encoding: t.Optional[str] = None, errors: t.Optional[str] = None +) -> t.TextIO: + rv = _get_windows_console_stream(sys.stdout, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_writer(sys.stdout, encoding, errors, force_writable=True) + + +def get_text_stderr( + encoding: t.Optional[str] = None, errors: t.Optional[str] = None +) -> t.TextIO: + rv = _get_windows_console_stream(sys.stderr, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_writer(sys.stderr, encoding, errors, force_writable=True) + + +def _wrap_io_open( + file: t.Union[str, "os.PathLike[str]", int], + mode: str, + encoding: t.Optional[str], + errors: t.Optional[str], +) -> t.IO[t.Any]: + """Handles not passing ``encoding`` and ``errors`` in binary mode.""" + if "b" in mode: + return open(file, mode) + + return open(file, mode, encoding=encoding, errors=errors) + + +def open_stream( + filename: "t.Union[str, os.PathLike[str]]", + mode: str = "r", + encoding: t.Optional[str] = None, + errors: t.Optional[str] = "strict", + atomic: bool = False, +) -> t.Tuple[t.IO[t.Any], bool]: + binary = "b" in mode + filename = os.fspath(filename) + + # Standard streams first. These are simple because they ignore the + # atomic flag. Use fsdecode to handle Path("-"). + if os.fsdecode(filename) == "-": + if any(m in mode for m in ["w", "a", "x"]): + if binary: + return get_binary_stdout(), False + return get_text_stdout(encoding=encoding, errors=errors), False + if binary: + return get_binary_stdin(), False + return get_text_stdin(encoding=encoding, errors=errors), False + + # Non-atomic writes directly go out through the regular open functions. 
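+    # Illustrative call: open_stream("out.txt", "w", atomic=True) returns
+    # a proxy that writes to a temp file in the same directory and
+    # os.replace()s it over out.txt on close; the second tuple element
+    # tells the caller whether it owns (and should close) the stream.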
+ if not atomic: + return _wrap_io_open(filename, mode, encoding, errors), True + + # Some usability stuff for atomic writes + if "a" in mode: + raise ValueError( + "Appending to an existing file is not supported, because that" + " would involve an expensive `copy`-operation to a temporary" + " file. Open the file in normal `w`-mode and copy explicitly" + " if that's what you're after." + ) + if "x" in mode: + raise ValueError("Use the `overwrite`-parameter instead.") + if "w" not in mode: + raise ValueError("Atomic writes only make sense with `w`-mode.") + + # Atomic writes are more complicated. They work by opening a file + # as a proxy in the same folder and then using the fdopen + # functionality to wrap it in a Python file. Then we wrap it in an + # atomic file that moves the file over on close. + import errno + import random + + try: + perm: t.Optional[int] = os.stat(filename).st_mode + except OSError: + perm = None + + flags = os.O_RDWR | os.O_CREAT | os.O_EXCL + + if binary: + flags |= getattr(os, "O_BINARY", 0) + + while True: + tmp_filename = os.path.join( + os.path.dirname(filename), + f".__atomic-write{random.randrange(1 << 32):08x}", + ) + try: + fd = os.open(tmp_filename, flags, 0o666 if perm is None else perm) + break + except OSError as e: + if e.errno == errno.EEXIST or ( + os.name == "nt" + and e.errno == errno.EACCES + and os.path.isdir(e.filename) + and os.access(e.filename, os.W_OK) + ): + continue + raise + + if perm is not None: + os.chmod(tmp_filename, perm) # in case perm includes bits in umask + + f = _wrap_io_open(fd, mode, encoding, errors) + af = _AtomicFile(f, tmp_filename, os.path.realpath(filename)) + return t.cast(t.IO[t.Any], af), True + + +class _AtomicFile: + def __init__(self, f: t.IO[t.Any], tmp_filename: str, real_filename: str) -> None: + self._f = f + self._tmp_filename = tmp_filename + self._real_filename = real_filename + self.closed = False + + @property + def name(self) -> str: + return self._real_filename + + def close(self, delete: bool = False) -> None: + if self.closed: + return + self._f.close() + os.replace(self._tmp_filename, self._real_filename) + self.closed = True + + def __getattr__(self, name: str) -> t.Any: + return getattr(self._f, name) + + def __enter__(self) -> "_AtomicFile": + return self + + def __exit__(self, exc_type: t.Optional[t.Type[BaseException]], *_: t.Any) -> None: + self.close(delete=exc_type is not None) + + def __repr__(self) -> str: + return repr(self._f) + + +def strip_ansi(value: str) -> str: + return _ansi_re.sub("", value) + + +def _is_jupyter_kernel_output(stream: t.IO[t.Any]) -> bool: + while isinstance(stream, (_FixupStream, _NonClosingTextIOWrapper)): + stream = stream._stream + + return stream.__class__.__module__.startswith("ipykernel.") + + +def should_strip_ansi( + stream: t.Optional[t.IO[t.Any]] = None, color: t.Optional[bool] = None +) -> bool: + if color is None: + if stream is None: + stream = sys.stdin + return not isatty(stream) and not _is_jupyter_kernel_output(stream) + return not color + + +# On Windows, wrap the output streams with colorama to support ANSI +# color codes. 
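+# (colorama does this by intercepting writes and translating ANSI escape
+# sequences into the equivalent Win32 console API calls.)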
+# NOTE: double check is needed so mypy does not analyze this on Linux +if sys.platform.startswith("win") and WIN: + from ._winconsole import _get_windows_console_stream + + def _get_argv_encoding() -> str: + import locale + + return locale.getpreferredencoding() + + _ansi_stream_wrappers: t.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary() + + def auto_wrap_for_ansi( + stream: t.TextIO, color: t.Optional[bool] = None + ) -> t.TextIO: + """Support ANSI color and style codes on Windows by wrapping a + stream with colorama. + """ + try: + cached = _ansi_stream_wrappers.get(stream) + except Exception: + cached = None + + if cached is not None: + return cached + + import colorama + + strip = should_strip_ansi(stream, color) + ansi_wrapper = colorama.AnsiToWin32(stream, strip=strip) + rv = t.cast(t.TextIO, ansi_wrapper.stream) + _write = rv.write + + def _safe_write(s): + try: + return _write(s) + except BaseException: + ansi_wrapper.reset_all() + raise + + rv.write = _safe_write + + try: + _ansi_stream_wrappers[stream] = rv + except Exception: + pass + + return rv + +else: + + def _get_argv_encoding() -> str: + return getattr(sys.stdin, "encoding", None) or sys.getfilesystemencoding() + + def _get_windows_console_stream( + f: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str] + ) -> t.Optional[t.TextIO]: + return None + + +def term_len(x: str) -> int: + return len(strip_ansi(x)) + + +def isatty(stream: t.IO[t.Any]) -> bool: + try: + return stream.isatty() + except Exception: + return False + + +def _make_cached_stream_func( + src_func: t.Callable[[], t.Optional[t.TextIO]], + wrapper_func: t.Callable[[], t.TextIO], +) -> t.Callable[[], t.Optional[t.TextIO]]: + cache: t.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary() + + def func() -> t.Optional[t.TextIO]: + stream = src_func() + + if stream is None: + return None + + try: + rv = cache.get(stream) + except Exception: + rv = None + if rv is not None: + return rv + rv = wrapper_func() + try: + cache[stream] = rv + except Exception: + pass + return rv + + return func + + +_default_text_stdin = _make_cached_stream_func(lambda: sys.stdin, get_text_stdin) +_default_text_stdout = _make_cached_stream_func(lambda: sys.stdout, get_text_stdout) +_default_text_stderr = _make_cached_stream_func(lambda: sys.stderr, get_text_stderr) + + +binary_streams: t.Mapping[str, t.Callable[[], t.BinaryIO]] = { + "stdin": get_binary_stdin, + "stdout": get_binary_stdout, + "stderr": get_binary_stderr, +} + +text_streams: t.Mapping[ + str, t.Callable[[t.Optional[str], t.Optional[str]], t.TextIO] +] = { + "stdin": get_text_stdin, + "stdout": get_text_stdout, + "stderr": get_text_stderr, +} diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/_termui_impl.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/_termui_impl.py new file mode 100644 index 000000000..ad9f8f6c9 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/_termui_impl.py @@ -0,0 +1,788 @@ +""" +This module contains implementations for the termui module. To keep the +import time of Click down, some infrequently used functionality is +placed in this module and only imported as needed. 
+""" + +import contextlib +import math +import os +import sys +import time +import typing as t +from gettext import gettext as _ +from io import StringIO +from shutil import which +from types import TracebackType + +from ._compat import _default_text_stdout +from ._compat import CYGWIN +from ._compat import get_best_encoding +from ._compat import isatty +from ._compat import open_stream +from ._compat import strip_ansi +from ._compat import term_len +from ._compat import WIN +from .exceptions import ClickException +from .utils import echo + +V = t.TypeVar("V") + +if os.name == "nt": + BEFORE_BAR = "\r" + AFTER_BAR = "\n" +else: + BEFORE_BAR = "\r\033[?25l" + AFTER_BAR = "\033[?25h\n" + + +class ProgressBar(t.Generic[V]): + def __init__( + self, + iterable: t.Optional[t.Iterable[V]], + length: t.Optional[int] = None, + fill_char: str = "#", + empty_char: str = " ", + bar_template: str = "%(bar)s", + info_sep: str = " ", + show_eta: bool = True, + show_percent: t.Optional[bool] = None, + show_pos: bool = False, + item_show_func: t.Optional[t.Callable[[t.Optional[V]], t.Optional[str]]] = None, + label: t.Optional[str] = None, + file: t.Optional[t.TextIO] = None, + color: t.Optional[bool] = None, + update_min_steps: int = 1, + width: int = 30, + ) -> None: + self.fill_char = fill_char + self.empty_char = empty_char + self.bar_template = bar_template + self.info_sep = info_sep + self.show_eta = show_eta + self.show_percent = show_percent + self.show_pos = show_pos + self.item_show_func = item_show_func + self.label: str = label or "" + + if file is None: + file = _default_text_stdout() + + # There are no standard streams attached to write to. For example, + # pythonw on Windows. + if file is None: + file = StringIO() + + self.file = file + self.color = color + self.update_min_steps = update_min_steps + self._completed_intervals = 0 + self.width: int = width + self.autowidth: bool = width == 0 + + if length is None: + from operator import length_hint + + length = length_hint(iterable, -1) + + if length == -1: + length = None + if iterable is None: + if length is None: + raise TypeError("iterable or length is required") + iterable = t.cast(t.Iterable[V], range(length)) + self.iter: t.Iterable[V] = iter(iterable) + self.length = length + self.pos = 0 + self.avg: t.List[float] = [] + self.last_eta: float + self.start: float + self.start = self.last_eta = time.time() + self.eta_known: bool = False + self.finished: bool = False + self.max_width: t.Optional[int] = None + self.entered: bool = False + self.current_item: t.Optional[V] = None + self.is_hidden: bool = not isatty(self.file) + self._last_line: t.Optional[str] = None + + def __enter__(self) -> "ProgressBar[V]": + self.entered = True + self.render_progress() + return self + + def __exit__( + self, + exc_type: t.Optional[t.Type[BaseException]], + exc_value: t.Optional[BaseException], + tb: t.Optional[TracebackType], + ) -> None: + self.render_finish() + + def __iter__(self) -> t.Iterator[V]: + if not self.entered: + raise RuntimeError("You need to use progress bars in a with block.") + self.render_progress() + return self.generator() + + def __next__(self) -> V: + # Iteration is defined in terms of a generator function, + # returned by iter(self); use that to define next(). This works + # because `self.iter` is an iterable consumed by that generator, + # so it is re-entry safe. Calling `next(self.generator())` + # twice works and does "what you want". 
+ return next(iter(self)) + + def render_finish(self) -> None: + if self.is_hidden: + return + self.file.write(AFTER_BAR) + self.file.flush() + + @property + def pct(self) -> float: + if self.finished: + return 1.0 + return min(self.pos / (float(self.length or 1) or 1), 1.0) + + @property + def time_per_iteration(self) -> float: + if not self.avg: + return 0.0 + return sum(self.avg) / float(len(self.avg)) + + @property + def eta(self) -> float: + if self.length is not None and not self.finished: + return self.time_per_iteration * (self.length - self.pos) + return 0.0 + + def format_eta(self) -> str: + if self.eta_known: + t = int(self.eta) + seconds = t % 60 + t //= 60 + minutes = t % 60 + t //= 60 + hours = t % 24 + t //= 24 + if t > 0: + return f"{t}d {hours:02}:{minutes:02}:{seconds:02}" + else: + return f"{hours:02}:{minutes:02}:{seconds:02}" + return "" + + def format_pos(self) -> str: + pos = str(self.pos) + if self.length is not None: + pos += f"/{self.length}" + return pos + + def format_pct(self) -> str: + return f"{int(self.pct * 100): 4}%"[1:] + + def format_bar(self) -> str: + if self.length is not None: + bar_length = int(self.pct * self.width) + bar = self.fill_char * bar_length + bar += self.empty_char * (self.width - bar_length) + elif self.finished: + bar = self.fill_char * self.width + else: + chars = list(self.empty_char * (self.width or 1)) + if self.time_per_iteration != 0: + chars[ + int( + (math.cos(self.pos * self.time_per_iteration) / 2.0 + 0.5) + * self.width + ) + ] = self.fill_char + bar = "".join(chars) + return bar + + def format_progress_line(self) -> str: + show_percent = self.show_percent + + info_bits = [] + if self.length is not None and show_percent is None: + show_percent = not self.show_pos + + if self.show_pos: + info_bits.append(self.format_pos()) + if show_percent: + info_bits.append(self.format_pct()) + if self.show_eta and self.eta_known and not self.finished: + info_bits.append(self.format_eta()) + if self.item_show_func is not None: + item_info = self.item_show_func(self.current_item) + if item_info is not None: + info_bits.append(item_info) + + return ( + self.bar_template + % { + "label": self.label, + "bar": self.format_bar(), + "info": self.info_sep.join(info_bits), + } + ).rstrip() + + def render_progress(self) -> None: + import shutil + + if self.is_hidden: + # Only output the label as it changes if the output is not a + # TTY. Use file=stderr if you expect to be piping stdout. + if self._last_line != self.label: + self._last_line = self.label + echo(self.label, file=self.file, color=self.color) + + return + + buf = [] + # Update width in case the terminal has been resized + if self.autowidth: + old_width = self.width + self.width = 0 + clutter_length = term_len(self.format_progress_line()) + new_width = max(0, shutil.get_terminal_size().columns - clutter_length) + if new_width < old_width: + buf.append(BEFORE_BAR) + buf.append(" " * self.max_width) # type: ignore + self.max_width = new_width + self.width = new_width + + clear_width = self.width + if self.max_width is not None: + clear_width = self.max_width + + buf.append(BEFORE_BAR) + line = self.format_progress_line() + line_len = term_len(line) + if self.max_width is None or self.max_width < line_len: + self.max_width = line_len + + buf.append(line) + buf.append(" " * (clear_width - line_len)) + line = "".join(buf) + # Render the line only if it changed. 
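+        # BEFORE_BAR ("\r") has already returned the cursor to column 0,
+        # so re-echoing an identical line would only add flicker and I/O.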
+ + if line != self._last_line: + self._last_line = line + echo(line, file=self.file, color=self.color, nl=False) + self.file.flush() + + def make_step(self, n_steps: int) -> None: + self.pos += n_steps + if self.length is not None and self.pos >= self.length: + self.finished = True + + if (time.time() - self.last_eta) < 1.0: + return + + self.last_eta = time.time() + + # self.avg is a rolling list of length <= 7 of steps where steps are + # defined as time elapsed divided by the total progress through + # self.length. + if self.pos: + step = (time.time() - self.start) / self.pos + else: + step = time.time() - self.start + + self.avg = self.avg[-6:] + [step] + + self.eta_known = self.length is not None + + def update(self, n_steps: int, current_item: t.Optional[V] = None) -> None: + """Update the progress bar by advancing a specified number of + steps, and optionally set the ``current_item`` for this new + position. + + :param n_steps: Number of steps to advance. + :param current_item: Optional item to set as ``current_item`` + for the updated position. + + .. versionchanged:: 8.0 + Added the ``current_item`` optional parameter. + + .. versionchanged:: 8.0 + Only render when the number of steps meets the + ``update_min_steps`` threshold. + """ + if current_item is not None: + self.current_item = current_item + + self._completed_intervals += n_steps + + if self._completed_intervals >= self.update_min_steps: + self.make_step(self._completed_intervals) + self.render_progress() + self._completed_intervals = 0 + + def finish(self) -> None: + self.eta_known = False + self.current_item = None + self.finished = True + + def generator(self) -> t.Iterator[V]: + """Return a generator which yields the items added to the bar + during construction, and updates the progress bar *after* the + yielded block returns. + """ + # WARNING: the iterator interface for `ProgressBar` relies on + # this and only works because this is a simple generator which + # doesn't create or manage additional state. If this function + # changes, the impact should be evaluated both against + # `iter(bar)` and `next(bar)`. `next()` in particular may call + # `self.generator()` repeatedly, and this must remain safe in + # order for that interface to work. + if not self.entered: + raise RuntimeError("You need to use progress bars in a with block.") + + if self.is_hidden: + yield from self.iter + else: + for rv in self.iter: + self.current_item = rv + + # This allows show_item_func to be updated before the + # item is processed. Only trigger at the beginning of + # the update interval. + if self._completed_intervals == 0: + self.render_progress() + + yield rv + self.update(1) + + self.finish() + self.render_progress() + + +def pager(generator: t.Iterable[str], color: t.Optional[bool] = None) -> None: + """Decide what method to use for paging through text.""" + stdout = _default_text_stdout() + + # There are no standard streams attached to write to. For example, + # pythonw on Windows. 
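+    # Fallback order below: honor $PAGER if set, then platform defaults
+    # ("more" on Windows/OS2, otherwise "less"), and finally _nullpager
+    # when there is no TTY or no usable pager on PATH.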
+ if stdout is None: + stdout = StringIO() + + if not isatty(sys.stdin) or not isatty(stdout): + return _nullpager(stdout, generator, color) + pager_cmd = (os.environ.get("PAGER", None) or "").strip() + if pager_cmd: + if WIN: + if _tempfilepager(generator, pager_cmd, color): + return + elif _pipepager(generator, pager_cmd, color): + return + if os.environ.get("TERM") in ("dumb", "emacs"): + return _nullpager(stdout, generator, color) + if (WIN or sys.platform.startswith("os2")) and _tempfilepager( + generator, "more", color + ): + return + if _pipepager(generator, "less", color): + return + + import tempfile + + fd, filename = tempfile.mkstemp() + os.close(fd) + try: + if _pipepager(generator, "more", color): + return + return _nullpager(stdout, generator, color) + finally: + os.unlink(filename) + + +def _pipepager(generator: t.Iterable[str], cmd: str, color: t.Optional[bool]) -> bool: + """Page through text by feeding it to another program. Invoking a + pager through this might support colors. + + Returns True if the command was found, False otherwise and thus another + pager should be attempted. + """ + cmd_absolute = which(cmd) + if cmd_absolute is None: + return False + + import subprocess + + env = dict(os.environ) + + # If we're piping to less we might support colors under the + # condition that + cmd_detail = cmd.rsplit("/", 1)[-1].split() + if color is None and cmd_detail[0] == "less": + less_flags = f"{os.environ.get('LESS', '')}{' '.join(cmd_detail[1:])}" + if not less_flags: + env["LESS"] = "-R" + color = True + elif "r" in less_flags or "R" in less_flags: + color = True + + c = subprocess.Popen( + [cmd_absolute], + shell=True, + stdin=subprocess.PIPE, + env=env, + errors="replace", + text=True, + ) + assert c.stdin is not None + try: + for text in generator: + if not color: + text = strip_ansi(text) + + c.stdin.write(text) + except (OSError, KeyboardInterrupt): + pass + else: + c.stdin.close() + + # Less doesn't respect ^C, but catches it for its own UI purposes (aborting + # search or other commands inside less). + # + # That means when the user hits ^C, the parent process (click) terminates, + # but less is still alive, paging the output and messing up the terminal. + # + # If the user wants to make the pager exit on ^C, they should set + # `LESS='-K'`. It's not our decision to make. + while True: + try: + c.wait() + except KeyboardInterrupt: + pass + else: + break + + return True + + +def _tempfilepager( + generator: t.Iterable[str], + cmd: str, + color: t.Optional[bool], +) -> bool: + """Page through text by invoking a program on a temporary file. + + Returns True if the command was found, False otherwise and thus another + pager should be attempted. + """ + # Which is necessary for Windows, it is also recommended in the Popen docs. + cmd_absolute = which(cmd) + if cmd_absolute is None: + return False + + import subprocess + import tempfile + + fd, filename = tempfile.mkstemp() + # TODO: This never terminates if the passed generator never terminates. + text = "".join(generator) + if not color: + text = strip_ansi(text) + encoding = get_best_encoding(sys.stdout) + with open_stream(filename, "wb")[0] as f: + f.write(text.encode(encoding)) + try: + subprocess.call([cmd_absolute, filename]) + except OSError: + # Command not found + pass + finally: + os.close(fd) + os.unlink(filename) + + return True + + +def _nullpager( + stream: t.TextIO, generator: t.Iterable[str], color: t.Optional[bool] +) -> None: + """Simply print unformatted text. 
This is the ultimate fallback.""" + for text in generator: + if not color: + text = strip_ansi(text) + stream.write(text) + + +class Editor: + def __init__( + self, + editor: t.Optional[str] = None, + env: t.Optional[t.Mapping[str, str]] = None, + require_save: bool = True, + extension: str = ".txt", + ) -> None: + self.editor = editor + self.env = env + self.require_save = require_save + self.extension = extension + + def get_editor(self) -> str: + if self.editor is not None: + return self.editor + for key in "VISUAL", "EDITOR": + rv = os.environ.get(key) + if rv: + return rv + if WIN: + return "notepad" + for editor in "sensible-editor", "vim", "nano": + if which(editor) is not None: + return editor + return "vi" + + def edit_file(self, filename: str) -> None: + import subprocess + + editor = self.get_editor() + environ: t.Optional[t.Dict[str, str]] = None + + if self.env: + environ = os.environ.copy() + environ.update(self.env) + + try: + c = subprocess.Popen(f'{editor} "{filename}"', env=environ, shell=True) + exit_code = c.wait() + if exit_code != 0: + raise ClickException( + _("{editor}: Editing failed").format(editor=editor) + ) + except OSError as e: + raise ClickException( + _("{editor}: Editing failed: {e}").format(editor=editor, e=e) + ) from e + + def edit(self, text: t.Optional[t.AnyStr]) -> t.Optional[t.AnyStr]: + import tempfile + + if not text: + data = b"" + elif isinstance(text, (bytes, bytearray)): + data = text + else: + if text and not text.endswith("\n"): + text += "\n" + + if WIN: + data = text.replace("\n", "\r\n").encode("utf-8-sig") + else: + data = text.encode("utf-8") + + fd, name = tempfile.mkstemp(prefix="editor-", suffix=self.extension) + f: t.BinaryIO + + try: + with os.fdopen(fd, "wb") as f: + f.write(data) + + # If the filesystem resolution is 1 second, like Mac OS + # 10.12 Extended, or 2 seconds, like FAT32, and the editor + # closes very fast, require_save can fail. Set the modified + # time to be 2 seconds in the past to work around this. + os.utime(name, (os.path.getatime(name), os.path.getmtime(name) - 2)) + # Depending on the resolution, the exact value might not be + # recorded, so get the new recorded value. 
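+            # Net effect for click.edit(text): the text is written to a
+            # temporary editor-* file, the configured editor is launched,
+            # and None is returned when require_save is set but the file's
+            # mtime never changed.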
+ timestamp = os.path.getmtime(name) + + self.edit_file(name) + + if self.require_save and os.path.getmtime(name) == timestamp: + return None + + with open(name, "rb") as f: + rv = f.read() + + if isinstance(text, (bytes, bytearray)): + return rv + + return rv.decode("utf-8-sig").replace("\r\n", "\n") # type: ignore + finally: + os.unlink(name) + + +def open_url(url: str, wait: bool = False, locate: bool = False) -> int: + import subprocess + + def _unquote_file(url: str) -> str: + from urllib.parse import unquote + + if url.startswith("file://"): + url = unquote(url[7:]) + + return url + + if sys.platform == "darwin": + args = ["open"] + if wait: + args.append("-W") + if locate: + args.append("-R") + args.append(_unquote_file(url)) + null = open("/dev/null", "w") + try: + return subprocess.Popen(args, stderr=null).wait() + finally: + null.close() + elif WIN: + if locate: + url = _unquote_file(url) + args = ["explorer", f"/select,{url}"] + else: + args = ["start"] + if wait: + args.append("/WAIT") + args.append("") + args.append(url) + try: + return subprocess.call(args) + except OSError: + # Command not found + return 127 + elif CYGWIN: + if locate: + url = _unquote_file(url) + args = ["cygstart", os.path.dirname(url)] + else: + args = ["cygstart"] + if wait: + args.append("-w") + args.append(url) + try: + return subprocess.call(args) + except OSError: + # Command not found + return 127 + + try: + if locate: + url = os.path.dirname(_unquote_file(url)) or "." + else: + url = _unquote_file(url) + c = subprocess.Popen(["xdg-open", url]) + if wait: + return c.wait() + return 0 + except OSError: + if url.startswith(("http://", "https://")) and not locate and not wait: + import webbrowser + + webbrowser.open(url) + return 0 + return 1 + + +def _translate_ch_to_exc(ch: str) -> t.Optional[BaseException]: + if ch == "\x03": + raise KeyboardInterrupt() + + if ch == "\x04" and not WIN: # Unix-like, Ctrl+D + raise EOFError() + + if ch == "\x1a" and WIN: # Windows, Ctrl+Z + raise EOFError() + + return None + + +if WIN: + import msvcrt + + @contextlib.contextmanager + def raw_terminal() -> t.Iterator[int]: + yield -1 + + def getchar(echo: bool) -> str: + # The function `getch` will return a bytes object corresponding to + # the pressed character. Since Windows 10 build 1803, it will also + # return \x00 when called a second time after pressing a regular key. + # + # `getwch` does not share this probably-bugged behavior. Moreover, it + # returns a Unicode object by default, which is what we want. + # + # Either of these functions will return \x00 or \xe0 to indicate + # a special key, and you need to call the same function again to get + # the "rest" of the code. The fun part is that \u00e0 is + # "latin small letter a with grave", so if you type that on a French + # keyboard, you _also_ get a \xe0. + # E.g., consider the Up arrow. This returns \xe0 and then \x48. The + # resulting Unicode string reads as "a with grave" + "capital H". + # This is indistinguishable from when the user actually types + # "a with grave" and then "capital H". + # + # When \xe0 is returned, we assume it's part of a special-key sequence + # and call `getwch` again, but that means that when the user types + # the \u00e0 character, `getchar` doesn't return until a second + # character is typed. + # The alternative is returning immediately, but that would mess up + # cross-platform handling of arrow keys and others that start with + # \xe0. 
Another option is using `getch`, but then we can't reliably + # read non-ASCII characters, because return values of `getch` are + # limited to the current 8-bit codepage. + # + # Anyway, Click doesn't claim to do this Right(tm), and using `getwch` + # is doing the right thing in more situations than with `getch`. + func: t.Callable[[], str] + + if echo: + func = msvcrt.getwche # type: ignore + else: + func = msvcrt.getwch # type: ignore + + rv = func() + + if rv in ("\x00", "\xe0"): + # \x00 and \xe0 are control characters that indicate special key, + # see above. + rv += func() + + _translate_ch_to_exc(rv) + return rv + +else: + import termios + import tty + + @contextlib.contextmanager + def raw_terminal() -> t.Iterator[int]: + f: t.Optional[t.TextIO] + fd: int + + if not isatty(sys.stdin): + f = open("/dev/tty") + fd = f.fileno() + else: + fd = sys.stdin.fileno() + f = None + + try: + old_settings = termios.tcgetattr(fd) + + try: + tty.setraw(fd) + yield fd + finally: + termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) + sys.stdout.flush() + + if f is not None: + f.close() + except termios.error: + pass + + def getchar(echo: bool) -> str: + with raw_terminal() as fd: + ch = os.read(fd, 32).decode(get_best_encoding(sys.stdin), "replace") + + if echo and isatty(sys.stdout): + sys.stdout.write(ch) + + _translate_ch_to_exc(ch) + return ch diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/_textwrap.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/_textwrap.py new file mode 100644 index 000000000..b47dcbd42 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/_textwrap.py @@ -0,0 +1,49 @@ +import textwrap +import typing as t +from contextlib import contextmanager + + +class TextWrapper(textwrap.TextWrapper): + def _handle_long_word( + self, + reversed_chunks: t.List[str], + cur_line: t.List[str], + cur_len: int, + width: int, + ) -> None: + space_left = max(width - cur_len, 1) + + if self.break_long_words: + last = reversed_chunks[-1] + cut = last[:space_left] + res = last[space_left:] + cur_line.append(cut) + reversed_chunks[-1] = res + elif not cur_line: + cur_line.append(reversed_chunks.pop()) + + @contextmanager + def extra_indent(self, indent: str) -> t.Iterator[None]: + old_initial_indent = self.initial_indent + old_subsequent_indent = self.subsequent_indent + self.initial_indent += indent + self.subsequent_indent += indent + + try: + yield + finally: + self.initial_indent = old_initial_indent + self.subsequent_indent = old_subsequent_indent + + def indent_only(self, text: str) -> str: + rv = [] + + for idx, line in enumerate(text.splitlines()): + indent = self.initial_indent + + if idx > 0: + indent = self.subsequent_indent + + rv.append(f"{indent}{line}") + + return "\n".join(rv) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/_winconsole.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/_winconsole.py new file mode 100644 index 000000000..6b20df315 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/_winconsole.py @@ -0,0 +1,279 @@ +# This module is based on the excellent work by Adam Bartoš who +# provided a lot of what went into the implementation here in +# the discussion to issue1602 in the Python bug tracker. +# +# There are some general differences in regards to how this works +# compared to the original patches as we do not need to patch +# the entire interpreter but just work in our little world of +# echo and prompt. 
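+#
+# In short: when a stream is backed by a real Win32 console, Click swaps
+# in UTF-16-LE wrappers over ReadConsoleW/WriteConsoleW so that Unicode
+# round-trips regardless of the console codepage.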
+import io +import sys +import time +import typing as t +from ctypes import byref +from ctypes import c_char +from ctypes import c_char_p +from ctypes import c_int +from ctypes import c_ssize_t +from ctypes import c_ulong +from ctypes import c_void_p +from ctypes import POINTER +from ctypes import py_object +from ctypes import Structure +from ctypes.wintypes import DWORD +from ctypes.wintypes import HANDLE +from ctypes.wintypes import LPCWSTR +from ctypes.wintypes import LPWSTR + +from ._compat import _NonClosingTextIOWrapper + +assert sys.platform == "win32" +import msvcrt # noqa: E402 +from ctypes import windll # noqa: E402 +from ctypes import WINFUNCTYPE # noqa: E402 + +c_ssize_p = POINTER(c_ssize_t) + +kernel32 = windll.kernel32 +GetStdHandle = kernel32.GetStdHandle +ReadConsoleW = kernel32.ReadConsoleW +WriteConsoleW = kernel32.WriteConsoleW +GetConsoleMode = kernel32.GetConsoleMode +GetLastError = kernel32.GetLastError +GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32)) +CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))( + ("CommandLineToArgvW", windll.shell32) +) +LocalFree = WINFUNCTYPE(c_void_p, c_void_p)(("LocalFree", windll.kernel32)) + +STDIN_HANDLE = GetStdHandle(-10) +STDOUT_HANDLE = GetStdHandle(-11) +STDERR_HANDLE = GetStdHandle(-12) + +PyBUF_SIMPLE = 0 +PyBUF_WRITABLE = 1 + +ERROR_SUCCESS = 0 +ERROR_NOT_ENOUGH_MEMORY = 8 +ERROR_OPERATION_ABORTED = 995 + +STDIN_FILENO = 0 +STDOUT_FILENO = 1 +STDERR_FILENO = 2 + +EOF = b"\x1a" +MAX_BYTES_WRITTEN = 32767 + +try: + from ctypes import pythonapi +except ImportError: + # On PyPy we cannot get buffers so our ability to operate here is + # severely limited. + get_buffer = None +else: + + class Py_buffer(Structure): + _fields_ = [ + ("buf", c_void_p), + ("obj", py_object), + ("len", c_ssize_t), + ("itemsize", c_ssize_t), + ("readonly", c_int), + ("ndim", c_int), + ("format", c_char_p), + ("shape", c_ssize_p), + ("strides", c_ssize_p), + ("suboffsets", c_ssize_p), + ("internal", c_void_p), + ] + + PyObject_GetBuffer = pythonapi.PyObject_GetBuffer + PyBuffer_Release = pythonapi.PyBuffer_Release + + def get_buffer(obj, writable=False): + buf = Py_buffer() + flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE + PyObject_GetBuffer(py_object(obj), byref(buf), flags) + + try: + buffer_type = c_char * buf.len + return buffer_type.from_address(buf.buf) + finally: + PyBuffer_Release(byref(buf)) + + +class _WindowsConsoleRawIOBase(io.RawIOBase): + def __init__(self, handle): + self.handle = handle + + def isatty(self): + super().isatty() + return True + + +class _WindowsConsoleReader(_WindowsConsoleRawIOBase): + def readable(self): + return True + + def readinto(self, b): + bytes_to_be_read = len(b) + if not bytes_to_be_read: + return 0 + elif bytes_to_be_read % 2: + raise ValueError( + "cannot read odd number of bytes from UTF-16-LE encoded console" + ) + + buffer = get_buffer(b, writable=True) + code_units_to_be_read = bytes_to_be_read // 2 + code_units_read = c_ulong() + + rv = ReadConsoleW( + HANDLE(self.handle), + buffer, + code_units_to_be_read, + byref(code_units_read), + None, + ) + if GetLastError() == ERROR_OPERATION_ABORTED: + # wait for KeyboardInterrupt + time.sleep(0.1) + if not rv: + raise OSError(f"Windows error: {GetLastError()}") + + if buffer[0] == EOF: + return 0 + return 2 * code_units_read.value + + +class _WindowsConsoleWriter(_WindowsConsoleRawIOBase): + def writable(self): + return True + + @staticmethod + def _get_error_message(errno): + if errno == ERROR_SUCCESS: + 
return "ERROR_SUCCESS" + elif errno == ERROR_NOT_ENOUGH_MEMORY: + return "ERROR_NOT_ENOUGH_MEMORY" + return f"Windows error {errno}" + + def write(self, b): + bytes_to_be_written = len(b) + buf = get_buffer(b) + code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2 + code_units_written = c_ulong() + + WriteConsoleW( + HANDLE(self.handle), + buf, + code_units_to_be_written, + byref(code_units_written), + None, + ) + bytes_written = 2 * code_units_written.value + + if bytes_written == 0 and bytes_to_be_written > 0: + raise OSError(self._get_error_message(GetLastError())) + return bytes_written + + +class ConsoleStream: + def __init__(self, text_stream: t.TextIO, byte_stream: t.BinaryIO) -> None: + self._text_stream = text_stream + self.buffer = byte_stream + + @property + def name(self) -> str: + return self.buffer.name + + def write(self, x: t.AnyStr) -> int: + if isinstance(x, str): + return self._text_stream.write(x) + try: + self.flush() + except Exception: + pass + return self.buffer.write(x) + + def writelines(self, lines: t.Iterable[t.AnyStr]) -> None: + for line in lines: + self.write(line) + + def __getattr__(self, name: str) -> t.Any: + return getattr(self._text_stream, name) + + def isatty(self) -> bool: + return self.buffer.isatty() + + def __repr__(self): + return f"" + + +def _get_text_stdin(buffer_stream: t.BinaryIO) -> t.TextIO: + text_stream = _NonClosingTextIOWrapper( + io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)), + "utf-16-le", + "strict", + line_buffering=True, + ) + return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream)) + + +def _get_text_stdout(buffer_stream: t.BinaryIO) -> t.TextIO: + text_stream = _NonClosingTextIOWrapper( + io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)), + "utf-16-le", + "strict", + line_buffering=True, + ) + return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream)) + + +def _get_text_stderr(buffer_stream: t.BinaryIO) -> t.TextIO: + text_stream = _NonClosingTextIOWrapper( + io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)), + "utf-16-le", + "strict", + line_buffering=True, + ) + return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream)) + + +_stream_factories: t.Mapping[int, t.Callable[[t.BinaryIO], t.TextIO]] = { + 0: _get_text_stdin, + 1: _get_text_stdout, + 2: _get_text_stderr, +} + + +def _is_console(f: t.TextIO) -> bool: + if not hasattr(f, "fileno"): + return False + + try: + fileno = f.fileno() + except (OSError, io.UnsupportedOperation): + return False + + handle = msvcrt.get_osfhandle(fileno) + return bool(GetConsoleMode(handle, byref(DWORD()))) + + +def _get_windows_console_stream( + f: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str] +) -> t.Optional[t.TextIO]: + if ( + get_buffer is not None + and encoding in {"utf-16-le", None} + and errors in {"strict", None} + and _is_console(f) + ): + func = _stream_factories.get(f.fileno()) + if func is not None: + b = getattr(f, "buffer", None) + + if b is None: + return None + + return func(b) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/core.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/core.py new file mode 100644 index 000000000..e6305011a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/core.py @@ -0,0 +1,3047 @@ +import enum +import errno +import inspect +import os +import sys +import typing as t +from collections import abc +from contextlib import contextmanager +from contextlib import ExitStack +from functools import update_wrapper 
+from gettext import gettext as _ +from gettext import ngettext +from itertools import repeat +from types import TracebackType + +from . import types +from .exceptions import Abort +from .exceptions import BadParameter +from .exceptions import ClickException +from .exceptions import Exit +from .exceptions import MissingParameter +from .exceptions import UsageError +from .formatting import HelpFormatter +from .formatting import join_options +from .globals import pop_context +from .globals import push_context +from .parser import _flag_needs_value +from .parser import OptionParser +from .parser import split_opt +from .termui import confirm +from .termui import prompt +from .termui import style +from .utils import _detect_program_name +from .utils import _expand_args +from .utils import echo +from .utils import make_default_short_help +from .utils import make_str +from .utils import PacifyFlushWrapper + +if t.TYPE_CHECKING: + import typing_extensions as te + + from .decorators import HelpOption + from .shell_completion import CompletionItem + +F = t.TypeVar("F", bound=t.Callable[..., t.Any]) +V = t.TypeVar("V") + + +def _complete_visible_commands( + ctx: "Context", incomplete: str +) -> t.Iterator[t.Tuple[str, "Command"]]: + """List all the subcommands of a group that start with the + incomplete value and aren't hidden. + + :param ctx: Invocation context for the group. + :param incomplete: Value being completed. May be empty. + """ + multi = t.cast(MultiCommand, ctx.command) + + for name in multi.list_commands(ctx): + if name.startswith(incomplete): + command = multi.get_command(ctx, name) + + if command is not None and not command.hidden: + yield name, command + + +def _check_multicommand( + base_command: "MultiCommand", cmd_name: str, cmd: "Command", register: bool = False +) -> None: + if not base_command.chain or not isinstance(cmd, MultiCommand): + return + if register: + hint = ( + "It is not possible to add multi commands as children to" + " another multi command that is in chain mode." + ) + else: + hint = ( + "Found a multi command as subcommand to a multi command" + " that is in chain mode. This is not supported." + ) + raise RuntimeError( + f"{hint}. Command {base_command.name!r} is set to chain and" + f" {cmd_name!r} was added as a subcommand but it in itself is a" + f" multi command. ({cmd_name!r} is a {type(cmd).__name__}" + f" within a chained {type(base_command).__name__} named" + f" {base_command.name!r})." + ) + + +def batch(iterable: t.Iterable[V], batch_size: int) -> t.List[t.Tuple[V, ...]]: + return list(zip(*repeat(iter(iterable), batch_size))) + + +@contextmanager +def augment_usage_errors( + ctx: "Context", param: t.Optional["Parameter"] = None +) -> t.Iterator[None]: + """Context manager that attaches extra information to exceptions.""" + try: + yield + except BadParameter as e: + if e.ctx is None: + e.ctx = ctx + if param is not None and e.param is None: + e.param = param + raise + except UsageError as e: + if e.ctx is None: + e.ctx = ctx + raise + + +def iter_params_for_processing( + invocation_order: t.Sequence["Parameter"], + declaration_order: t.Sequence["Parameter"], +) -> t.List["Parameter"]: + """Returns all declared parameters in the order they should be processed. + + The declared parameters are re-shuffled depending on the order in which + they were invoked, as well as the eagerness of each parameters. + + The invocation order takes precedence over the declaration order. I.e. the + order in which the user provided them to the CLI is respected. 
+ + This behavior and its effect on callback evaluation is detailed at: + https://click.palletsprojects.com/en/stable/advanced/#callback-evaluation-order + """ + + def sort_key(item: "Parameter") -> t.Tuple[bool, float]: + try: + idx: float = invocation_order.index(item) + except ValueError: + idx = float("inf") + + return not item.is_eager, idx + + return sorted(declaration_order, key=sort_key) + + +class ParameterSource(enum.Enum): + """This is an :class:`~enum.Enum` that indicates the source of a + parameter's value. + + Use :meth:`click.Context.get_parameter_source` to get the + source for a parameter by name. + + .. versionchanged:: 8.0 + Use :class:`~enum.Enum` and drop the ``validate`` method. + + .. versionchanged:: 8.0 + Added the ``PROMPT`` value. + """ + + COMMANDLINE = enum.auto() + """The value was provided by the command line args.""" + ENVIRONMENT = enum.auto() + """The value was provided with an environment variable.""" + DEFAULT = enum.auto() + """Used the default specified by the parameter.""" + DEFAULT_MAP = enum.auto() + """Used a default provided by :attr:`Context.default_map`.""" + PROMPT = enum.auto() + """Used a prompt to confirm a default or provide a value.""" + + +class Context: + """The context is a special internal object that holds state relevant + for the script execution at every single level. It's normally invisible + to commands unless they opt-in to getting access to it. + + The context is useful as it can pass internal objects around and can + control special execution features such as reading data from + environment variables. + + A context can be used as context manager in which case it will call + :meth:`close` on teardown. + + :param command: the command class for this context. + :param parent: the parent context. + :param info_name: the info name for this invocation. Generally this + is the most descriptive name for the script or + command. For the toplevel script it is usually + the name of the script, for commands below it it's + the name of the script. + :param obj: an arbitrary object of user data. + :param auto_envvar_prefix: the prefix to use for automatic environment + variables. If this is `None` then reading + from environment variables is disabled. This + does not affect manually set environment + variables which are always read. + :param default_map: a dictionary (like object) with default values + for parameters. + :param terminal_width: the width of the terminal. The default is + inherit from parent context. If no context + defines the terminal width then auto + detection will be applied. + :param max_content_width: the maximum width for content rendered by + Click (this currently only affects help + pages). This defaults to 80 characters if + not overridden. In other words: even if the + terminal is larger than that, Click will not + format things wider than 80 characters by + default. In addition to that, formatters might + add some safety mapping on the right. + :param resilient_parsing: if this flag is enabled then Click will + parse without any interactivity or callback + invocation. Default values will also be + ignored. This is useful for implementing + things such as completion support. + :param allow_extra_args: if this is set to `True` then extra arguments + at the end will not raise an error and will be + kept on the context. The default is to inherit + from the command. + :param allow_interspersed_args: if this is set to `False` then options + and arguments cannot be mixed. The + default is to inherit from the command. 
+ :param ignore_unknown_options: instructs click to ignore options it does + not know and keeps them for later + processing. + :param help_option_names: optionally a list of strings that define how + the default help parameter is named. The + default is ``['--help']``. + :param token_normalize_func: an optional function that is used to + normalize tokens (options, choices, + etc.). This for instance can be used to + implement case insensitive behavior. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. This is only needed if ANSI + codes are used in texts that Click prints which is by + default not the case. This for instance would affect + help output. + :param show_default: Show the default value for commands. If this + value is not set, it defaults to the value from the parent + context. ``Command.show_default`` overrides this default for the + specific command. + + .. versionchanged:: 8.1 + The ``show_default`` parameter is overridden by + ``Command.show_default``, instead of the other way around. + + .. versionchanged:: 8.0 + The ``show_default`` parameter defaults to the value from the + parent context. + + .. versionchanged:: 7.1 + Added the ``show_default`` parameter. + + .. versionchanged:: 4.0 + Added the ``color``, ``ignore_unknown_options``, and + ``max_content_width`` parameters. + + .. versionchanged:: 3.0 + Added the ``allow_extra_args`` and ``allow_interspersed_args`` + parameters. + + .. versionchanged:: 2.0 + Added the ``resilient_parsing``, ``help_option_names``, and + ``token_normalize_func`` parameters. + """ + + #: The formatter class to create with :meth:`make_formatter`. + #: + #: .. versionadded:: 8.0 + formatter_class: t.Type["HelpFormatter"] = HelpFormatter + + def __init__( + self, + command: "Command", + parent: t.Optional["Context"] = None, + info_name: t.Optional[str] = None, + obj: t.Optional[t.Any] = None, + auto_envvar_prefix: t.Optional[str] = None, + default_map: t.Optional[t.MutableMapping[str, t.Any]] = None, + terminal_width: t.Optional[int] = None, + max_content_width: t.Optional[int] = None, + resilient_parsing: bool = False, + allow_extra_args: t.Optional[bool] = None, + allow_interspersed_args: t.Optional[bool] = None, + ignore_unknown_options: t.Optional[bool] = None, + help_option_names: t.Optional[t.List[str]] = None, + token_normalize_func: t.Optional[t.Callable[[str], str]] = None, + color: t.Optional[bool] = None, + show_default: t.Optional[bool] = None, + ) -> None: + #: the parent context or `None` if none exists. + self.parent = parent + #: the :class:`Command` for this context. + self.command = command + #: the descriptive information name + self.info_name = info_name + #: Map of parameter names to their parsed values. Parameters + #: with ``expose_value=False`` are not stored. + self.params: t.Dict[str, t.Any] = {} + #: the leftover arguments. + self.args: t.List[str] = [] + #: protected arguments. These are arguments that are prepended + #: to `args` when certain parsing scenarios are encountered but + #: must be never propagated to another arguments. This is used + #: to implement nested parsing. + self.protected_args: t.List[str] = [] + #: the collected prefixes of the command's options. + self._opt_prefixes: t.Set[str] = set(parent._opt_prefixes) if parent else set() + + if obj is None and parent is not None: + obj = parent.obj + + #: the user object stored. 
+ self.obj: t.Any = obj + self._meta: t.Dict[str, t.Any] = getattr(parent, "meta", {}) + + #: A dictionary (-like object) with defaults for parameters. + if ( + default_map is None + and info_name is not None + and parent is not None + and parent.default_map is not None + ): + default_map = parent.default_map.get(info_name) + + self.default_map: t.Optional[t.MutableMapping[str, t.Any]] = default_map + + #: This flag indicates if a subcommand is going to be executed. A + #: group callback can use this information to figure out if it's + #: being executed directly or because the execution flow passes + #: onwards to a subcommand. By default it's None, but it can be + #: the name of the subcommand to execute. + #: + #: If chaining is enabled this will be set to ``'*'`` in case + #: any commands are executed. It is however not possible to + #: figure out which ones. If you require this knowledge you + #: should use a :func:`result_callback`. + self.invoked_subcommand: t.Optional[str] = None + + if terminal_width is None and parent is not None: + terminal_width = parent.terminal_width + + #: The width of the terminal (None is autodetection). + self.terminal_width: t.Optional[int] = terminal_width + + if max_content_width is None and parent is not None: + max_content_width = parent.max_content_width + + #: The maximum width of formatted content (None implies a sensible + #: default which is 80 for most things). + self.max_content_width: t.Optional[int] = max_content_width + + if allow_extra_args is None: + allow_extra_args = command.allow_extra_args + + #: Indicates if the context allows extra args or if it should + #: fail on parsing. + #: + #: .. versionadded:: 3.0 + self.allow_extra_args = allow_extra_args + + if allow_interspersed_args is None: + allow_interspersed_args = command.allow_interspersed_args + + #: Indicates if the context allows mixing of arguments and + #: options or not. + #: + #: .. versionadded:: 3.0 + self.allow_interspersed_args: bool = allow_interspersed_args + + if ignore_unknown_options is None: + ignore_unknown_options = command.ignore_unknown_options + + #: Instructs click to ignore options that a command does not + #: understand and will store it on the context for later + #: processing. This is primarily useful for situations where you + #: want to call into external programs. Generally this pattern is + #: strongly discouraged because it's not possibly to losslessly + #: forward all arguments. + #: + #: .. versionadded:: 4.0 + self.ignore_unknown_options: bool = ignore_unknown_options + + if help_option_names is None: + if parent is not None: + help_option_names = parent.help_option_names + else: + help_option_names = ["--help"] + + #: The names for the help options. + self.help_option_names: t.List[str] = help_option_names + + if token_normalize_func is None and parent is not None: + token_normalize_func = parent.token_normalize_func + + #: An optional normalization function for tokens. This is + #: options, choices, commands etc. + self.token_normalize_func: t.Optional[t.Callable[[str], str]] = ( + token_normalize_func + ) + + #: Indicates if resilient parsing is enabled. In that case Click + #: will do its best to not cause any failures and default values + #: will be ignored. Useful for completion. + self.resilient_parsing: bool = resilient_parsing + + # If there is no envvar prefix yet, but the parent has one and + # the command on this level has a name, we can expand the envvar + # prefix automatically. 
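+        # For example, a root group created with auto_envvar_prefix="MYTOOL"
+        # gives its "serve" subcommand the prefix MYTOOL_SERVE, so that
+        # subcommand's --port option reads the MYTOOL_SERVE_PORT variable.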
+ if auto_envvar_prefix is None: + if ( + parent is not None + and parent.auto_envvar_prefix is not None + and self.info_name is not None + ): + auto_envvar_prefix = ( + f"{parent.auto_envvar_prefix}_{self.info_name.upper()}" + ) + else: + auto_envvar_prefix = auto_envvar_prefix.upper() + + if auto_envvar_prefix is not None: + auto_envvar_prefix = auto_envvar_prefix.replace("-", "_") + + self.auto_envvar_prefix: t.Optional[str] = auto_envvar_prefix + + if color is None and parent is not None: + color = parent.color + + #: Controls if styling output is wanted or not. + self.color: t.Optional[bool] = color + + if show_default is None and parent is not None: + show_default = parent.show_default + + #: Show option default values when formatting help text. + self.show_default: t.Optional[bool] = show_default + + self._close_callbacks: t.List[t.Callable[[], t.Any]] = [] + self._depth = 0 + self._parameter_source: t.Dict[str, ParameterSource] = {} + self._exit_stack = ExitStack() + + def to_info_dict(self) -> t.Dict[str, t.Any]: + """Gather information that could be useful for a tool generating + user-facing documentation. This traverses the entire CLI + structure. + + .. code-block:: python + + with Context(cli) as ctx: + info = ctx.to_info_dict() + + .. versionadded:: 8.0 + """ + return { + "command": self.command.to_info_dict(self), + "info_name": self.info_name, + "allow_extra_args": self.allow_extra_args, + "allow_interspersed_args": self.allow_interspersed_args, + "ignore_unknown_options": self.ignore_unknown_options, + "auto_envvar_prefix": self.auto_envvar_prefix, + } + + def __enter__(self) -> "Context": + self._depth += 1 + push_context(self) + return self + + def __exit__( + self, + exc_type: t.Optional[t.Type[BaseException]], + exc_value: t.Optional[BaseException], + tb: t.Optional[TracebackType], + ) -> None: + self._depth -= 1 + if self._depth == 0: + self.close() + pop_context() + + @contextmanager + def scope(self, cleanup: bool = True) -> t.Iterator["Context"]: + """This helper method can be used with the context object to promote + it to the current thread local (see :func:`get_current_context`). + The default behavior of this is to invoke the cleanup functions which + can be disabled by setting `cleanup` to `False`. The cleanup + functions are typically used for things such as closing file handles. + + If the cleanup is intended the context object can also be directly + used as a context manager. + + Example usage:: + + with ctx.scope(): + assert get_current_context() is ctx + + This is equivalent:: + + with ctx: + assert get_current_context() is ctx + + .. versionadded:: 5.0 + + :param cleanup: controls if the cleanup functions should be run or + not. The default is to run these functions. In + some situations the context only wants to be + temporarily pushed in which case this can be disabled. + Nested pushes automatically defer the cleanup. + """ + if not cleanup: + self._depth += 1 + try: + with self as rv: + yield rv + finally: + if not cleanup: + self._depth -= 1 + + @property + def meta(self) -> t.Dict[str, t.Any]: + """This is a dictionary which is shared with all the contexts + that are nested. It exists so that click utilities can store some + state here if they need to. It is however the responsibility of + that code to manage this dictionary well. + + The keys are supposed to be unique dotted strings. For instance + module paths are a good choice for it. What is stored in there is + irrelevant for the operation of click. 
However what is important is + that code that places data here adheres to the general semantics of + the system. + + Example usage:: + + LANG_KEY = f'{__name__}.lang' + + def set_language(value): + ctx = get_current_context() + ctx.meta[LANG_KEY] = value + + def get_language(): + return get_current_context().meta.get(LANG_KEY, 'en_US') + + .. versionadded:: 5.0 + """ + return self._meta + + def make_formatter(self) -> HelpFormatter: + """Creates the :class:`~click.HelpFormatter` for the help and + usage output. + + To quickly customize the formatter class used without overriding + this method, set the :attr:`formatter_class` attribute. + + .. versionchanged:: 8.0 + Added the :attr:`formatter_class` attribute. + """ + return self.formatter_class( + width=self.terminal_width, max_width=self.max_content_width + ) + + def with_resource(self, context_manager: t.ContextManager[V]) -> V: + """Register a resource as if it were used in a ``with`` + statement. The resource will be cleaned up when the context is + popped. + + Uses :meth:`contextlib.ExitStack.enter_context`. It calls the + resource's ``__enter__()`` method and returns the result. When + the context is popped, it closes the stack, which calls the + resource's ``__exit__()`` method. + + To register a cleanup function for something that isn't a + context manager, use :meth:`call_on_close`. Or use something + from :mod:`contextlib` to turn it into a context manager first. + + .. code-block:: python + + @click.group() + @click.option("--name") + @click.pass_context + def cli(ctx): + ctx.obj = ctx.with_resource(connect_db(name)) + + :param context_manager: The context manager to enter. + :return: Whatever ``context_manager.__enter__()`` returns. + + .. versionadded:: 8.0 + """ + return self._exit_stack.enter_context(context_manager) + + def call_on_close(self, f: t.Callable[..., t.Any]) -> t.Callable[..., t.Any]: + """Register a function to be called when the context tears down. + + This can be used to close resources opened during the script + execution. Resources that support Python's context manager + protocol which would be used in a ``with`` statement should be + registered with :meth:`with_resource` instead. + + :param f: The function to execute on teardown. + """ + return self._exit_stack.callback(f) + + def close(self) -> None: + """Invoke all close callbacks registered with + :meth:`call_on_close`, and exit all context managers entered + with :meth:`with_resource`. + """ + self._exit_stack.close() + # In case the context is reused, create a new exit stack. + self._exit_stack = ExitStack() + + @property + def command_path(self) -> str: + """The computed command path. This is used for the ``usage`` + information on the help page. It's automatically created by + combining the info names of the chain of contexts to the root. 
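+
+        For a command ``sync`` nested in a group invoked as ``mytool``
+        (names illustrative), the computed path is ``"mytool sync"``.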
+ """ + rv = "" + if self.info_name is not None: + rv = self.info_name + if self.parent is not None: + parent_command_path = [self.parent.command_path] + + if isinstance(self.parent.command, Command): + for param in self.parent.command.get_params(self): + parent_command_path.extend(param.get_usage_pieces(self)) + + rv = f"{' '.join(parent_command_path)} {rv}" + return rv.lstrip() + + def find_root(self) -> "Context": + """Finds the outermost context.""" + node = self + while node.parent is not None: + node = node.parent + return node + + def find_object(self, object_type: t.Type[V]) -> t.Optional[V]: + """Finds the closest object of a given type.""" + node: t.Optional[Context] = self + + while node is not None: + if isinstance(node.obj, object_type): + return node.obj + + node = node.parent + + return None + + def ensure_object(self, object_type: t.Type[V]) -> V: + """Like :meth:`find_object` but sets the innermost object to a + new instance of `object_type` if it does not exist. + """ + rv = self.find_object(object_type) + if rv is None: + self.obj = rv = object_type() + return rv + + @t.overload + def lookup_default( + self, name: str, call: "te.Literal[True]" = True + ) -> t.Optional[t.Any]: ... + + @t.overload + def lookup_default( + self, name: str, call: "te.Literal[False]" = ... + ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: ... + + def lookup_default(self, name: str, call: bool = True) -> t.Optional[t.Any]: + """Get the default for a parameter from :attr:`default_map`. + + :param name: Name of the parameter. + :param call: If the default is a callable, call it. Disable to + return the callable instead. + + .. versionchanged:: 8.0 + Added the ``call`` parameter. + """ + if self.default_map is not None: + value = self.default_map.get(name) + + if call and callable(value): + return value() + + return value + + return None + + def fail(self, message: str) -> "te.NoReturn": + """Aborts the execution of the program with a specific error + message. + + :param message: the error message to fail with. + """ + raise UsageError(message, self) + + def abort(self) -> "te.NoReturn": + """Aborts the script.""" + raise Abort() + + def exit(self, code: int = 0) -> "te.NoReturn": + """Exits the application with a given exit code.""" + raise Exit(code) + + def get_usage(self) -> str: + """Helper method to get formatted usage string for the current + context and command. + """ + return self.command.get_usage(self) + + def get_help(self) -> str: + """Helper method to get formatted help page for the current + context and command. + """ + return self.command.get_help(self) + + def _make_sub_context(self, command: "Command") -> "Context": + """Create a new context of the same type as this context, but + for a new command. + + :meta private: + """ + return type(self)(command, info_name=command.name, parent=self) + + @t.overload + def invoke( + __self, + __callback: "t.Callable[..., V]", + *args: t.Any, + **kwargs: t.Any, + ) -> V: ... + + @t.overload + def invoke( + __self, + __callback: "Command", + *args: t.Any, + **kwargs: t.Any, + ) -> t.Any: ... + + def invoke( + __self, + __callback: t.Union["Command", "t.Callable[..., V]"], + *args: t.Any, + **kwargs: t.Any, + ) -> t.Union[t.Any, V]: + """Invokes a command callback in exactly the way it expects. There + are two ways to invoke this method: + + 1. the first argument can be a callback and all other arguments and + keyword arguments are forwarded directly to the function. + 2. the first argument is a click command object. 
In that case all + arguments are forwarded as well but proper click parameters + (options and click arguments) must be keyword arguments and Click + will fill in defaults. + + Note that before Click 3.2 keyword arguments were not properly filled + in against the intention of this code and no context was created. For + more information about this change and why it was done in a bugfix + release see :ref:`upgrade-to-3.2`. + + .. versionchanged:: 8.0 + All ``kwargs`` are tracked in :attr:`params` so they will be + passed if :meth:`forward` is called at multiple levels. + """ + if isinstance(__callback, Command): + other_cmd = __callback + + if other_cmd.callback is None: + raise TypeError( + "The given command does not have a callback that can be invoked." + ) + else: + __callback = t.cast("t.Callable[..., V]", other_cmd.callback) + + ctx = __self._make_sub_context(other_cmd) + + for param in other_cmd.params: + if param.name not in kwargs and param.expose_value: + kwargs[param.name] = param.type_cast_value( # type: ignore + ctx, param.get_default(ctx) + ) + + # Track all kwargs as params, so that forward() will pass + # them on in subsequent calls. + ctx.params.update(kwargs) + else: + ctx = __self + + with augment_usage_errors(__self): + with ctx: + return __callback(*args, **kwargs) + + def forward(__self, __cmd: "Command", *args: t.Any, **kwargs: t.Any) -> t.Any: + """Similar to :meth:`invoke` but fills in default keyword + arguments from the current context if the other command expects + it. This cannot invoke callbacks directly, only other commands. + + .. versionchanged:: 8.0 + All ``kwargs`` are tracked in :attr:`params` so they will be + passed if ``forward`` is called at multiple levels. + """ + # Can only forward to other commands, not direct callbacks. + if not isinstance(__cmd, Command): + raise TypeError("Callback is not a command.") + + for param in __self.params: + if param not in kwargs: + kwargs[param] = __self.params[param] + + return __self.invoke(__cmd, *args, **kwargs) + + def set_parameter_source(self, name: str, source: ParameterSource) -> None: + """Set the source of a parameter. This indicates the location + from which the value of the parameter was obtained. + + :param name: The name of the parameter. + :param source: A member of :class:`~click.core.ParameterSource`. + """ + self._parameter_source[name] = source + + def get_parameter_source(self, name: str) -> t.Optional[ParameterSource]: + """Get the source of a parameter. This indicates the location + from which the value of the parameter was obtained. + + This can be useful for determining when a user specified a value + on the command line that is the same as the default value. It + will be :attr:`~click.core.ParameterSource.DEFAULT` only if the + value was actually taken from the default. + + :param name: The name of the parameter. + :rtype: ParameterSource + + .. versionchanged:: 8.0 + Returns ``None`` if the parameter was not provided from any + source. + """ + return self._parameter_source.get(name) + + +class BaseCommand: + """The base command implements the minimal API contract of commands. + Most code will never use this as it does not implement a lot of useful + functionality but it can act as the direct subclass of alternative + parsing methods that do not depend on the Click parser. + + For instance, this can be used to bridge Click and other systems like + argparse or docopt. 
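+
+    A minimal sketch of such a bridge (illustrative only; ``parser`` is
+    assumed to be a pre-configured :class:`argparse.ArgumentParser`)::
+
+        class ArgparseCommand(BaseCommand):
+            def __init__(self, name, parser, callback):
+                super().__init__(name)
+                self.parser = parser
+                self.callback = callback
+
+            def parse_args(self, ctx, args):
+                # argparse does the parsing; keep the result on the context.
+                ctx.params = vars(self.parser.parse_args(args))
+                return []
+
+            def invoke(self, ctx):
+                return self.callback(**ctx.params)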
+ + Because base commands do not implement a lot of the API that other + parts of Click take for granted, they are not supported for all + operations. For instance, they cannot be used with the decorators + usually and they have no built-in callback system. + + .. versionchanged:: 2.0 + Added the `context_settings` parameter. + + :param name: the name of the command to use unless a group overrides it. + :param context_settings: an optional dictionary with defaults that are + passed to the context object. + """ + + #: The context class to create with :meth:`make_context`. + #: + #: .. versionadded:: 8.0 + context_class: t.Type[Context] = Context + #: the default for the :attr:`Context.allow_extra_args` flag. + allow_extra_args = False + #: the default for the :attr:`Context.allow_interspersed_args` flag. + allow_interspersed_args = True + #: the default for the :attr:`Context.ignore_unknown_options` flag. + ignore_unknown_options = False + + def __init__( + self, + name: t.Optional[str], + context_settings: t.Optional[t.MutableMapping[str, t.Any]] = None, + ) -> None: + #: the name the command thinks it has. Upon registering a command + #: on a :class:`Group` the group will default the command name + #: with this information. You should instead use the + #: :class:`Context`\'s :attr:`~Context.info_name` attribute. + self.name = name + + if context_settings is None: + context_settings = {} + + #: an optional dictionary with defaults passed to the context. + self.context_settings: t.MutableMapping[str, t.Any] = context_settings + + def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]: + """Gather information that could be useful for a tool generating + user-facing documentation. This traverses the entire structure + below this command. + + Use :meth:`click.Context.to_info_dict` to traverse the entire + CLI structure. + + :param ctx: A :class:`Context` representing this command. + + .. versionadded:: 8.0 + """ + return {"name": self.name} + + def __repr__(self) -> str: + return f"<{self.__class__.__name__} {self.name}>" + + def get_usage(self, ctx: Context) -> str: + raise NotImplementedError("Base commands cannot get usage") + + def get_help(self, ctx: Context) -> str: + raise NotImplementedError("Base commands cannot get help") + + def make_context( + self, + info_name: t.Optional[str], + args: t.List[str], + parent: t.Optional[Context] = None, + **extra: t.Any, + ) -> Context: + """This function when given an info name and arguments will kick + off the parsing and create a new :class:`Context`. It does not + invoke the actual command callback though. + + To quickly customize the context class used without overriding + this method, set the :attr:`context_class` attribute. + + :param info_name: the info name for this invocation. Generally this + is the most descriptive name for the script or + command. For the toplevel script it's usually + the name of the script, for commands below it's + the name of the command. + :param args: the arguments to parse as list of strings. + :param parent: the parent context if available. + :param extra: extra keyword arguments forwarded to the context + constructor. + + .. versionchanged:: 8.0 + Added the :attr:`context_class` attribute. 
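+
+        A short sketch of standalone use (``cli`` stands for any concrete
+        :class:`Command`; the option is illustrative)::
+
+            ctx = cli.make_context("cli", ["--debug"])
+            cli.invoke(ctx)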
+ """ + for key, value in self.context_settings.items(): + if key not in extra: + extra[key] = value + + ctx = self.context_class( + self, # type: ignore[arg-type] + info_name=info_name, + parent=parent, + **extra, + ) + + with ctx.scope(cleanup=False): + self.parse_args(ctx, args) + return ctx + + def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]: + """Given a context and a list of arguments this creates the parser + and parses the arguments, then modifies the context as necessary. + This is automatically invoked by :meth:`make_context`. + """ + raise NotImplementedError("Base commands do not know how to parse arguments.") + + def invoke(self, ctx: Context) -> t.Any: + """Given a context, this invokes the command. The default + implementation is raising a not implemented error. + """ + raise NotImplementedError("Base commands are not invocable by default") + + def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: + """Return a list of completions for the incomplete value. Looks + at the names of chained multi-commands. + + Any command could be part of a chained multi-command, so sibling + commands are valid at any point during command completion. Other + command classes will return more completions. + + :param ctx: Invocation context for this command. + :param incomplete: Value being completed. May be empty. + + .. versionadded:: 8.0 + """ + from click.shell_completion import CompletionItem + + results: t.List[CompletionItem] = [] + + while ctx.parent is not None: + ctx = ctx.parent + + if isinstance(ctx.command, MultiCommand) and ctx.command.chain: + results.extend( + CompletionItem(name, help=command.get_short_help_str()) + for name, command in _complete_visible_commands(ctx, incomplete) + if name not in ctx.protected_args + ) + + return results + + @t.overload + def main( + self, + args: t.Optional[t.Sequence[str]] = None, + prog_name: t.Optional[str] = None, + complete_var: t.Optional[str] = None, + standalone_mode: "te.Literal[True]" = True, + **extra: t.Any, + ) -> "te.NoReturn": ... + + @t.overload + def main( + self, + args: t.Optional[t.Sequence[str]] = None, + prog_name: t.Optional[str] = None, + complete_var: t.Optional[str] = None, + standalone_mode: bool = ..., + **extra: t.Any, + ) -> t.Any: ... + + def main( + self, + args: t.Optional[t.Sequence[str]] = None, + prog_name: t.Optional[str] = None, + complete_var: t.Optional[str] = None, + standalone_mode: bool = True, + windows_expand_args: bool = True, + **extra: t.Any, + ) -> t.Any: + """This is the way to invoke a script with all the bells and + whistles as a command line application. This will always terminate + the application after a call. If this is not wanted, ``SystemExit`` + needs to be caught. + + This method is also available by directly calling the instance of + a :class:`Command`. + + :param args: the arguments that should be used for parsing. If not + provided, ``sys.argv[1:]`` is used. + :param prog_name: the program name that should be used. By default + the program name is constructed by taking the file + name from ``sys.argv[0]``. + :param complete_var: the environment variable that controls the + bash completion support. The default is + ``"__COMPLETE"`` with prog_name in + uppercase. + :param standalone_mode: the default behavior is to invoke the script + in standalone mode. Click will then + handle exceptions and convert them into + error messages and the function will never + return but shut down the interpreter. 
If + this is set to `False` they will be + propagated to the caller and the return + value of this function is the return value + of :meth:`invoke`. + :param windows_expand_args: Expand glob patterns, user dir, and + env vars in command line args on Windows. + :param extra: extra keyword arguments are forwarded to the context + constructor. See :class:`Context` for more information. + + .. versionchanged:: 8.0.1 + Added the ``windows_expand_args`` parameter to allow + disabling command line arg expansion on Windows. + + .. versionchanged:: 8.0 + When taking arguments from ``sys.argv`` on Windows, glob + patterns, user dir, and env vars are expanded. + + .. versionchanged:: 3.0 + Added the ``standalone_mode`` parameter. + """ + if args is None: + args = sys.argv[1:] + + if os.name == "nt" and windows_expand_args: + args = _expand_args(args) + else: + args = list(args) + + if prog_name is None: + prog_name = _detect_program_name() + + # Process shell completion requests and exit early. + self._main_shell_completion(extra, prog_name, complete_var) + + try: + try: + with self.make_context(prog_name, args, **extra) as ctx: + rv = self.invoke(ctx) + if not standalone_mode: + return rv + # it's not safe to `ctx.exit(rv)` here! + # note that `rv` may actually contain data like "1" which + # has obvious effects + # more subtle case: `rv=[None, None]` can come out of + # chained commands which all returned `None` -- so it's not + # even always obvious that `rv` indicates success/failure + # by its truthiness/falsiness + ctx.exit() + except (EOFError, KeyboardInterrupt) as e: + echo(file=sys.stderr) + raise Abort() from e + except ClickException as e: + if not standalone_mode: + raise + e.show() + sys.exit(e.exit_code) + except OSError as e: + if e.errno == errno.EPIPE: + sys.stdout = t.cast(t.TextIO, PacifyFlushWrapper(sys.stdout)) + sys.stderr = t.cast(t.TextIO, PacifyFlushWrapper(sys.stderr)) + sys.exit(1) + else: + raise + except Exit as e: + if standalone_mode: + sys.exit(e.exit_code) + else: + # in non-standalone mode, return the exit code + # note that this is only reached if `self.invoke` above raises + # an Exit explicitly -- thus bypassing the check there which + # would return its result + # the results of non-standalone execution may therefore be + # somewhat ambiguous: if there are codepaths which lead to + # `ctx.exit(1)` and to `return 1`, the caller won't be able to + # tell the difference between the two + return e.exit_code + except Abort: + if not standalone_mode: + raise + echo(_("Aborted!"), file=sys.stderr) + sys.exit(1) + + def _main_shell_completion( + self, + ctx_args: t.MutableMapping[str, t.Any], + prog_name: str, + complete_var: t.Optional[str] = None, + ) -> None: + """Check if the shell is asking for tab completion, process + that, then exit early. Called from :meth:`main` before the + program is invoked. + + :param prog_name: Name of the executable in the shell. + :param complete_var: Name of the environment variable that holds + the completion instruction. Defaults to + ``_{PROG_NAME}_COMPLETE``. + + .. versionchanged:: 8.2.0 + Dots (``.``) in ``prog_name`` are replaced with underscores (``_``). 
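+
+        For example, ``prog_name="my.tool"`` would be served from the
+        ``_MY_TOOL_COMPLETE`` environment variable.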
+ """ + if complete_var is None: + complete_name = prog_name.replace("-", "_").replace(".", "_") + complete_var = f"_{complete_name}_COMPLETE".upper() + + instruction = os.environ.get(complete_var) + + if not instruction: + return + + from .shell_completion import shell_complete + + rv = shell_complete(self, ctx_args, prog_name, complete_var, instruction) + sys.exit(rv) + + def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Any: + """Alias for :meth:`main`.""" + return self.main(*args, **kwargs) + + +class Command(BaseCommand): + """Commands are the basic building block of command line interfaces in + Click. A basic command handles command line parsing and might dispatch + more parsing to commands nested below it. + + :param name: the name of the command to use unless a group overrides it. + :param context_settings: an optional dictionary with defaults that are + passed to the context object. + :param callback: the callback to invoke. This is optional. + :param params: the parameters to register with this command. This can + be either :class:`Option` or :class:`Argument` objects. + :param help: the help string to use for this command. + :param epilog: like the help string but it's printed at the end of the + help page after everything else. + :param short_help: the short help to use for this command. This is + shown on the command listing of the parent command. + :param add_help_option: by default each command registers a ``--help`` + option. This can be disabled by this parameter. + :param no_args_is_help: this controls what happens if no arguments are + provided. This option is disabled by default. + If enabled this will add ``--help`` as argument + if no arguments are passed + :param hidden: hide this command from help outputs. + + :param deprecated: issues a message indicating that + the command is deprecated. + + .. versionchanged:: 8.1 + ``help``, ``epilog``, and ``short_help`` are stored unprocessed, + all formatting is done when outputting help text, not at init, + and is done even if not using the ``@command`` decorator. + + .. versionchanged:: 8.0 + Added a ``repr`` showing the command name. + + .. versionchanged:: 7.1 + Added the ``no_args_is_help`` parameter. + + .. versionchanged:: 2.0 + Added the ``context_settings`` parameter. + """ + + def __init__( + self, + name: t.Optional[str], + context_settings: t.Optional[t.MutableMapping[str, t.Any]] = None, + callback: t.Optional[t.Callable[..., t.Any]] = None, + params: t.Optional[t.List["Parameter"]] = None, + help: t.Optional[str] = None, + epilog: t.Optional[str] = None, + short_help: t.Optional[str] = None, + options_metavar: t.Optional[str] = "[OPTIONS]", + add_help_option: bool = True, + no_args_is_help: bool = False, + hidden: bool = False, + deprecated: bool = False, + ) -> None: + super().__init__(name, context_settings) + #: the callback to execute when the command fires. This might be + #: `None` in which case nothing happens. + self.callback = callback + #: the list of parameters for this command in the order they + #: should show up in the help page and execute. Eager parameters + #: will automatically be handled before non eager ones. 
+ self.params: t.List[Parameter] = params or [] + self.help = help + self.epilog = epilog + self.options_metavar = options_metavar + self.short_help = short_help + self.add_help_option = add_help_option + self._help_option: t.Optional[HelpOption] = None + self.no_args_is_help = no_args_is_help + self.hidden = hidden + self.deprecated = deprecated + + def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict(ctx) + info_dict.update( + params=[param.to_info_dict() for param in self.get_params(ctx)], + help=self.help, + epilog=self.epilog, + short_help=self.short_help, + hidden=self.hidden, + deprecated=self.deprecated, + ) + return info_dict + + def get_usage(self, ctx: Context) -> str: + """Formats the usage line into a string and returns it. + + Calls :meth:`format_usage` internally. + """ + formatter = ctx.make_formatter() + self.format_usage(ctx, formatter) + return formatter.getvalue().rstrip("\n") + + def get_params(self, ctx: Context) -> t.List["Parameter"]: + rv = self.params + help_option = self.get_help_option(ctx) + + if help_option is not None: + rv = [*rv, help_option] + + return rv + + def format_usage(self, ctx: Context, formatter: HelpFormatter) -> None: + """Writes the usage line into the formatter. + + This is a low-level method called by :meth:`get_usage`. + """ + pieces = self.collect_usage_pieces(ctx) + formatter.write_usage(ctx.command_path, " ".join(pieces)) + + def collect_usage_pieces(self, ctx: Context) -> t.List[str]: + """Returns all the pieces that go into the usage line and returns + it as a list of strings. + """ + rv = [self.options_metavar] if self.options_metavar else [] + + for param in self.get_params(ctx): + rv.extend(param.get_usage_pieces(ctx)) + + return rv + + def get_help_option_names(self, ctx: Context) -> t.List[str]: + """Returns the names for the help option.""" + all_names = set(ctx.help_option_names) + for param in self.params: + all_names.difference_update(param.opts) + all_names.difference_update(param.secondary_opts) + return list(all_names) + + def get_help_option(self, ctx: Context) -> t.Optional["Option"]: + """Returns the help option object. + + Unless ``add_help_option`` is ``False``. + + .. versionchanged:: 8.1.8 + The help option is now cached to avoid creating it multiple times. + """ + help_options = self.get_help_option_names(ctx) + + if not help_options or not self.add_help_option: + return None + + # Cache the help option object in private _help_option attribute to + # avoid creating it multiple times. Not doing this will break the + # callback odering by iter_params_for_processing(), which relies on + # object comparison. + if self._help_option is None: + # Avoid circular import. + from .decorators import HelpOption + + self._help_option = HelpOption(help_options) + + return self._help_option + + def make_parser(self, ctx: Context) -> OptionParser: + """Creates the underlying option parser for this command.""" + parser = OptionParser(ctx) + for param in self.get_params(ctx): + param.add_to_parser(parser, ctx) + return parser + + def get_help(self, ctx: Context) -> str: + """Formats the help into a string and returns it. + + Calls :meth:`format_help` internally. + """ + formatter = ctx.make_formatter() + self.format_help(ctx, formatter) + return formatter.getvalue().rstrip("\n") + + def get_short_help_str(self, limit: int = 45) -> str: + """Gets short help for the command or makes it by shortening the + long help string. 
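+
+        :param limit: the maximum number of characters used when the text is
+            derived from the long help; it is ignored when an explicit
+            ``short_help`` was provided.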
+ """ + if self.short_help: + text = inspect.cleandoc(self.short_help) + elif self.help: + text = make_default_short_help(self.help, limit) + else: + text = "" + + if self.deprecated: + text = _("(Deprecated) {text}").format(text=text) + + return text.strip() + + def format_help(self, ctx: Context, formatter: HelpFormatter) -> None: + """Writes the help into the formatter if it exists. + + This is a low-level method called by :meth:`get_help`. + + This calls the following methods: + + - :meth:`format_usage` + - :meth:`format_help_text` + - :meth:`format_options` + - :meth:`format_epilog` + """ + self.format_usage(ctx, formatter) + self.format_help_text(ctx, formatter) + self.format_options(ctx, formatter) + self.format_epilog(ctx, formatter) + + def format_help_text(self, ctx: Context, formatter: HelpFormatter) -> None: + """Writes the help text to the formatter if it exists.""" + if self.help is not None: + # truncate the help text to the first form feed + text = inspect.cleandoc(self.help).partition("\f")[0] + else: + text = "" + + if self.deprecated: + text = _("(Deprecated) {text}").format(text=text) + + if text: + formatter.write_paragraph() + + with formatter.indentation(): + formatter.write_text(text) + + def format_options(self, ctx: Context, formatter: HelpFormatter) -> None: + """Writes all the options into the formatter if they exist.""" + opts = [] + for param in self.get_params(ctx): + rv = param.get_help_record(ctx) + if rv is not None: + opts.append(rv) + + if opts: + with formatter.section(_("Options")): + formatter.write_dl(opts) + + def format_epilog(self, ctx: Context, formatter: HelpFormatter) -> None: + """Writes the epilog into the formatter if it exists.""" + if self.epilog: + epilog = inspect.cleandoc(self.epilog) + formatter.write_paragraph() + + with formatter.indentation(): + formatter.write_text(epilog) + + def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]: + if not args and self.no_args_is_help and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + parser = self.make_parser(ctx) + opts, args, param_order = parser.parse_args(args=args) + + for param in iter_params_for_processing(param_order, self.get_params(ctx)): + value, args = param.handle_parse_result(ctx, opts, args) + + if args and not ctx.allow_extra_args and not ctx.resilient_parsing: + ctx.fail( + ngettext( + "Got unexpected extra argument ({args})", + "Got unexpected extra arguments ({args})", + len(args), + ).format(args=" ".join(map(str, args))) + ) + + ctx.args = args + ctx._opt_prefixes.update(parser._opt_prefixes) + return args + + def invoke(self, ctx: Context) -> t.Any: + """Given a context, this invokes the attached callback (if it exists) + in the right way. + """ + if self.deprecated: + message = _( + "DeprecationWarning: The command {name!r} is deprecated." + ).format(name=self.name) + echo(style(message, fg="red"), err=True) + + if self.callback is not None: + return ctx.invoke(self.callback, **ctx.params) + + def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: + """Return a list of completions for the incomplete value. Looks + at the names of options and chained multi-commands. + + :param ctx: Invocation context for this command. + :param incomplete: Value being completed. May be empty. + + .. 
versionadded:: 8.0 + """ + from click.shell_completion import CompletionItem + + results: t.List[CompletionItem] = [] + + if incomplete and not incomplete[0].isalnum(): + for param in self.get_params(ctx): + if ( + not isinstance(param, Option) + or param.hidden + or ( + not param.multiple + and ctx.get_parameter_source(param.name) # type: ignore + is ParameterSource.COMMANDLINE + ) + ): + continue + + results.extend( + CompletionItem(name, help=param.help) + for name in [*param.opts, *param.secondary_opts] + if name.startswith(incomplete) + ) + + results.extend(super().shell_complete(ctx, incomplete)) + return results + + +class MultiCommand(Command): + """A multi command is the basic implementation of a command that + dispatches to subcommands. The most common version is the + :class:`Group`. + + :param invoke_without_command: this controls how the multi command itself + is invoked. By default it's only invoked + if a subcommand is provided. + :param no_args_is_help: this controls what happens if no arguments are + provided. This option is enabled by default if + `invoke_without_command` is disabled or disabled + if it's enabled. If enabled this will add + ``--help`` as argument if no arguments are + passed. + :param subcommand_metavar: the string that is used in the documentation + to indicate the subcommand place. + :param chain: if this is set to `True` chaining of multiple subcommands + is enabled. This restricts the form of commands in that + they cannot have optional arguments but it allows + multiple commands to be chained together. + :param result_callback: The result callback to attach to this multi + command. This can be set or changed later with the + :meth:`result_callback` decorator. + :param attrs: Other command arguments described in :class:`Command`. + """ + + allow_extra_args = True + allow_interspersed_args = False + + def __init__( + self, + name: t.Optional[str] = None, + invoke_without_command: bool = False, + no_args_is_help: t.Optional[bool] = None, + subcommand_metavar: t.Optional[str] = None, + chain: bool = False, + result_callback: t.Optional[t.Callable[..., t.Any]] = None, + **attrs: t.Any, + ) -> None: + super().__init__(name, **attrs) + + if no_args_is_help is None: + no_args_is_help = not invoke_without_command + + self.no_args_is_help = no_args_is_help + self.invoke_without_command = invoke_without_command + + if subcommand_metavar is None: + if chain: + subcommand_metavar = "COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]..." + else: + subcommand_metavar = "COMMAND [ARGS]..." + + self.subcommand_metavar = subcommand_metavar + self.chain = chain + # The result callback that is stored. This can be set or + # overridden with the :func:`result_callback` decorator. + self._result_callback = result_callback + + if self.chain: + for param in self.params: + if isinstance(param, Argument) and not param.required: + raise RuntimeError( + "Multi commands in chain mode cannot have" + " optional arguments." 
+ ) + + def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict(ctx) + commands = {} + + for name in self.list_commands(ctx): + command = self.get_command(ctx, name) + + if command is None: + continue + + sub_ctx = ctx._make_sub_context(command) + + with sub_ctx.scope(cleanup=False): + commands[name] = command.to_info_dict(sub_ctx) + + info_dict.update(commands=commands, chain=self.chain) + return info_dict + + def collect_usage_pieces(self, ctx: Context) -> t.List[str]: + rv = super().collect_usage_pieces(ctx) + rv.append(self.subcommand_metavar) + return rv + + def format_options(self, ctx: Context, formatter: HelpFormatter) -> None: + super().format_options(ctx, formatter) + self.format_commands(ctx, formatter) + + def result_callback(self, replace: bool = False) -> t.Callable[[F], F]: + """Adds a result callback to the command. By default if a + result callback is already registered this will chain them but + this can be disabled with the `replace` parameter. The result + callback is invoked with the return value of the subcommand + (or the list of return values from all subcommands if chaining + is enabled) as well as the parameters as they would be passed + to the main callback. + + Example:: + + @click.group() + @click.option('-i', '--input', default=23) + def cli(input): + return 42 + + @cli.result_callback() + def process_result(result, input): + return result + input + + :param replace: if set to `True` an already existing result + callback will be removed. + + .. versionchanged:: 8.0 + Renamed from ``resultcallback``. + + .. versionadded:: 3.0 + """ + + def decorator(f: F) -> F: + old_callback = self._result_callback + + if old_callback is None or replace: + self._result_callback = f + return f + + def function(__value, *args, **kwargs): # type: ignore + inner = old_callback(__value, *args, **kwargs) + return f(inner, *args, **kwargs) + + self._result_callback = rv = update_wrapper(t.cast(F, function), f) + return rv # type: ignore[return-value] + + return decorator + + def format_commands(self, ctx: Context, formatter: HelpFormatter) -> None: + """Extra format methods for multi methods that adds all the commands + after the options. + """ + commands = [] + for subcommand in self.list_commands(ctx): + cmd = self.get_command(ctx, subcommand) + # What is this, the tool lied about a command. 
Ignore it + if cmd is None: + continue + if cmd.hidden: + continue + + commands.append((subcommand, cmd)) + + # allow for 3 times the default spacing + if len(commands): + limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands) + + rows = [] + for subcommand, cmd in commands: + help = cmd.get_short_help_str(limit) + rows.append((subcommand, help)) + + if rows: + with formatter.section(_("Commands")): + formatter.write_dl(rows) + + def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]: + if not args and self.no_args_is_help and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + rest = super().parse_args(ctx, args) + + if self.chain: + ctx.protected_args = rest + ctx.args = [] + elif rest: + ctx.protected_args, ctx.args = rest[:1], rest[1:] + + return ctx.args + + def invoke(self, ctx: Context) -> t.Any: + def _process_result(value: t.Any) -> t.Any: + if self._result_callback is not None: + value = ctx.invoke(self._result_callback, value, **ctx.params) + return value + + if not ctx.protected_args: + if self.invoke_without_command: + # No subcommand was invoked, so the result callback is + # invoked with the group return value for regular + # groups, or an empty list for chained groups. + with ctx: + rv = super().invoke(ctx) + return _process_result([] if self.chain else rv) + ctx.fail(_("Missing command.")) + + # Fetch args back out + args = [*ctx.protected_args, *ctx.args] + ctx.args = [] + ctx.protected_args = [] + + # If we're not in chain mode, we only allow the invocation of a + # single command but we also inform the current context about the + # name of the command to invoke. + if not self.chain: + # Make sure the context is entered so we do not clean up + # resources until the result processor has worked. + with ctx: + cmd_name, cmd, args = self.resolve_command(ctx, args) + assert cmd is not None + ctx.invoked_subcommand = cmd_name + super().invoke(ctx) + sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) + with sub_ctx: + return _process_result(sub_ctx.command.invoke(sub_ctx)) + + # In chain mode we create the contexts step by step, but after the + # base command has been invoked. Because at that point we do not + # know the subcommands yet, the invoked subcommand attribute is + # set to ``*`` to inform the command that subcommands are executed + # but nothing else. + with ctx: + ctx.invoked_subcommand = "*" if args else None + super().invoke(ctx) + + # Otherwise we make every single context and invoke them in a + # chain. In that case the return value to the result processor + # is the list of all invoked subcommand's results. + contexts = [] + while args: + cmd_name, cmd, args = self.resolve_command(ctx, args) + assert cmd is not None + sub_ctx = cmd.make_context( + cmd_name, + args, + parent=ctx, + allow_extra_args=True, + allow_interspersed_args=False, + ) + contexts.append(sub_ctx) + args, sub_ctx.args = sub_ctx.args, [] + + rv = [] + for sub_ctx in contexts: + with sub_ctx: + rv.append(sub_ctx.command.invoke(sub_ctx)) + return _process_result(rv) + + def resolve_command( + self, ctx: Context, args: t.List[str] + ) -> t.Tuple[t.Optional[str], t.Optional[Command], t.List[str]]: + cmd_name = make_str(args[0]) + original_cmd_name = cmd_name + + # Get the command + cmd = self.get_command(ctx, cmd_name) + + # If we can't find the command but there is a normalization + # function available, we try with that one. 
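+        # For instance, with ``token_normalize_func=str.lower`` a user-typed
+        # "SYNC" (token illustrative) can still resolve to a command
+        # registered as "sync".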
+ if cmd is None and ctx.token_normalize_func is not None: + cmd_name = ctx.token_normalize_func(cmd_name) + cmd = self.get_command(ctx, cmd_name) + + # If we don't find the command we want to show an error message + # to the user that it was not provided. However, there is + # something else we should do: if the first argument looks like + # an option we want to kick off parsing again for arguments to + # resolve things like --help which now should go to the main + # place. + if cmd is None and not ctx.resilient_parsing: + if split_opt(cmd_name)[0]: + self.parse_args(ctx, ctx.args) + ctx.fail(_("No such command {name!r}.").format(name=original_cmd_name)) + return cmd_name if cmd else None, cmd, args[1:] + + def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]: + """Given a context and a command name, this returns a + :class:`Command` object if it exists or returns `None`. + """ + raise NotImplementedError + + def list_commands(self, ctx: Context) -> t.List[str]: + """Returns a list of subcommand names in the order they should + appear. + """ + return [] + + def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: + """Return a list of completions for the incomplete value. Looks + at the names of options, subcommands, and chained + multi-commands. + + :param ctx: Invocation context for this command. + :param incomplete: Value being completed. May be empty. + + .. versionadded:: 8.0 + """ + from click.shell_completion import CompletionItem + + results = [ + CompletionItem(name, help=command.get_short_help_str()) + for name, command in _complete_visible_commands(ctx, incomplete) + ] + results.extend(super().shell_complete(ctx, incomplete)) + return results + + +class Group(MultiCommand): + """A group allows a command to have subcommands attached. This is + the most common way to implement nesting in Click. + + :param name: The name of the group command. + :param commands: A dict mapping names to :class:`Command` objects. + Can also be a list of :class:`Command`, which will use + :attr:`Command.name` to create the dict. + :param attrs: Other command arguments described in + :class:`MultiCommand`, :class:`Command`, and + :class:`BaseCommand`. + + .. versionchanged:: 8.0 + The ``commands`` argument can be a list of command objects. + """ + + #: If set, this is used by the group's :meth:`command` decorator + #: as the default :class:`Command` class. This is useful to make all + #: subcommands use a custom command class. + #: + #: .. versionadded:: 8.0 + command_class: t.Optional[t.Type[Command]] = None + + #: If set, this is used by the group's :meth:`group` decorator + #: as the default :class:`Group` class. This is useful to make all + #: subgroups use a custom group class. + #: + #: If set to the special value :class:`type` (literally + #: ``group_class = type``), this group's class will be used as the + #: default class. This makes a custom group class continue to make + #: custom groups. + #: + #: .. 
versionadded:: 8.0 + group_class: t.Optional[t.Union[t.Type["Group"], t.Type[type]]] = None + # Literal[type] isn't valid, so use Type[type] + + def __init__( + self, + name: t.Optional[str] = None, + commands: t.Optional[ + t.Union[t.MutableMapping[str, Command], t.Sequence[Command]] + ] = None, + **attrs: t.Any, + ) -> None: + super().__init__(name, **attrs) + + if commands is None: + commands = {} + elif isinstance(commands, abc.Sequence): + commands = {c.name: c for c in commands if c.name is not None} + + #: The registered subcommands by their exported names. + self.commands: t.MutableMapping[str, Command] = commands + + def add_command(self, cmd: Command, name: t.Optional[str] = None) -> None: + """Registers another :class:`Command` with this group. If the name + is not provided, the name of the command is used. + """ + name = name or cmd.name + if name is None: + raise TypeError("Command has no name.") + _check_multicommand(self, name, cmd, register=True) + self.commands[name] = cmd + + @t.overload + def command(self, __func: t.Callable[..., t.Any]) -> Command: ... + + @t.overload + def command( + self, *args: t.Any, **kwargs: t.Any + ) -> t.Callable[[t.Callable[..., t.Any]], Command]: ... + + def command( + self, *args: t.Any, **kwargs: t.Any + ) -> t.Union[t.Callable[[t.Callable[..., t.Any]], Command], Command]: + """A shortcut decorator for declaring and attaching a command to + the group. This takes the same arguments as :func:`command` and + immediately registers the created command with this group by + calling :meth:`add_command`. + + To customize the command class used, set the + :attr:`command_class` attribute. + + .. versionchanged:: 8.1 + This decorator can be applied without parentheses. + + .. versionchanged:: 8.0 + Added the :attr:`command_class` attribute. + """ + from .decorators import command + + func: t.Optional[t.Callable[..., t.Any]] = None + + if args and callable(args[0]): + assert ( + len(args) == 1 and not kwargs + ), "Use 'command(**kwargs)(callable)' to provide arguments." + (func,) = args + args = () + + if self.command_class and kwargs.get("cls") is None: + kwargs["cls"] = self.command_class + + def decorator(f: t.Callable[..., t.Any]) -> Command: + cmd: Command = command(*args, **kwargs)(f) + self.add_command(cmd) + return cmd + + if func is not None: + return decorator(func) + + return decorator + + @t.overload + def group(self, __func: t.Callable[..., t.Any]) -> "Group": ... + + @t.overload + def group( + self, *args: t.Any, **kwargs: t.Any + ) -> t.Callable[[t.Callable[..., t.Any]], "Group"]: ... + + def group( + self, *args: t.Any, **kwargs: t.Any + ) -> t.Union[t.Callable[[t.Callable[..., t.Any]], "Group"], "Group"]: + """A shortcut decorator for declaring and attaching a group to + the group. This takes the same arguments as :func:`group` and + immediately registers the created group with this group by + calling :meth:`add_command`. + + To customize the group class used, set the :attr:`group_class` + attribute. + + .. versionchanged:: 8.1 + This decorator can be applied without parentheses. + + .. versionchanged:: 8.0 + Added the :attr:`group_class` attribute. + """ + from .decorators import group + + func: t.Optional[t.Callable[..., t.Any]] = None + + if args and callable(args[0]): + assert ( + len(args) == 1 and not kwargs + ), "Use 'group(**kwargs)(callable)' to provide arguments." 
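+            # Bare use (``@cli.group`` without parentheses): the decorated
+            # function arrives as the only positional argument, so unpack it
+            # and fall through to the regular decorator path below.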
+ (func,) = args + args = () + + if self.group_class is not None and kwargs.get("cls") is None: + if self.group_class is type: + kwargs["cls"] = type(self) + else: + kwargs["cls"] = self.group_class + + def decorator(f: t.Callable[..., t.Any]) -> "Group": + cmd: Group = group(*args, **kwargs)(f) + self.add_command(cmd) + return cmd + + if func is not None: + return decorator(func) + + return decorator + + def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]: + return self.commands.get(cmd_name) + + def list_commands(self, ctx: Context) -> t.List[str]: + return sorted(self.commands) + + +class CommandCollection(MultiCommand): + """A command collection is a multi command that merges multiple multi + commands together into one. This is a straightforward implementation + that accepts a list of different multi commands as sources and + provides all the commands for each of them. + + See :class:`MultiCommand` and :class:`Command` for the description of + ``name`` and ``attrs``. + """ + + def __init__( + self, + name: t.Optional[str] = None, + sources: t.Optional[t.List[MultiCommand]] = None, + **attrs: t.Any, + ) -> None: + super().__init__(name, **attrs) + #: The list of registered multi commands. + self.sources: t.List[MultiCommand] = sources or [] + + def add_source(self, multi_cmd: MultiCommand) -> None: + """Adds a new multi command to the chain dispatcher.""" + self.sources.append(multi_cmd) + + def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]: + for source in self.sources: + rv = source.get_command(ctx, cmd_name) + + if rv is not None: + if self.chain: + _check_multicommand(self, cmd_name, rv) + + return rv + + return None + + def list_commands(self, ctx: Context) -> t.List[str]: + rv: t.Set[str] = set() + + for source in self.sources: + rv.update(source.list_commands(ctx)) + + return sorted(rv) + + +def _check_iter(value: t.Any) -> t.Iterator[t.Any]: + """Check if the value is iterable but not a string. Raises a type + error, or return an iterator over the value. + """ + if isinstance(value, str): + raise TypeError + + return iter(value) + + +class Parameter: + r"""A parameter to a command comes in two versions: they are either + :class:`Option`\s or :class:`Argument`\s. Other subclasses are currently + not supported by design as some of the internals for parsing are + intentionally not finalized. + + Some settings are supported by both options and arguments. + + :param param_decls: the parameter declarations for this option or + argument. This is a list of flags or argument + names. + :param type: the type that should be used. Either a :class:`ParamType` + or a Python type. The latter is converted into the former + automatically if supported. + :param required: controls if this is optional or not. + :param default: the default value if omitted. This can also be a callable, + in which case it's invoked when the default is needed + without any arguments. + :param callback: A function to further process or validate the value + after type conversion. It is called as ``f(ctx, param, value)`` + and must return the value. It is called for all sources, + including prompts. + :param nargs: the number of arguments to match. If not ``1`` the return + value is a tuple instead of single value. The default for + nargs is ``1`` (except if the type is a tuple, then it's + the arity of the tuple). If ``nargs=-1``, all remaining + parameters are collected. + :param metavar: how the value is represented in the help page. 
+ :param expose_value: if this is `True` then the value is passed onwards + to the command callback and stored on the context, + otherwise it's skipped. + :param is_eager: eager values are processed before non eager ones. This + should not be set for arguments or it will inverse the + order of processing. + :param envvar: a string or list of strings that are environment variables + that should be checked. + :param shell_complete: A function that returns custom shell + completions. Used instead of the param's type completion if + given. Takes ``ctx, param, incomplete`` and must return a list + of :class:`~click.shell_completion.CompletionItem` or a list of + strings. + + .. versionchanged:: 8.0 + ``process_value`` validates required parameters and bounded + ``nargs``, and invokes the parameter callback before returning + the value. This allows the callback to validate prompts. + ``full_process_value`` is removed. + + .. versionchanged:: 8.0 + ``autocompletion`` is renamed to ``shell_complete`` and has new + semantics described above. The old name is deprecated and will + be removed in 8.1, until then it will be wrapped to match the + new requirements. + + .. versionchanged:: 8.0 + For ``multiple=True, nargs>1``, the default must be a list of + tuples. + + .. versionchanged:: 8.0 + Setting a default is no longer required for ``nargs>1``, it will + default to ``None``. ``multiple=True`` or ``nargs=-1`` will + default to ``()``. + + .. versionchanged:: 7.1 + Empty environment variables are ignored rather than taking the + empty string value. This makes it possible for scripts to clear + variables if they can't unset them. + + .. versionchanged:: 2.0 + Changed signature for parameter callback to also be passed the + parameter. The old callback format will still work, but it will + raise a warning to give you a chance to migrate the code easier. + """ + + param_type_name = "parameter" + + def __init__( + self, + param_decls: t.Optional[t.Sequence[str]] = None, + type: t.Optional[t.Union[types.ParamType, t.Any]] = None, + required: bool = False, + default: t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]] = None, + callback: t.Optional[t.Callable[[Context, "Parameter", t.Any], t.Any]] = None, + nargs: t.Optional[int] = None, + multiple: bool = False, + metavar: t.Optional[str] = None, + expose_value: bool = True, + is_eager: bool = False, + envvar: t.Optional[t.Union[str, t.Sequence[str]]] = None, + shell_complete: t.Optional[ + t.Callable[ + [Context, "Parameter", str], + t.Union[t.List["CompletionItem"], t.List[str]], + ] + ] = None, + ) -> None: + self.name: t.Optional[str] + self.opts: t.List[str] + self.secondary_opts: t.List[str] + self.name, self.opts, self.secondary_opts = self._parse_decls( + param_decls or (), expose_value + ) + self.type: types.ParamType = types.convert_type(type, default) + + # Default nargs to what the type tells us if we have that + # information available. + if nargs is None: + if self.type.is_composite: + nargs = self.type.arity + else: + nargs = 1 + + self.required = required + self.callback = callback + self.nargs = nargs + self.multiple = multiple + self.expose_value = expose_value + self.default = default + self.is_eager = is_eager + self.metavar = metavar + self.envvar = envvar + self._custom_shell_complete = shell_complete + + if __debug__: + if self.type.is_composite and nargs != self.type.arity: + raise ValueError( + f"'nargs' must be {self.type.arity} (or None) for" + f" type {self.type!r}, but it was {nargs}." 
+ ) + + # Skip no default or callable default. + check_default = default if not callable(default) else None + + if check_default is not None: + if multiple: + try: + # Only check the first value against nargs. + check_default = next(_check_iter(check_default), None) + except TypeError: + raise ValueError( + "'default' must be a list when 'multiple' is true." + ) from None + + # Can be None for multiple with empty default. + if nargs != 1 and check_default is not None: + try: + _check_iter(check_default) + except TypeError: + if multiple: + message = ( + "'default' must be a list of lists when 'multiple' is" + " true and 'nargs' != 1." + ) + else: + message = "'default' must be a list when 'nargs' != 1." + + raise ValueError(message) from None + + if nargs > 1 and len(check_default) != nargs: + subject = "item length" if multiple else "length" + raise ValueError( + f"'default' {subject} must match nargs={nargs}." + ) + + def to_info_dict(self) -> t.Dict[str, t.Any]: + """Gather information that could be useful for a tool generating + user-facing documentation. + + Use :meth:`click.Context.to_info_dict` to traverse the entire + CLI structure. + + .. versionadded:: 8.0 + """ + return { + "name": self.name, + "param_type_name": self.param_type_name, + "opts": self.opts, + "secondary_opts": self.secondary_opts, + "type": self.type.to_info_dict(), + "required": self.required, + "nargs": self.nargs, + "multiple": self.multiple, + "default": self.default, + "envvar": self.envvar, + } + + def __repr__(self) -> str: + return f"<{self.__class__.__name__} {self.name}>" + + def _parse_decls( + self, decls: t.Sequence[str], expose_value: bool + ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]: + raise NotImplementedError() + + @property + def human_readable_name(self) -> str: + """Returns the human readable name of this parameter. This is the + same as the name for options, but the metavar for arguments. + """ + return self.name # type: ignore + + def make_metavar(self) -> str: + if self.metavar is not None: + return self.metavar + + metavar = self.type.get_metavar(self) + + if metavar is None: + metavar = self.type.name.upper() + + if self.nargs != 1: + metavar += "..." + + return metavar + + @t.overload + def get_default( + self, ctx: Context, call: "te.Literal[True]" = True + ) -> t.Optional[t.Any]: ... + + @t.overload + def get_default( + self, ctx: Context, call: bool = ... + ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: ... + + def get_default( + self, ctx: Context, call: bool = True + ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: + """Get the default for the parameter. Tries + :meth:`Context.lookup_default` first, then the local default. + + :param ctx: Current context. + :param call: If the default is a callable, call it. Disable to + return the callable instead. + + .. versionchanged:: 8.0.2 + Type casting is no longer performed when getting a default. + + .. versionchanged:: 8.0.1 + Type casting can fail in resilient parsing mode. Invalid + defaults will not prevent showing help text. + + .. versionchanged:: 8.0 + Looks at ``ctx.default_map`` first. + + .. versionchanged:: 8.0 + Added the ``call`` parameter. 
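+
+        With a callable default such as ``default=list`` (illustrative),
+        this returns a fresh ``[]``, while ``call=False`` returns the
+        callable itself.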
+ """ + value = ctx.lookup_default(self.name, call=False) # type: ignore + + if value is None: + value = self.default + + if call and callable(value): + value = value() + + return value + + def add_to_parser(self, parser: OptionParser, ctx: Context) -> None: + raise NotImplementedError() + + def consume_value( + self, ctx: Context, opts: t.Mapping[str, t.Any] + ) -> t.Tuple[t.Any, ParameterSource]: + value = opts.get(self.name) # type: ignore + source = ParameterSource.COMMANDLINE + + if value is None: + value = self.value_from_envvar(ctx) + source = ParameterSource.ENVIRONMENT + + if value is None: + value = ctx.lookup_default(self.name) # type: ignore + source = ParameterSource.DEFAULT_MAP + + if value is None: + value = self.get_default(ctx) + source = ParameterSource.DEFAULT + + return value, source + + def type_cast_value(self, ctx: Context, value: t.Any) -> t.Any: + """Convert and validate a value against the option's + :attr:`type`, :attr:`multiple`, and :attr:`nargs`. + """ + if value is None: + return () if self.multiple or self.nargs == -1 else None + + def check_iter(value: t.Any) -> t.Iterator[t.Any]: + try: + return _check_iter(value) + except TypeError: + # This should only happen when passing in args manually, + # the parser should construct an iterable when parsing + # the command line. + raise BadParameter( + _("Value must be an iterable."), ctx=ctx, param=self + ) from None + + if self.nargs == 1 or self.type.is_composite: + + def convert(value: t.Any) -> t.Any: + return self.type(value, param=self, ctx=ctx) + + elif self.nargs == -1: + + def convert(value: t.Any) -> t.Any: # t.Tuple[t.Any, ...] + return tuple(self.type(x, self, ctx) for x in check_iter(value)) + + else: # nargs > 1 + + def convert(value: t.Any) -> t.Any: # t.Tuple[t.Any, ...] 
+ value = tuple(check_iter(value)) + + if len(value) != self.nargs: + raise BadParameter( + ngettext( + "Takes {nargs} values but 1 was given.", + "Takes {nargs} values but {len} were given.", + len(value), + ).format(nargs=self.nargs, len=len(value)), + ctx=ctx, + param=self, + ) + + return tuple(self.type(x, self, ctx) for x in value) + + if self.multiple: + return tuple(convert(x) for x in check_iter(value)) + + return convert(value) + + def value_is_missing(self, value: t.Any) -> bool: + if value is None: + return True + + if (self.nargs != 1 or self.multiple) and value == (): + return True + + return False + + def process_value(self, ctx: Context, value: t.Any) -> t.Any: + value = self.type_cast_value(ctx, value) + + if self.required and self.value_is_missing(value): + raise MissingParameter(ctx=ctx, param=self) + + if self.callback is not None: + value = self.callback(ctx, self, value) + + return value + + def resolve_envvar_value(self, ctx: Context) -> t.Optional[str]: + if self.envvar is None: + return None + + if isinstance(self.envvar, str): + rv = os.environ.get(self.envvar) + + if rv: + return rv + else: + for envvar in self.envvar: + rv = os.environ.get(envvar) + + if rv: + return rv + + return None + + def value_from_envvar(self, ctx: Context) -> t.Optional[t.Any]: + rv: t.Optional[t.Any] = self.resolve_envvar_value(ctx) + + if rv is not None and self.nargs != 1: + rv = self.type.split_envvar_value(rv) + + return rv + + def handle_parse_result( + self, ctx: Context, opts: t.Mapping[str, t.Any], args: t.List[str] + ) -> t.Tuple[t.Any, t.List[str]]: + with augment_usage_errors(ctx, param=self): + value, source = self.consume_value(ctx, opts) + ctx.set_parameter_source(self.name, source) # type: ignore + + try: + value = self.process_value(ctx, value) + except Exception: + if not ctx.resilient_parsing: + raise + + value = None + + if self.expose_value: + ctx.params[self.name] = value # type: ignore + + return value, args + + def get_help_record(self, ctx: Context) -> t.Optional[t.Tuple[str, str]]: + pass + + def get_usage_pieces(self, ctx: Context) -> t.List[str]: + return [] + + def get_error_hint(self, ctx: Context) -> str: + """Get a stringified version of the param for use in error messages to + indicate which param caused the error. + """ + hint_list = self.opts or [self.human_readable_name] + return " / ".join(f"'{x}'" for x in hint_list) + + def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: + """Return a list of completions for the incomplete value. If a + ``shell_complete`` function was given during init, it is used. + Otherwise, the :attr:`type` + :meth:`~click.types.ParamType.shell_complete` function is used. + + :param ctx: Invocation context for this command. + :param incomplete: Value being completed. May be empty. + + .. versionadded:: 8.0 + """ + if self._custom_shell_complete is not None: + results = self._custom_shell_complete(ctx, self, incomplete) + + if results and isinstance(results[0], str): + from click.shell_completion import CompletionItem + + results = [CompletionItem(c) for c in results] + + return t.cast(t.List["CompletionItem"], results) + + return self.type.shell_complete(ctx, self, incomplete) + + +class Option(Parameter): + """Options are usually optional values on the command line and + have some extra features that arguments don't have. + + All other parameters are passed onwards to the parameter constructor. + + :param show_default: Show the default value for this option in its + help text. 
Values are not shown by default, unless + :attr:`Context.show_default` is ``True``. If this value is a + string, it shows that string in parentheses instead of the + actual value. This is particularly useful for dynamic options. + For single option boolean flags, the default remains hidden if + its value is ``False``. + :param show_envvar: Controls if an environment variable should be + shown on the help page. Normally, environment variables are not + shown. + :param prompt: If set to ``True`` or a non empty string then the + user will be prompted for input. If set to ``True`` the prompt + will be the option name capitalized. + :param confirmation_prompt: Prompt a second time to confirm the + value if it was prompted for. Can be set to a string instead of + ``True`` to customize the message. + :param prompt_required: If set to ``False``, the user will be + prompted for input only when the option was specified as a flag + without a value. + :param hide_input: If this is ``True`` then the input on the prompt + will be hidden from the user. This is useful for password input. + :param is_flag: forces this option to act as a flag. The default is + auto detection. + :param flag_value: which value should be used for this flag if it's + enabled. This is set to a boolean automatically if + the option string contains a slash to mark two options. + :param multiple: if this is set to `True` then the argument is accepted + multiple times and recorded. This is similar to ``nargs`` + in how it works but supports arbitrary number of + arguments. + :param count: this flag makes an option increment an integer. + :param allow_from_autoenv: if this is enabled then the value of this + parameter will be pulled from an environment + variable in case a prefix is defined on the + context. + :param help: the help string. + :param hidden: hide this option from help outputs. + :param attrs: Other command arguments described in :class:`Parameter`. + + .. versionchanged:: 8.1.0 + Help text indentation is cleaned here instead of only in the + ``@option`` decorator. + + .. versionchanged:: 8.1.0 + The ``show_default`` parameter overrides + ``Context.show_default``. + + .. versionchanged:: 8.1.0 + The default of a single option boolean flag is not shown if the + default value is ``False``. + + .. versionchanged:: 8.0.1 + ``type`` is detected from ``flag_value`` if given. 
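+
+    A hedged usage sketch (editor addition, not part of the original
+    docstring; the command and option names are invented and ``click``
+    is assumed to be imported)::
+
+        @click.command()
+        @click.option("--count", default=1, show_default=True,
+                      help="Number of greetings.")
+        @click.option("--shout/--no-shout", default=False)
+        def greet(count, shout):
+            for _ in range(count):
+                click.echo("HELLO" if shout else "hello")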
+ """ + + param_type_name = "option" + + def __init__( + self, + param_decls: t.Optional[t.Sequence[str]] = None, + show_default: t.Union[bool, str, None] = None, + prompt: t.Union[bool, str] = False, + confirmation_prompt: t.Union[bool, str] = False, + prompt_required: bool = True, + hide_input: bool = False, + is_flag: t.Optional[bool] = None, + flag_value: t.Optional[t.Any] = None, + multiple: bool = False, + count: bool = False, + allow_from_autoenv: bool = True, + type: t.Optional[t.Union[types.ParamType, t.Any]] = None, + help: t.Optional[str] = None, + hidden: bool = False, + show_choices: bool = True, + show_envvar: bool = False, + **attrs: t.Any, + ) -> None: + if help: + help = inspect.cleandoc(help) + + default_is_missing = "default" not in attrs + super().__init__(param_decls, type=type, multiple=multiple, **attrs) + + if prompt is True: + if self.name is None: + raise TypeError("'name' is required with 'prompt=True'.") + + prompt_text: t.Optional[str] = self.name.replace("_", " ").capitalize() + elif prompt is False: + prompt_text = None + else: + prompt_text = prompt + + self.prompt = prompt_text + self.confirmation_prompt = confirmation_prompt + self.prompt_required = prompt_required + self.hide_input = hide_input + self.hidden = hidden + + # If prompt is enabled but not required, then the option can be + # used as a flag to indicate using prompt or flag_value. + self._flag_needs_value = self.prompt is not None and not self.prompt_required + + if is_flag is None: + if flag_value is not None: + # Implicitly a flag because flag_value was set. + is_flag = True + elif self._flag_needs_value: + # Not a flag, but when used as a flag it shows a prompt. + is_flag = False + else: + # Implicitly a flag because flag options were given. + is_flag = bool(self.secondary_opts) + elif is_flag is False and not self._flag_needs_value: + # Not a flag, and prompt is not enabled, can be used as a + # flag if flag_value is set. + self._flag_needs_value = flag_value is not None + + self.default: t.Union[t.Any, t.Callable[[], t.Any]] + + if is_flag and default_is_missing and not self.required: + if multiple: + self.default = () + else: + self.default = False + + if flag_value is None: + flag_value = not self.default + + self.type: types.ParamType + if is_flag and type is None: + # Re-guess the type from the flag value instead of the + # default. + self.type = types.convert_type(None, flag_value) + + self.is_flag: bool = is_flag + self.is_bool_flag: bool = is_flag and isinstance(self.type, types.BoolParamType) + self.flag_value: t.Any = flag_value + + # Counting + self.count = count + if count: + if type is None: + self.type = types.IntRange(min=0) + if default_is_missing: + self.default = 0 + + self.allow_from_autoenv = allow_from_autoenv + self.help = help + self.show_default = show_default + self.show_choices = show_choices + self.show_envvar = show_envvar + + if __debug__: + if self.nargs == -1: + raise TypeError("nargs=-1 is not supported for options.") + + if self.prompt and self.is_flag and not self.is_bool_flag: + raise TypeError("'prompt' is not valid for non-boolean flag.") + + if not self.is_bool_flag and self.secondary_opts: + raise TypeError("Secondary flag is not valid for non-boolean flag.") + + if self.is_bool_flag and self.hide_input and self.prompt is not None: + raise TypeError( + "'prompt' with 'hide_input' is not valid for boolean flag." 
+ ) + + if self.count: + if self.multiple: + raise TypeError("'count' is not valid with 'multiple'.") + + if self.is_flag: + raise TypeError("'count' is not valid with 'is_flag'.") + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict.update( + help=self.help, + prompt=self.prompt, + is_flag=self.is_flag, + flag_value=self.flag_value, + count=self.count, + hidden=self.hidden, + ) + return info_dict + + def _parse_decls( + self, decls: t.Sequence[str], expose_value: bool + ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]: + opts = [] + secondary_opts = [] + name = None + possible_names = [] + + for decl in decls: + if decl.isidentifier(): + if name is not None: + raise TypeError(f"Name '{name}' defined twice") + name = decl + else: + split_char = ";" if decl[:1] == "/" else "/" + if split_char in decl: + first, second = decl.split(split_char, 1) + first = first.rstrip() + if first: + possible_names.append(split_opt(first)) + opts.append(first) + second = second.lstrip() + if second: + secondary_opts.append(second.lstrip()) + if first == second: + raise ValueError( + f"Boolean option {decl!r} cannot use the" + " same flag for true/false." + ) + else: + possible_names.append(split_opt(decl)) + opts.append(decl) + + if name is None and possible_names: + possible_names.sort(key=lambda x: -len(x[0])) # group long options first + name = possible_names[0][1].replace("-", "_").lower() + if not name.isidentifier(): + name = None + + if name is None: + if not expose_value: + return None, opts, secondary_opts + raise TypeError( + f"Could not determine name for option with declarations {decls!r}" + ) + + if not opts and not secondary_opts: + raise TypeError( + f"No options defined but a name was passed ({name})." + " Did you mean to declare an argument instead? Did" + f" you mean to pass '--{name}'?" 
+ ) + + return name, opts, secondary_opts + + def add_to_parser(self, parser: OptionParser, ctx: Context) -> None: + if self.multiple: + action = "append" + elif self.count: + action = "count" + else: + action = "store" + + if self.is_flag: + action = f"{action}_const" + + if self.is_bool_flag and self.secondary_opts: + parser.add_option( + obj=self, opts=self.opts, dest=self.name, action=action, const=True + ) + parser.add_option( + obj=self, + opts=self.secondary_opts, + dest=self.name, + action=action, + const=False, + ) + else: + parser.add_option( + obj=self, + opts=self.opts, + dest=self.name, + action=action, + const=self.flag_value, + ) + else: + parser.add_option( + obj=self, + opts=self.opts, + dest=self.name, + action=action, + nargs=self.nargs, + ) + + def get_help_record(self, ctx: Context) -> t.Optional[t.Tuple[str, str]]: + if self.hidden: + return None + + any_prefix_is_slash = False + + def _write_opts(opts: t.Sequence[str]) -> str: + nonlocal any_prefix_is_slash + + rv, any_slashes = join_options(opts) + + if any_slashes: + any_prefix_is_slash = True + + if not self.is_flag and not self.count: + rv += f" {self.make_metavar()}" + + return rv + + rv = [_write_opts(self.opts)] + + if self.secondary_opts: + rv.append(_write_opts(self.secondary_opts)) + + help = self.help or "" + extra = [] + + if self.show_envvar: + envvar = self.envvar + + if envvar is None: + if ( + self.allow_from_autoenv + and ctx.auto_envvar_prefix is not None + and self.name is not None + ): + envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}" + + if envvar is not None: + var_str = ( + envvar + if isinstance(envvar, str) + else ", ".join(str(d) for d in envvar) + ) + extra.append(_("env var: {var}").format(var=var_str)) + + # Temporarily enable resilient parsing to avoid type casting + # failing for the default. Might be possible to extend this to + # help formatting in general. + resilient = ctx.resilient_parsing + ctx.resilient_parsing = True + + try: + default_value = self.get_default(ctx, call=False) + finally: + ctx.resilient_parsing = resilient + + show_default = False + show_default_is_str = False + + if self.show_default is not None: + if isinstance(self.show_default, str): + show_default_is_str = show_default = True + else: + show_default = self.show_default + elif ctx.show_default is not None: + show_default = ctx.show_default + + if show_default_is_str or (show_default and (default_value is not None)): + if show_default_is_str: + default_string = f"({self.show_default})" + elif isinstance(default_value, (list, tuple)): + default_string = ", ".join(str(d) for d in default_value) + elif inspect.isfunction(default_value): + default_string = _("(dynamic)") + elif self.is_bool_flag and self.secondary_opts: + # For boolean flags that have distinct True/False opts, + # use the opt without prefix instead of the value. 
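+                # (Editor note, illustrative): for "--shout/--no-shout"
+                # with default False, this yields "no-shout" as the shown
+                # default in the help text.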
+ default_string = split_opt( + (self.opts if default_value else self.secondary_opts)[0] + )[1] + elif self.is_bool_flag and not self.secondary_opts and not default_value: + default_string = "" + elif default_value == "": + default_string = '""' + else: + default_string = str(default_value) + + if default_string: + extra.append(_("default: {default}").format(default=default_string)) + + if ( + isinstance(self.type, types._NumberRangeBase) + # skip count with default range type + and not (self.count and self.type.min == 0 and self.type.max is None) + ): + range_str = self.type._describe_range() + + if range_str: + extra.append(range_str) + + if self.required: + extra.append(_("required")) + + if extra: + extra_str = "; ".join(extra) + help = f"{help} [{extra_str}]" if help else f"[{extra_str}]" + + return ("; " if any_prefix_is_slash else " / ").join(rv), help + + @t.overload + def get_default( + self, ctx: Context, call: "te.Literal[True]" = True + ) -> t.Optional[t.Any]: ... + + @t.overload + def get_default( + self, ctx: Context, call: bool = ... + ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: ... + + def get_default( + self, ctx: Context, call: bool = True + ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: + # If we're a non boolean flag our default is more complex because + # we need to look at all flags in the same group to figure out + # if we're the default one in which case we return the flag + # value as default. + if self.is_flag and not self.is_bool_flag: + for param in ctx.command.params: + if param.name == self.name and param.default: + return t.cast(Option, param).flag_value + + return None + + return super().get_default(ctx, call=call) + + def prompt_for_value(self, ctx: Context) -> t.Any: + """This is an alternative flow that can be activated in the full + value processing if a value does not exist. It will prompt the + user until a valid value exists and then returns the processed + value as result. + """ + assert self.prompt is not None + + # Calculate the default before prompting anything to be stable. + default = self.get_default(ctx) + + # If this is a prompt for a flag we need to handle this + # differently. + if self.is_bool_flag: + return confirm(self.prompt, default) + + return prompt( + self.prompt, + default=default, + type=self.type, + hide_input=self.hide_input, + show_choices=self.show_choices, + confirmation_prompt=self.confirmation_prompt, + value_proc=lambda x: self.process_value(ctx, x), + ) + + def resolve_envvar_value(self, ctx: Context) -> t.Optional[str]: + rv = super().resolve_envvar_value(ctx) + + if rv is not None: + return rv + + if ( + self.allow_from_autoenv + and ctx.auto_envvar_prefix is not None + and self.name is not None + ): + envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}" + rv = os.environ.get(envvar) + + if rv: + return rv + + return None + + def value_from_envvar(self, ctx: Context) -> t.Optional[t.Any]: + rv: t.Optional[t.Any] = self.resolve_envvar_value(ctx) + + if rv is None: + return None + + value_depth = (self.nargs != 1) + bool(self.multiple) + + if value_depth > 0: + rv = self.type.split_envvar_value(rv) + + if self.multiple and self.nargs != 1: + rv = batch(rv, self.nargs) + + return rv + + def consume_value( + self, ctx: Context, opts: t.Mapping[str, "Parameter"] + ) -> t.Tuple[t.Any, ParameterSource]: + value, source = super().consume_value(ctx, opts) + + # The parser will emit a sentinel value if the option can be + # given as a flag without a value. 
This is different from None + # to distinguish from the flag not being given at all. + if value is _flag_needs_value: + if self.prompt is not None and not ctx.resilient_parsing: + value = self.prompt_for_value(ctx) + source = ParameterSource.PROMPT + else: + value = self.flag_value + source = ParameterSource.COMMANDLINE + + elif ( + self.multiple + and value is not None + and any(v is _flag_needs_value for v in value) + ): + value = [self.flag_value if v is _flag_needs_value else v for v in value] + source = ParameterSource.COMMANDLINE + + # The value wasn't set, or used the param's default, prompt if + # prompting is enabled. + elif ( + source in {None, ParameterSource.DEFAULT} + and self.prompt is not None + and (self.required or self.prompt_required) + and not ctx.resilient_parsing + ): + value = self.prompt_for_value(ctx) + source = ParameterSource.PROMPT + + return value, source + + +class Argument(Parameter): + """Arguments are positional parameters to a command. They generally + provide fewer features than options but can have infinite ``nargs`` + and are required by default. + + All parameters are passed onwards to the constructor of :class:`Parameter`. + """ + + param_type_name = "argument" + + def __init__( + self, + param_decls: t.Sequence[str], + required: t.Optional[bool] = None, + **attrs: t.Any, + ) -> None: + if required is None: + if attrs.get("default") is not None: + required = False + else: + required = attrs.get("nargs", 1) > 0 + + if "multiple" in attrs: + raise TypeError("__init__() got an unexpected keyword argument 'multiple'.") + + super().__init__(param_decls, required=required, **attrs) + + if __debug__: + if self.default is not None and self.nargs == -1: + raise TypeError("'default' is not supported for nargs=-1.") + + @property + def human_readable_name(self) -> str: + if self.metavar is not None: + return self.metavar + return self.name.upper() # type: ignore + + def make_metavar(self) -> str: + if self.metavar is not None: + return self.metavar + var = self.type.get_metavar(self) + if not var: + var = self.name.upper() # type: ignore + if not self.required: + var = f"[{var}]" + if self.nargs != 1: + var += "..." + return var + + def _parse_decls( + self, decls: t.Sequence[str], expose_value: bool + ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]: + if not decls: + if not expose_value: + return None, [], [] + raise TypeError("Argument is marked as exposed, but does not have a name.") + if len(decls) == 1: + name = arg = decls[0] + name = name.replace("-", "_").lower() + else: + raise TypeError( + "Arguments take exactly one parameter declaration, got" + f" {len(decls)}." 
+ ) + return name, [arg], [] + + def get_usage_pieces(self, ctx: Context) -> t.List[str]: + return [self.make_metavar()] + + def get_error_hint(self, ctx: Context) -> str: + return f"'{self.make_metavar()}'" + + def add_to_parser(self, parser: OptionParser, ctx: Context) -> None: + parser.add_argument(dest=self.name, nargs=self.nargs, obj=self) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/decorators.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/decorators.py new file mode 100644 index 000000000..bcf8906e7 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/decorators.py @@ -0,0 +1,562 @@ +import inspect +import types +import typing as t +from functools import update_wrapper +from gettext import gettext as _ + +from .core import Argument +from .core import Command +from .core import Context +from .core import Group +from .core import Option +from .core import Parameter +from .globals import get_current_context +from .utils import echo + +if t.TYPE_CHECKING: + import typing_extensions as te + + P = te.ParamSpec("P") + +R = t.TypeVar("R") +T = t.TypeVar("T") +_AnyCallable = t.Callable[..., t.Any] +FC = t.TypeVar("FC", bound=t.Union[_AnyCallable, Command]) + + +def pass_context(f: "t.Callable[te.Concatenate[Context, P], R]") -> "t.Callable[P, R]": + """Marks a callback as wanting to receive the current context + object as first argument. + """ + + def new_func(*args: "P.args", **kwargs: "P.kwargs") -> "R": + return f(get_current_context(), *args, **kwargs) + + return update_wrapper(new_func, f) + + +def pass_obj(f: "t.Callable[te.Concatenate[t.Any, P], R]") -> "t.Callable[P, R]": + """Similar to :func:`pass_context`, but only pass the object on the + context onwards (:attr:`Context.obj`). This is useful if that object + represents the state of a nested system. + """ + + def new_func(*args: "P.args", **kwargs: "P.kwargs") -> "R": + return f(get_current_context().obj, *args, **kwargs) + + return update_wrapper(new_func, f) + + +def make_pass_decorator( + object_type: t.Type[T], ensure: bool = False +) -> t.Callable[["t.Callable[te.Concatenate[T, P], R]"], "t.Callable[P, R]"]: + """Given an object type this creates a decorator that will work + similar to :func:`pass_obj` but instead of passing the object of the + current context, it will find the innermost context of type + :func:`object_type`. + + This generates a decorator that works roughly like this:: + + from functools import update_wrapper + + def decorator(f): + @pass_context + def new_func(ctx, *args, **kwargs): + obj = ctx.find_object(object_type) + return ctx.invoke(f, obj, *args, **kwargs) + return update_wrapper(new_func, f) + return decorator + + :param object_type: the type of the object to pass. + :param ensure: if set to `True`, a new object will be created and + remembered on the context if it's not there yet. + """ + + def decorator(f: "t.Callable[te.Concatenate[T, P], R]") -> "t.Callable[P, R]": + def new_func(*args: "P.args", **kwargs: "P.kwargs") -> "R": + ctx = get_current_context() + + obj: t.Optional[T] + if ensure: + obj = ctx.ensure_object(object_type) + else: + obj = ctx.find_object(object_type) + + if obj is None: + raise RuntimeError( + "Managed to invoke callback without a context" + f" object of type {object_type.__name__!r}" + " existing." 
+ ) + + return ctx.invoke(f, obj, *args, **kwargs) + + return update_wrapper(new_func, f) + + return decorator + + +def pass_meta_key( + key: str, *, doc_description: t.Optional[str] = None +) -> "t.Callable[[t.Callable[te.Concatenate[t.Any, P], R]], t.Callable[P, R]]": + """Create a decorator that passes a key from + :attr:`click.Context.meta` as the first argument to the decorated + function. + + :param key: Key in ``Context.meta`` to pass. + :param doc_description: Description of the object being passed, + inserted into the decorator's docstring. Defaults to "the 'key' + key from Context.meta". + + .. versionadded:: 8.0 + """ + + def decorator(f: "t.Callable[te.Concatenate[t.Any, P], R]") -> "t.Callable[P, R]": + def new_func(*args: "P.args", **kwargs: "P.kwargs") -> R: + ctx = get_current_context() + obj = ctx.meta[key] + return ctx.invoke(f, obj, *args, **kwargs) + + return update_wrapper(new_func, f) + + if doc_description is None: + doc_description = f"the {key!r} key from :attr:`click.Context.meta`" + + decorator.__doc__ = ( + f"Decorator that passes {doc_description} as the first argument" + " to the decorated function." + ) + return decorator + + +CmdType = t.TypeVar("CmdType", bound=Command) + + +# variant: no call, directly as decorator for a function. +@t.overload +def command(name: _AnyCallable) -> Command: ... + + +# variant: with positional name and with positional or keyword cls argument: +# @command(namearg, CommandCls, ...) or @command(namearg, cls=CommandCls, ...) +@t.overload +def command( + name: t.Optional[str], + cls: t.Type[CmdType], + **attrs: t.Any, +) -> t.Callable[[_AnyCallable], CmdType]: ... + + +# variant: name omitted, cls _must_ be a keyword argument, @command(cls=CommandCls, ...) +@t.overload +def command( + name: None = None, + *, + cls: t.Type[CmdType], + **attrs: t.Any, +) -> t.Callable[[_AnyCallable], CmdType]: ... + + +# variant: with optional string name, no cls argument provided. +@t.overload +def command( + name: t.Optional[str] = ..., cls: None = None, **attrs: t.Any +) -> t.Callable[[_AnyCallable], Command]: ... + + +def command( + name: t.Union[t.Optional[str], _AnyCallable] = None, + cls: t.Optional[t.Type[CmdType]] = None, + **attrs: t.Any, +) -> t.Union[Command, t.Callable[[_AnyCallable], t.Union[Command, CmdType]]]: + r"""Creates a new :class:`Command` and uses the decorated function as + callback. This will also automatically attach all decorated + :func:`option`\s and :func:`argument`\s as parameters to the command. + + The name of the command defaults to the name of the function with + underscores replaced by dashes. If you want to change that, you can + pass the intended name as the first argument. + + All keyword arguments are forwarded to the underlying command class. + For the ``params`` argument, any decorated params are appended to + the end of the list. + + Once decorated the function turns into a :class:`Command` instance + that can be invoked as a command line utility or be attached to a + command :class:`Group`. + + :param name: the name of the command. This defaults to the function + name with underscores replaced by dashes. + :param cls: the command class to instantiate. This defaults to + :class:`Command`. + + .. versionchanged:: 8.1 + This decorator can be applied without parentheses. + + .. versionchanged:: 8.1 + The ``params`` argument can be used. Decorated params are + appended to the end of the list. 
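+
+    A minimal sketch (editor addition, not from the original docstring;
+    assumes ``click`` is imported)::
+
+        @click.command("sync")
+        def do_sync():
+            "Synchronize the data."
+
+        # ``do_sync`` is now a Command instance named "sync", not a function.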
+ """ + + func: t.Optional[t.Callable[[_AnyCallable], t.Any]] = None + + if callable(name): + func = name + name = None + assert cls is None, "Use 'command(cls=cls)(callable)' to specify a class." + assert not attrs, "Use 'command(**kwargs)(callable)' to provide arguments." + + if cls is None: + cls = t.cast(t.Type[CmdType], Command) + + def decorator(f: _AnyCallable) -> CmdType: + if isinstance(f, Command): + raise TypeError("Attempted to convert a callback into a command twice.") + + attr_params = attrs.pop("params", None) + params = attr_params if attr_params is not None else [] + + try: + decorator_params = f.__click_params__ # type: ignore + except AttributeError: + pass + else: + del f.__click_params__ # type: ignore + params.extend(reversed(decorator_params)) + + if attrs.get("help") is None: + attrs["help"] = f.__doc__ + + if t.TYPE_CHECKING: + assert cls is not None + assert not callable(name) + + cmd = cls( + name=name or f.__name__.lower().replace("_", "-"), + callback=f, + params=params, + **attrs, + ) + cmd.__doc__ = f.__doc__ + return cmd + + if func is not None: + return decorator(func) + + return decorator + + +GrpType = t.TypeVar("GrpType", bound=Group) + + +# variant: no call, directly as decorator for a function. +@t.overload +def group(name: _AnyCallable) -> Group: ... + + +# variant: with positional name and with positional or keyword cls argument: +# @group(namearg, GroupCls, ...) or @group(namearg, cls=GroupCls, ...) +@t.overload +def group( + name: t.Optional[str], + cls: t.Type[GrpType], + **attrs: t.Any, +) -> t.Callable[[_AnyCallable], GrpType]: ... + + +# variant: name omitted, cls _must_ be a keyword argument, @group(cmd=GroupCls, ...) +@t.overload +def group( + name: None = None, + *, + cls: t.Type[GrpType], + **attrs: t.Any, +) -> t.Callable[[_AnyCallable], GrpType]: ... + + +# variant: with optional string name, no cls argument provided. +@t.overload +def group( + name: t.Optional[str] = ..., cls: None = None, **attrs: t.Any +) -> t.Callable[[_AnyCallable], Group]: ... + + +def group( + name: t.Union[str, _AnyCallable, None] = None, + cls: t.Optional[t.Type[GrpType]] = None, + **attrs: t.Any, +) -> t.Union[Group, t.Callable[[_AnyCallable], t.Union[Group, GrpType]]]: + """Creates a new :class:`Group` with a function as callback. This + works otherwise the same as :func:`command` just that the `cls` + parameter is set to :class:`Group`. + + .. versionchanged:: 8.1 + This decorator can be applied without parentheses. + """ + if cls is None: + cls = t.cast(t.Type[GrpType], Group) + + if callable(name): + return command(cls=cls, **attrs)(name) + + return command(name, cls, **attrs) + + +def _param_memo(f: t.Callable[..., t.Any], param: Parameter) -> None: + if isinstance(f, Command): + f.params.append(param) + else: + if not hasattr(f, "__click_params__"): + f.__click_params__ = [] # type: ignore + + f.__click_params__.append(param) # type: ignore + + +def argument( + *param_decls: str, cls: t.Optional[t.Type[Argument]] = None, **attrs: t.Any +) -> t.Callable[[FC], FC]: + """Attaches an argument to the command. All positional arguments are + passed as parameter declarations to :class:`Argument`; all keyword + arguments are forwarded unchanged (except ``cls``). + This is equivalent to creating an :class:`Argument` instance manually + and attaching it to the :attr:`Command.params` list. + + For the default argument class, refer to :class:`Argument` and + :class:`Parameter` for descriptions of parameters. + + :param cls: the argument class to instantiate. 
This defaults to + :class:`Argument`. + :param param_decls: Passed as positional arguments to the constructor of + ``cls``. + :param attrs: Passed as keyword arguments to the constructor of ``cls``. + """ + if cls is None: + cls = Argument + + def decorator(f: FC) -> FC: + _param_memo(f, cls(param_decls, **attrs)) + return f + + return decorator + + +def option( + *param_decls: str, cls: t.Optional[t.Type[Option]] = None, **attrs: t.Any +) -> t.Callable[[FC], FC]: + """Attaches an option to the command. All positional arguments are + passed as parameter declarations to :class:`Option`; all keyword + arguments are forwarded unchanged (except ``cls``). + This is equivalent to creating an :class:`Option` instance manually + and attaching it to the :attr:`Command.params` list. + + For the default option class, refer to :class:`Option` and + :class:`Parameter` for descriptions of parameters. + + :param cls: the option class to instantiate. This defaults to + :class:`Option`. + :param param_decls: Passed as positional arguments to the constructor of + ``cls``. + :param attrs: Passed as keyword arguments to the constructor of ``cls``. + """ + if cls is None: + cls = Option + + def decorator(f: FC) -> FC: + _param_memo(f, cls(param_decls, **attrs)) + return f + + return decorator + + +def confirmation_option(*param_decls: str, **kwargs: t.Any) -> t.Callable[[FC], FC]: + """Add a ``--yes`` option which shows a prompt before continuing if + not passed. If the prompt is declined, the program will exit. + + :param param_decls: One or more option names. Defaults to the single + value ``"--yes"``. + :param kwargs: Extra arguments are passed to :func:`option`. + """ + + def callback(ctx: Context, param: Parameter, value: bool) -> None: + if not value: + ctx.abort() + + if not param_decls: + param_decls = ("--yes",) + + kwargs.setdefault("is_flag", True) + kwargs.setdefault("callback", callback) + kwargs.setdefault("expose_value", False) + kwargs.setdefault("prompt", "Do you want to continue?") + kwargs.setdefault("help", "Confirm the action without prompting.") + return option(*param_decls, **kwargs) + + +def password_option(*param_decls: str, **kwargs: t.Any) -> t.Callable[[FC], FC]: + """Add a ``--password`` option which prompts for a password, hiding + input and asking to enter the value again for confirmation. + + :param param_decls: One or more option names. Defaults to the single + value ``"--password"``. + :param kwargs: Extra arguments are passed to :func:`option`. + """ + if not param_decls: + param_decls = ("--password",) + + kwargs.setdefault("prompt", True) + kwargs.setdefault("confirmation_prompt", True) + kwargs.setdefault("hide_input", True) + return option(*param_decls, **kwargs) + + +def version_option( + version: t.Optional[str] = None, + *param_decls: str, + package_name: t.Optional[str] = None, + prog_name: t.Optional[str] = None, + message: t.Optional[str] = None, + **kwargs: t.Any, +) -> t.Callable[[FC], FC]: + """Add a ``--version`` option which immediately prints the version + number and exits the program. + + If ``version`` is not provided, Click will try to detect it using + :func:`importlib.metadata.version` to get the version for the + ``package_name``. On Python < 3.8, the ``importlib_metadata`` + backport must be installed. + + If ``package_name`` is not provided, Click will try to detect it by + inspecting the stack frames. This will be used to detect the + version, so it must match the name of the installed package. + + :param version: The version number to show. 
If not provided, Click + will try to detect it. + :param param_decls: One or more option names. Defaults to the single + value ``"--version"``. + :param package_name: The package name to detect the version from. If + not provided, Click will try to detect it. + :param prog_name: The name of the CLI to show in the message. If not + provided, it will be detected from the command. + :param message: The message to show. The values ``%(prog)s``, + ``%(package)s``, and ``%(version)s`` are available. Defaults to + ``"%(prog)s, version %(version)s"``. + :param kwargs: Extra arguments are passed to :func:`option`. + :raise RuntimeError: ``version`` could not be detected. + + .. versionchanged:: 8.0 + Add the ``package_name`` parameter, and the ``%(package)s`` + value for messages. + + .. versionchanged:: 8.0 + Use :mod:`importlib.metadata` instead of ``pkg_resources``. The + version is detected based on the package name, not the entry + point name. The Python package name must match the installed + package name, or be passed with ``package_name=``. + """ + if message is None: + message = _("%(prog)s, version %(version)s") + + if version is None and package_name is None: + frame = inspect.currentframe() + f_back = frame.f_back if frame is not None else None + f_globals = f_back.f_globals if f_back is not None else None + # break reference cycle + # https://docs.python.org/3/library/inspect.html#the-interpreter-stack + del frame + + if f_globals is not None: + package_name = f_globals.get("__name__") + + if package_name == "__main__": + package_name = f_globals.get("__package__") + + if package_name: + package_name = package_name.partition(".")[0] + + def callback(ctx: Context, param: Parameter, value: bool) -> None: + if not value or ctx.resilient_parsing: + return + + nonlocal prog_name + nonlocal version + + if prog_name is None: + prog_name = ctx.find_root().info_name + + if version is None and package_name is not None: + metadata: t.Optional[types.ModuleType] + + try: + from importlib import metadata + except ImportError: + # Python < 3.8 + import importlib_metadata as metadata # type: ignore + + try: + version = metadata.version(package_name) # type: ignore + except metadata.PackageNotFoundError: # type: ignore + raise RuntimeError( + f"{package_name!r} is not installed. Try passing" + " 'package_name' instead." + ) from None + + if version is None: + raise RuntimeError( + f"Could not determine the version for {package_name!r} automatically." + ) + + echo( + message % {"prog": prog_name, "package": package_name, "version": version}, + color=ctx.color, + ) + ctx.exit() + + if not param_decls: + param_decls = ("--version",) + + kwargs.setdefault("is_flag", True) + kwargs.setdefault("expose_value", False) + kwargs.setdefault("is_eager", True) + kwargs.setdefault("help", _("Show the version and exit.")) + kwargs["callback"] = callback + return option(*param_decls, **kwargs) + + +class HelpOption(Option): + """Pre-configured ``--help`` option which immediately prints the help page + and exits the program. 
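+
+    Illustrative only (editor addition, not part of the original
+    docstring): attaching the option by hand is roughly what
+    :func:`help_option` does for you::
+
+        cmd = Command("demo", params=[HelpOption(["-h", "--help"])])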
+ """ + + def __init__( + self, + param_decls: t.Optional[t.Sequence[str]] = None, + **kwargs: t.Any, + ) -> None: + if not param_decls: + param_decls = ("--help",) + + kwargs.setdefault("is_flag", True) + kwargs.setdefault("expose_value", False) + kwargs.setdefault("is_eager", True) + kwargs.setdefault("help", _("Show this message and exit.")) + kwargs.setdefault("callback", self.show_help) + + super().__init__(param_decls, **kwargs) + + @staticmethod + def show_help(ctx: Context, param: Parameter, value: bool) -> None: + """Callback that print the help page on ```` and exits.""" + if value and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + +def help_option(*param_decls: str, **kwargs: t.Any) -> t.Callable[[FC], FC]: + """Decorator for the pre-configured ``--help`` option defined above. + + :param param_decls: One or more option names. Defaults to the single + value ``"--help"``. + :param kwargs: Extra arguments are passed to :func:`option`. + """ + kwargs.setdefault("cls", HelpOption) + return option(*param_decls, **kwargs) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/exceptions.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/exceptions.py new file mode 100644 index 000000000..0b8315166 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/exceptions.py @@ -0,0 +1,296 @@ +import typing as t +from gettext import gettext as _ +from gettext import ngettext + +from ._compat import get_text_stderr +from .globals import resolve_color_default +from .utils import echo +from .utils import format_filename + +if t.TYPE_CHECKING: + from .core import Command + from .core import Context + from .core import Parameter + + +def _join_param_hints( + param_hint: t.Optional[t.Union[t.Sequence[str], str]], +) -> t.Optional[str]: + if param_hint is not None and not isinstance(param_hint, str): + return " / ".join(repr(x) for x in param_hint) + + return param_hint + + +class ClickException(Exception): + """An exception that Click can handle and show to the user.""" + + #: The exit code for this exception. + exit_code = 1 + + def __init__(self, message: str) -> None: + super().__init__(message) + # The context will be removed by the time we print the message, so cache + # the color settings here to be used later on (in `show`) + self.show_color: t.Optional[bool] = resolve_color_default() + self.message = message + + def format_message(self) -> str: + return self.message + + def __str__(self) -> str: + return self.message + + def show(self, file: t.Optional[t.IO[t.Any]] = None) -> None: + if file is None: + file = get_text_stderr() + + echo( + _("Error: {message}").format(message=self.format_message()), + file=file, + color=self.show_color, + ) + + +class UsageError(ClickException): + """An internal exception that signals a usage error. This typically + aborts any further handling. + + :param message: the error message to display. + :param ctx: optionally the context that caused this error. Click will + fill in the context automatically in some situations. 
+ """ + + exit_code = 2 + + def __init__(self, message: str, ctx: t.Optional["Context"] = None) -> None: + super().__init__(message) + self.ctx = ctx + self.cmd: t.Optional[Command] = self.ctx.command if self.ctx else None + + def show(self, file: t.Optional[t.IO[t.Any]] = None) -> None: + if file is None: + file = get_text_stderr() + color = None + hint = "" + if ( + self.ctx is not None + and self.ctx.command.get_help_option(self.ctx) is not None + ): + hint = _("Try '{command} {option}' for help.").format( + command=self.ctx.command_path, option=self.ctx.help_option_names[0] + ) + hint = f"{hint}\n" + if self.ctx is not None: + color = self.ctx.color + echo(f"{self.ctx.get_usage()}\n{hint}", file=file, color=color) + echo( + _("Error: {message}").format(message=self.format_message()), + file=file, + color=color, + ) + + +class BadParameter(UsageError): + """An exception that formats out a standardized error message for a + bad parameter. This is useful when thrown from a callback or type as + Click will attach contextual information to it (for instance, which + parameter it is). + + .. versionadded:: 2.0 + + :param param: the parameter object that caused this error. This can + be left out, and Click will attach this info itself + if possible. + :param param_hint: a string that shows up as parameter name. This + can be used as alternative to `param` in cases + where custom validation should happen. If it is + a string it's used as such, if it's a list then + each item is quoted and separated. + """ + + def __init__( + self, + message: str, + ctx: t.Optional["Context"] = None, + param: t.Optional["Parameter"] = None, + param_hint: t.Optional[str] = None, + ) -> None: + super().__init__(message, ctx) + self.param = param + self.param_hint = param_hint + + def format_message(self) -> str: + if self.param_hint is not None: + param_hint = self.param_hint + elif self.param is not None: + param_hint = self.param.get_error_hint(self.ctx) # type: ignore + else: + return _("Invalid value: {message}").format(message=self.message) + + return _("Invalid value for {param_hint}: {message}").format( + param_hint=_join_param_hints(param_hint), message=self.message + ) + + +class MissingParameter(BadParameter): + """Raised if click required an option or argument but it was not + provided when invoking the script. + + .. versionadded:: 4.0 + + :param param_type: a string that indicates the type of the parameter. + The default is to inherit the parameter type from + the given `param`. Valid values are ``'parameter'``, + ``'option'`` or ``'argument'``. + """ + + def __init__( + self, + message: t.Optional[str] = None, + ctx: t.Optional["Context"] = None, + param: t.Optional["Parameter"] = None, + param_hint: t.Optional[str] = None, + param_type: t.Optional[str] = None, + ) -> None: + super().__init__(message or "", ctx, param, param_hint) + self.param_type = param_type + + def format_message(self) -> str: + if self.param_hint is not None: + param_hint: t.Optional[str] = self.param_hint + elif self.param is not None: + param_hint = self.param.get_error_hint(self.ctx) # type: ignore + else: + param_hint = None + + param_hint = _join_param_hints(param_hint) + param_hint = f" {param_hint}" if param_hint else "" + + param_type = self.param_type + if param_type is None and self.param is not None: + param_type = self.param.param_type_name + + msg = self.message + if self.param is not None: + msg_extra = self.param.type.get_missing_message(self.param) + if msg_extra: + if msg: + msg += f". 
{msg_extra}" + else: + msg = msg_extra + + msg = f" {msg}" if msg else "" + + # Translate param_type for known types. + if param_type == "argument": + missing = _("Missing argument") + elif param_type == "option": + missing = _("Missing option") + elif param_type == "parameter": + missing = _("Missing parameter") + else: + missing = _("Missing {param_type}").format(param_type=param_type) + + return f"{missing}{param_hint}.{msg}" + + def __str__(self) -> str: + if not self.message: + param_name = self.param.name if self.param else None + return _("Missing parameter: {param_name}").format(param_name=param_name) + else: + return self.message + + +class NoSuchOption(UsageError): + """Raised if click attempted to handle an option that does not + exist. + + .. versionadded:: 4.0 + """ + + def __init__( + self, + option_name: str, + message: t.Optional[str] = None, + possibilities: t.Optional[t.Sequence[str]] = None, + ctx: t.Optional["Context"] = None, + ) -> None: + if message is None: + message = _("No such option: {name}").format(name=option_name) + + super().__init__(message, ctx) + self.option_name = option_name + self.possibilities = possibilities + + def format_message(self) -> str: + if not self.possibilities: + return self.message + + possibility_str = ", ".join(sorted(self.possibilities)) + suggest = ngettext( + "Did you mean {possibility}?", + "(Possible options: {possibilities})", + len(self.possibilities), + ).format(possibility=possibility_str, possibilities=possibility_str) + return f"{self.message} {suggest}" + + +class BadOptionUsage(UsageError): + """Raised if an option is generally supplied but the use of the option + was incorrect. This is for instance raised if the number of arguments + for an option is not correct. + + .. versionadded:: 4.0 + + :param option_name: the name of the option being used incorrectly. + """ + + def __init__( + self, option_name: str, message: str, ctx: t.Optional["Context"] = None + ) -> None: + super().__init__(message, ctx) + self.option_name = option_name + + +class BadArgumentUsage(UsageError): + """Raised if an argument is generally supplied but the use of the argument + was incorrect. This is for instance raised if the number of values + for an argument is not correct. + + .. versionadded:: 6.0 + """ + + +class FileError(ClickException): + """Raised if a file cannot be opened.""" + + def __init__(self, filename: str, hint: t.Optional[str] = None) -> None: + if hint is None: + hint = _("unknown error") + + super().__init__(hint) + self.ui_filename: str = format_filename(filename) + self.filename = filename + + def format_message(self) -> str: + return _("Could not open file {filename!r}: {message}").format( + filename=self.ui_filename, message=self.message + ) + + +class Abort(RuntimeError): + """An internal signalling exception that signals Click to abort.""" + + +class Exit(RuntimeError): + """An exception that indicates that the application should exit with some + status code. + + :param code: the status code to exit with. 
+ """ + + __slots__ = ("exit_code",) + + def __init__(self, code: int = 0) -> None: + self.exit_code: int = code diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/formatting.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/formatting.py new file mode 100644 index 000000000..ddd2a2f82 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/formatting.py @@ -0,0 +1,301 @@ +import typing as t +from contextlib import contextmanager +from gettext import gettext as _ + +from ._compat import term_len +from .parser import split_opt + +# Can force a width. This is used by the test system +FORCED_WIDTH: t.Optional[int] = None + + +def measure_table(rows: t.Iterable[t.Tuple[str, str]]) -> t.Tuple[int, ...]: + widths: t.Dict[int, int] = {} + + for row in rows: + for idx, col in enumerate(row): + widths[idx] = max(widths.get(idx, 0), term_len(col)) + + return tuple(y for x, y in sorted(widths.items())) + + +def iter_rows( + rows: t.Iterable[t.Tuple[str, str]], col_count: int +) -> t.Iterator[t.Tuple[str, ...]]: + for row in rows: + yield row + ("",) * (col_count - len(row)) + + +def wrap_text( + text: str, + width: int = 78, + initial_indent: str = "", + subsequent_indent: str = "", + preserve_paragraphs: bool = False, +) -> str: + """A helper function that intelligently wraps text. By default, it + assumes that it operates on a single paragraph of text but if the + `preserve_paragraphs` parameter is provided it will intelligently + handle paragraphs (defined by two empty lines). + + If paragraphs are handled, a paragraph can be prefixed with an empty + line containing the ``\\b`` character (``\\x08``) to indicate that + no rewrapping should happen in that block. + + :param text: the text that should be rewrapped. + :param width: the maximum width for the text. + :param initial_indent: the initial indent that should be placed on the + first line as a string. + :param subsequent_indent: the indent string that should be placed on + each consecutive line. + :param preserve_paragraphs: if this flag is set then the wrapping will + intelligently handle paragraphs. + """ + from ._textwrap import TextWrapper + + text = text.expandtabs() + wrapper = TextWrapper( + width, + initial_indent=initial_indent, + subsequent_indent=subsequent_indent, + replace_whitespace=False, + ) + if not preserve_paragraphs: + return wrapper.fill(text) + + p: t.List[t.Tuple[int, bool, str]] = [] + buf: t.List[str] = [] + indent = None + + def _flush_par() -> None: + if not buf: + return + if buf[0].strip() == "\b": + p.append((indent or 0, True, "\n".join(buf[1:]))) + else: + p.append((indent or 0, False, " ".join(buf))) + del buf[:] + + for line in text.splitlines(): + if not line: + _flush_par() + indent = None + else: + if indent is None: + orig_len = term_len(line) + line = line.lstrip() + indent = orig_len - term_len(line) + buf.append(line) + _flush_par() + + rv = [] + for indent, raw, text in p: + with wrapper.extra_indent(" " * indent): + if raw: + rv.append(wrapper.indent_only(text)) + else: + rv.append(wrapper.fill(text)) + + return "\n\n".join(rv) + + +class HelpFormatter: + """This class helps with formatting text-based help pages. It's + usually just needed for very special internal cases, but it's also + exposed so that developers can write their own fancy outputs. + + At present, it always writes into memory. + + :param indent_increment: the additional increment for each level. + :param width: the width for the text. 
This defaults to the terminal + width clamped to a maximum of 78. + """ + + def __init__( + self, + indent_increment: int = 2, + width: t.Optional[int] = None, + max_width: t.Optional[int] = None, + ) -> None: + import shutil + + self.indent_increment = indent_increment + if max_width is None: + max_width = 80 + if width is None: + width = FORCED_WIDTH + if width is None: + width = max(min(shutil.get_terminal_size().columns, max_width) - 2, 50) + self.width = width + self.current_indent = 0 + self.buffer: t.List[str] = [] + + def write(self, string: str) -> None: + """Writes a unicode string into the internal buffer.""" + self.buffer.append(string) + + def indent(self) -> None: + """Increases the indentation.""" + self.current_indent += self.indent_increment + + def dedent(self) -> None: + """Decreases the indentation.""" + self.current_indent -= self.indent_increment + + def write_usage( + self, prog: str, args: str = "", prefix: t.Optional[str] = None + ) -> None: + """Writes a usage line into the buffer. + + :param prog: the program name. + :param args: whitespace separated list of arguments. + :param prefix: The prefix for the first line. Defaults to + ``"Usage: "``. + """ + if prefix is None: + prefix = f"{_('Usage:')} " + + usage_prefix = f"{prefix:>{self.current_indent}}{prog} " + text_width = self.width - self.current_indent + + if text_width >= (term_len(usage_prefix) + 20): + # The arguments will fit to the right of the prefix. + indent = " " * term_len(usage_prefix) + self.write( + wrap_text( + args, + text_width, + initial_indent=usage_prefix, + subsequent_indent=indent, + ) + ) + else: + # The prefix is too long, put the arguments on the next line. + self.write(usage_prefix) + self.write("\n") + indent = " " * (max(self.current_indent, term_len(prefix)) + 4) + self.write( + wrap_text( + args, text_width, initial_indent=indent, subsequent_indent=indent + ) + ) + + self.write("\n") + + def write_heading(self, heading: str) -> None: + """Writes a heading into the buffer.""" + self.write(f"{'':>{self.current_indent}}{heading}:\n") + + def write_paragraph(self) -> None: + """Writes a paragraph into the buffer.""" + if self.buffer: + self.write("\n") + + def write_text(self, text: str) -> None: + """Writes re-indented text into the buffer. This rewraps and + preserves paragraphs. + """ + indent = " " * self.current_indent + self.write( + wrap_text( + text, + self.width, + initial_indent=indent, + subsequent_indent=indent, + preserve_paragraphs=True, + ) + ) + self.write("\n") + + def write_dl( + self, + rows: t.Sequence[t.Tuple[str, str]], + col_max: int = 30, + col_spacing: int = 2, + ) -> None: + """Writes a definition list into the buffer. This is how options + and commands are usually formatted. + + :param rows: a list of two item tuples for the terms and values. + :param col_max: the maximum width of the first column. + :param col_spacing: the number of spaces between the first and + second column. 
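+
+        A small sketch (editor addition; the rows are invented, not part
+        of the original docstring)::
+
+            formatter.write_dl([
+                ("--name TEXT", "Name to greet."),
+                ("--help", "Show this message and exit."),
+            ])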
+ """ + rows = list(rows) + widths = measure_table(rows) + if len(widths) != 2: + raise TypeError("Expected two columns for definition list") + + first_col = min(widths[0], col_max) + col_spacing + + for first, second in iter_rows(rows, len(widths)): + self.write(f"{'':>{self.current_indent}}{first}") + if not second: + self.write("\n") + continue + if term_len(first) <= first_col - col_spacing: + self.write(" " * (first_col - term_len(first))) + else: + self.write("\n") + self.write(" " * (first_col + self.current_indent)) + + text_width = max(self.width - first_col - 2, 10) + wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True) + lines = wrapped_text.splitlines() + + if lines: + self.write(f"{lines[0]}\n") + + for line in lines[1:]: + self.write(f"{'':>{first_col + self.current_indent}}{line}\n") + else: + self.write("\n") + + @contextmanager + def section(self, name: str) -> t.Iterator[None]: + """Helpful context manager that writes a paragraph, a heading, + and the indents. + + :param name: the section name that is written as heading. + """ + self.write_paragraph() + self.write_heading(name) + self.indent() + try: + yield + finally: + self.dedent() + + @contextmanager + def indentation(self) -> t.Iterator[None]: + """A context manager that increases the indentation.""" + self.indent() + try: + yield + finally: + self.dedent() + + def getvalue(self) -> str: + """Returns the buffer contents.""" + return "".join(self.buffer) + + +def join_options(options: t.Sequence[str]) -> t.Tuple[str, bool]: + """Given a list of option strings this joins them in the most appropriate + way and returns them in the form ``(formatted_string, + any_prefix_is_slash)`` where the second item in the tuple is a flag that + indicates if any of the option prefixes was a slash. + """ + rv = [] + any_prefix_is_slash = False + + for opt in options: + prefix = split_opt(opt)[0] + + if prefix == "/": + any_prefix_is_slash = True + + rv.append((len(prefix), opt)) + + rv.sort(key=lambda x: x[0]) + return ", ".join(x[1] for x in rv), any_prefix_is_slash diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/globals.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/globals.py new file mode 100644 index 000000000..191e712db --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/globals.py @@ -0,0 +1,67 @@ +import typing as t +from threading import local + +if t.TYPE_CHECKING: + import typing_extensions as te + + from .core import Context + +_local = local() + + +@t.overload +def get_current_context(silent: "te.Literal[False]" = False) -> "Context": ... + + +@t.overload +def get_current_context(silent: bool = ...) -> t.Optional["Context"]: ... + + +def get_current_context(silent: bool = False) -> t.Optional["Context"]: + """Returns the current click context. This can be used as a way to + access the current context object from anywhere. This is a more implicit + alternative to the :func:`pass_context` decorator. This function is + primarily useful for helpers such as :func:`echo` which might be + interested in changing its behavior based on the current context. + + To push the current context, :meth:`Context.scope` can be used. + + .. versionadded:: 5.0 + + :param silent: if set to `True` the return value is `None` if no context + is available. The default behavior is to raise a + :exc:`RuntimeError`. 
+ """ + try: + return t.cast("Context", _local.stack[-1]) + except (AttributeError, IndexError) as e: + if not silent: + raise RuntimeError("There is no active click context.") from e + + return None + + +def push_context(ctx: "Context") -> None: + """Pushes a new context to the current stack.""" + _local.__dict__.setdefault("stack", []).append(ctx) + + +def pop_context() -> None: + """Removes the top level from the stack.""" + _local.stack.pop() + + +def resolve_color_default(color: t.Optional[bool] = None) -> t.Optional[bool]: + """Internal helper to get the default value of the color flag. If a + value is passed it's returned unchanged, otherwise it's looked up from + the current context. + """ + if color is not None: + return color + + ctx = get_current_context(silent=True) + + if ctx is not None: + return ctx.color + + return None diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/parser.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/parser.py new file mode 100644 index 000000000..600b8436d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/parser.py @@ -0,0 +1,531 @@ +""" +This module started out as largely a copy paste from the stdlib's +optparse module with the features removed that we do not need from +optparse because we implement them in Click on a higher level (for +instance type handling, help formatting and a lot more). + +The plan is to remove more and more from here over time. + +The reason this is a different module and not optparse from the stdlib +is that there are differences in 2.x and 3.x about the error messages +generated and optparse in the stdlib uses gettext for no good reason +and might cause us issues. + +Click uses parts of optparse written by Gregory P. Ward and maintained +by the Python Software Foundation. This is limited to code in parser.py. + +Copyright 2001-2006 Gregory P. Ward. All rights reserved. +Copyright 2002-2006 Python Software Foundation. All rights reserved. +""" + +# This code uses parts of optparse written by Gregory P. Ward and +# maintained by the Python Software Foundation. +# Copyright 2001-2006 Gregory P. Ward +# Copyright 2002-2006 Python Software Foundation +import typing as t +from collections import deque +from gettext import gettext as _ +from gettext import ngettext + +from .exceptions import BadArgumentUsage +from .exceptions import BadOptionUsage +from .exceptions import NoSuchOption +from .exceptions import UsageError + +if t.TYPE_CHECKING: + import typing_extensions as te + + from .core import Argument as CoreArgument + from .core import Context + from .core import Option as CoreOption + from .core import Parameter as CoreParameter + +V = t.TypeVar("V") + +# Sentinel value that indicates an option was passed as a flag without a +# value but is not a flag option. Option.consume_value uses this to +# prompt or use the flag_value. +_flag_needs_value = object() + + +def _unpack_args( + args: t.Sequence[str], nargs_spec: t.Sequence[int] +) -> t.Tuple[t.Sequence[t.Union[str, t.Sequence[t.Optional[str]], None]], t.List[str]]: + """Given an iterable of arguments and an iterable of nargs specifications, + it returns a tuple with all the unpacked arguments at the first index + and all remaining arguments as the second. + + The nargs specification is the number of arguments that should be consumed + or `-1` to indicate that this position should eat up all the remainders. + + Missing items are filled with `None`. 
+ """ + args = deque(args) + nargs_spec = deque(nargs_spec) + rv: t.List[t.Union[str, t.Tuple[t.Optional[str], ...], None]] = [] + spos: t.Optional[int] = None + + def _fetch(c: "te.Deque[V]") -> t.Optional[V]: + try: + if spos is None: + return c.popleft() + else: + return c.pop() + except IndexError: + return None + + while nargs_spec: + nargs = _fetch(nargs_spec) + + if nargs is None: + continue + + if nargs == 1: + rv.append(_fetch(args)) + elif nargs > 1: + x = [_fetch(args) for _ in range(nargs)] + + # If we're reversed, we're pulling in the arguments in reverse, + # so we need to turn them around. + if spos is not None: + x.reverse() + + rv.append(tuple(x)) + elif nargs < 0: + if spos is not None: + raise TypeError("Cannot have two nargs < 0") + + spos = len(rv) + rv.append(None) + + # spos is the position of the wildcard (star). If it's not `None`, + # we fill it with the remainder. + if spos is not None: + rv[spos] = tuple(args) + args = [] + rv[spos + 1 :] = reversed(rv[spos + 1 :]) + + return tuple(rv), list(args) + + +def split_opt(opt: str) -> t.Tuple[str, str]: + first = opt[:1] + if first.isalnum(): + return "", opt + if opt[1:2] == first: + return opt[:2], opt[2:] + return first, opt[1:] + + +def normalize_opt(opt: str, ctx: t.Optional["Context"]) -> str: + if ctx is None or ctx.token_normalize_func is None: + return opt + prefix, opt = split_opt(opt) + return f"{prefix}{ctx.token_normalize_func(opt)}" + + +def split_arg_string(string: str) -> t.List[str]: + """Split an argument string as with :func:`shlex.split`, but don't + fail if the string is incomplete. Ignores a missing closing quote or + incomplete escape sequence and uses the partial token as-is. + + .. code-block:: python + + split_arg_string("example 'my file") + ["example", "my file"] + + split_arg_string("example my\\") + ["example", "my"] + + :param string: String to split. + """ + import shlex + + lex = shlex.shlex(string, posix=True) + lex.whitespace_split = True + lex.commenters = "" + out = [] + + try: + for token in lex: + out.append(token) + except ValueError: + # Raised when end-of-string is reached in an invalid state. Use + # the partial token as-is. The quote or escape character is in + # lex.state, not lex.token. 
+        out.append(lex.token)
+
+    return out
+
+
+class Option:
+    def __init__(
+        self,
+        obj: "CoreOption",
+        opts: t.Sequence[str],
+        dest: t.Optional[str],
+        action: t.Optional[str] = None,
+        nargs: int = 1,
+        const: t.Optional[t.Any] = None,
+    ):
+        self._short_opts = []
+        self._long_opts = []
+        self.prefixes: t.Set[str] = set()
+
+        for opt in opts:
+            prefix, value = split_opt(opt)
+            if not prefix:
+                raise ValueError(f"Invalid start character for option ({opt})")
+            self.prefixes.add(prefix[0])
+            if len(prefix) == 1 and len(value) == 1:
+                self._short_opts.append(opt)
+            else:
+                self._long_opts.append(opt)
+                self.prefixes.add(prefix)
+
+        if action is None:
+            action = "store"
+
+        self.dest = dest
+        self.action = action
+        self.nargs = nargs
+        self.const = const
+        self.obj = obj
+
+    @property
+    def takes_value(self) -> bool:
+        return self.action in ("store", "append")
+
+    def process(self, value: t.Any, state: "ParsingState") -> None:
+        if self.action == "store":
+            state.opts[self.dest] = value  # type: ignore
+        elif self.action == "store_const":
+            state.opts[self.dest] = self.const  # type: ignore
+        elif self.action == "append":
+            state.opts.setdefault(self.dest, []).append(value)  # type: ignore
+        elif self.action == "append_const":
+            state.opts.setdefault(self.dest, []).append(self.const)  # type: ignore
+        elif self.action == "count":
+            state.opts[self.dest] = state.opts.get(self.dest, 0) + 1  # type: ignore
+        else:
+            raise ValueError(f"unknown action '{self.action}'")
+        state.order.append(self.obj)
+
+
+class Argument:
+    def __init__(self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1):
+        self.dest = dest
+        self.nargs = nargs
+        self.obj = obj
+
+    def process(
+        self,
+        value: t.Union[t.Optional[str], t.Sequence[t.Optional[str]]],
+        state: "ParsingState",
+    ) -> None:
+        if self.nargs > 1:
+            assert value is not None
+            holes = sum(1 for x in value if x is None)
+            if holes == len(value):
+                value = None
+            elif holes != 0:
+                raise BadArgumentUsage(
+                    _("Argument {name!r} takes {nargs} values.").format(
+                        name=self.dest, nargs=self.nargs
+                    )
+                )
+
+        if self.nargs == -1 and self.obj.envvar is not None and value == ():
+            # Replace empty tuple with None so that a value from the
+            # environment may be tried.
+            value = None
+
+        state.opts[self.dest] = value  # type: ignore
+        state.order.append(self.obj)
+
+
+class ParsingState:
+    def __init__(self, rargs: t.List[str]) -> None:
+        self.opts: t.Dict[str, t.Any] = {}
+        self.largs: t.List[str] = []
+        self.rargs = rargs
+        self.order: t.List["CoreParameter"] = []
+
+
+class OptionParser:
+    """The option parser is an internal class that is ultimately used to
+    parse options and arguments. It's modelled after optparse and brings
+    a similar but vastly simplified API. It should generally not be used
+    directly as the high level Click classes wrap it for you.
+
+    It's not nearly as extensible as optparse or argparse as it does not
+    implement features that are implemented on a higher level (such as
+    types or defaults).
+
+    :param ctx: optionally the :class:`~click.Context` that this parser
+        should be bound to.
+    """
+
+    def __init__(self, ctx: t.Optional["Context"] = None) -> None:
+        #: The :class:`~click.Context` for this parser. This might be
+        #: `None` for some advanced use cases.
+        self.ctx = ctx
+        #: This controls how the parser deals with interspersed arguments.
+        #: If this is set to `False`, the parser will stop on the first
+        #: non-option. Click uses this to implement nested subcommands
+        #: safely.
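+        #: For example (hypothetical command line), with this set to
+        #: `False` parsing ``--verbose build --fast`` stops at ``build``
+        #: and leaves ``build --fast`` for a subcommand to handle.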
+ self.allow_interspersed_args: bool = True + #: This tells the parser how to deal with unknown options. By + #: default it will error out (which is sensible), but there is a + #: second mode where it will ignore it and continue processing + #: after shifting all the unknown options into the resulting args. + self.ignore_unknown_options: bool = False + + if ctx is not None: + self.allow_interspersed_args = ctx.allow_interspersed_args + self.ignore_unknown_options = ctx.ignore_unknown_options + + self._short_opt: t.Dict[str, Option] = {} + self._long_opt: t.Dict[str, Option] = {} + self._opt_prefixes = {"-", "--"} + self._args: t.List[Argument] = [] + + def add_option( + self, + obj: "CoreOption", + opts: t.Sequence[str], + dest: t.Optional[str], + action: t.Optional[str] = None, + nargs: int = 1, + const: t.Optional[t.Any] = None, + ) -> None: + """Adds a new option named `dest` to the parser. The destination + is not inferred (unlike with optparse) and needs to be explicitly + provided. Action can be any of ``store``, ``store_const``, + ``append``, ``append_const`` or ``count``. + + The `obj` can be used to identify the option in the order list + that is returned from the parser. + """ + opts = [normalize_opt(opt, self.ctx) for opt in opts] + option = Option(obj, opts, dest, action=action, nargs=nargs, const=const) + self._opt_prefixes.update(option.prefixes) + for opt in option._short_opts: + self._short_opt[opt] = option + for opt in option._long_opts: + self._long_opt[opt] = option + + def add_argument( + self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1 + ) -> None: + """Adds a positional argument named `dest` to the parser. + + The `obj` can be used to identify the option in the order list + that is returned from the parser. + """ + self._args.append(Argument(obj, dest=dest, nargs=nargs)) + + def parse_args( + self, args: t.List[str] + ) -> t.Tuple[t.Dict[str, t.Any], t.List[str], t.List["CoreParameter"]]: + """Parses positional arguments and returns ``(values, args, order)`` + for the parsed options and arguments as well as the leftover + arguments if there are any. The order is a list of objects as they + appear on the command line. If arguments appear multiple times they + will be memorized multiple times as well. + """ + state = ParsingState(args) + try: + self._process_args_for_options(state) + self._process_args_for_args(state) + except UsageError: + if self.ctx is None or not self.ctx.resilient_parsing: + raise + return state.opts, state.largs, state.order + + def _process_args_for_args(self, state: ParsingState) -> None: + pargs, args = _unpack_args( + state.largs + state.rargs, [x.nargs for x in self._args] + ) + + for idx, arg in enumerate(self._args): + arg.process(pargs[idx], state) + + state.largs = args + state.rargs = [] + + def _process_args_for_options(self, state: ParsingState) -> None: + while state.rargs: + arg = state.rargs.pop(0) + arglen = len(arg) + # Double dashes always handled explicitly regardless of what + # prefixes are valid. + if arg == "--": + return + elif arg[:1] in self._opt_prefixes and arglen > 1: + self._process_opts(arg, state) + elif self.allow_interspersed_args: + state.largs.append(arg) + else: + state.rargs.insert(0, arg) + return + + # Say this is the original argument list: + # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)] + # ^ + # (we are about to process arg(i)). 
+ # + # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of + # [arg0, ..., arg(i-1)] (any options and their arguments will have + # been removed from largs). + # + # The while loop will usually consume 1 or more arguments per pass. + # If it consumes 1 (eg. arg is an option that takes no arguments), + # then after _process_arg() is done the situation is: + # + # largs = subset of [arg0, ..., arg(i)] + # rargs = [arg(i+1), ..., arg(N-1)] + # + # If allow_interspersed_args is false, largs will always be + # *empty* -- still a subset of [arg0, ..., arg(i-1)], but + # not a very interesting subset! + + def _match_long_opt( + self, opt: str, explicit_value: t.Optional[str], state: ParsingState + ) -> None: + if opt not in self._long_opt: + from difflib import get_close_matches + + possibilities = get_close_matches(opt, self._long_opt) + raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx) + + option = self._long_opt[opt] + if option.takes_value: + # At this point it's safe to modify rargs by injecting the + # explicit value, because no exception is raised in this + # branch. This means that the inserted value will be fully + # consumed. + if explicit_value is not None: + state.rargs.insert(0, explicit_value) + + value = self._get_value_from_state(opt, option, state) + + elif explicit_value is not None: + raise BadOptionUsage( + opt, _("Option {name!r} does not take a value.").format(name=opt) + ) + + else: + value = None + + option.process(value, state) + + def _match_short_opt(self, arg: str, state: ParsingState) -> None: + stop = False + i = 1 + prefix = arg[0] + unknown_options = [] + + for ch in arg[1:]: + opt = normalize_opt(f"{prefix}{ch}", self.ctx) + option = self._short_opt.get(opt) + i += 1 + + if not option: + if self.ignore_unknown_options: + unknown_options.append(ch) + continue + raise NoSuchOption(opt, ctx=self.ctx) + if option.takes_value: + # Any characters left in arg? Pretend they're the + # next arg, and stop consuming characters of arg. + if i < len(arg): + state.rargs.insert(0, arg[i:]) + stop = True + + value = self._get_value_from_state(opt, option, state) + + else: + value = None + + option.process(value, state) + + if stop: + break + + # If we got any unknown options we recombine the string of the + # remaining options and re-attach the prefix, then report that + # to the state as new larg. This way there is basic combinatorics + # that can be achieved while still ignoring unknown arguments. + if self.ignore_unknown_options and unknown_options: + state.largs.append(f"{prefix}{''.join(unknown_options)}") + + def _get_value_from_state( + self, option_name: str, option: Option, state: ParsingState + ) -> t.Any: + nargs = option.nargs + + if len(state.rargs) < nargs: + if option.obj._flag_needs_value: + # Option allows omitting the value. + value = _flag_needs_value + else: + raise BadOptionUsage( + option_name, + ngettext( + "Option {name!r} requires an argument.", + "Option {name!r} requires {nargs} arguments.", + nargs, + ).format(name=option_name, nargs=nargs), + ) + elif nargs == 1: + next_rarg = state.rargs[0] + + if ( + option.obj._flag_needs_value + and isinstance(next_rarg, str) + and next_rarg[:1] in self._opt_prefixes + and len(next_rarg) > 1 + ): + # The next arg looks like the start of an option, don't + # use it as the value if omitting the value is allowed. 
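+                # For example (hypothetical options), "--name --other"
+                # parses "--other" as its own option instead of consuming
+                # it as the value for "--name".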
+ value = _flag_needs_value + else: + value = state.rargs.pop(0) + else: + value = tuple(state.rargs[:nargs]) + del state.rargs[:nargs] + + return value + + def _process_opts(self, arg: str, state: ParsingState) -> None: + explicit_value = None + # Long option handling happens in two parts. The first part is + # supporting explicitly attached values. In any case, we will try + # to long match the option first. + if "=" in arg: + long_opt, explicit_value = arg.split("=", 1) + else: + long_opt = arg + norm_long_opt = normalize_opt(long_opt, self.ctx) + + # At this point we will match the (assumed) long option through + # the long option matching code. Note that this allows options + # like "-foo" to be matched as long options. + try: + self._match_long_opt(norm_long_opt, explicit_value, state) + except NoSuchOption: + # At this point the long option matching failed, and we need + # to try with short options. However there is a special rule + # which says, that if we have a two character options prefix + # (applies to "--foo" for instance), we do not dispatch to the + # short option code and will instead raise the no option + # error. + if arg[:2] not in self._opt_prefixes: + self._match_short_opt(arg, state) + return + + if not self.ignore_unknown_options: + raise + + state.largs.append(arg) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/py.typed b/staybuddy/server/venv/lib/python3.10/site-packages/click/py.typed new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/shell_completion.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/shell_completion.py new file mode 100644 index 000000000..07d0f09ba --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/shell_completion.py @@ -0,0 +1,603 @@ +import os +import re +import typing as t +from gettext import gettext as _ + +from .core import Argument +from .core import BaseCommand +from .core import Context +from .core import MultiCommand +from .core import Option +from .core import Parameter +from .core import ParameterSource +from .parser import split_arg_string +from .utils import echo + + +def shell_complete( + cli: BaseCommand, + ctx_args: t.MutableMapping[str, t.Any], + prog_name: str, + complete_var: str, + instruction: str, +) -> int: + """Perform shell completion for the given CLI program. + + :param cli: Command being called. + :param ctx_args: Extra arguments to pass to + ``cli.make_context``. + :param prog_name: Name of the executable in the shell. + :param complete_var: Name of the environment variable that holds + the completion instruction. + :param instruction: Value of ``complete_var`` with the completion + instruction and shell, in the form ``instruction_shell``. + :return: Status code to exit with. + """ + shell, _, instruction = instruction.partition("_") + comp_cls = get_completion_class(shell) + + if comp_cls is None: + return 1 + + comp = comp_cls(cli, ctx_args, prog_name, complete_var) + + if instruction == "source": + echo(comp.source()) + return 0 + + if instruction == "complete": + echo(comp.complete()) + return 0 + + return 1 + + +class CompletionItem: + """Represents a completion value and metadata about the value. The + default metadata is ``type`` to indicate special shell handling, + and ``help`` if a shell supports showing a help string next to the + value. + + Arbitrary parameters can be passed when creating the object, and + accessed using ``item.attr``. 
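+    For example, ``CompletionItem("main.py", tag="source").tag``
+    evaluates to ``"source"`` (the ``tag`` metadata here is made up).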
If an attribute wasn't passed, + accessing it returns ``None``. + + :param value: The completion suggestion. + :param type: Tells the shell script to provide special completion + support for the type. Click uses ``"dir"`` and ``"file"``. + :param help: String shown next to the value if supported. + :param kwargs: Arbitrary metadata. The built-in implementations + don't use this, but custom type completions paired with custom + shell support could use it. + """ + + __slots__ = ("value", "type", "help", "_info") + + def __init__( + self, + value: t.Any, + type: str = "plain", + help: t.Optional[str] = None, + **kwargs: t.Any, + ) -> None: + self.value: t.Any = value + self.type: str = type + self.help: t.Optional[str] = help + self._info = kwargs + + def __getattr__(self, name: str) -> t.Any: + return self._info.get(name) + + +# Only Bash >= 4.4 has the nosort option. +_SOURCE_BASH = """\ +%(complete_func)s() { + local IFS=$'\\n' + local response + + response=$(env COMP_WORDS="${COMP_WORDS[*]}" COMP_CWORD=$COMP_CWORD \ +%(complete_var)s=bash_complete $1) + + for completion in $response; do + IFS=',' read type value <<< "$completion" + + if [[ $type == 'dir' ]]; then + COMPREPLY=() + compopt -o dirnames + elif [[ $type == 'file' ]]; then + COMPREPLY=() + compopt -o default + elif [[ $type == 'plain' ]]; then + COMPREPLY+=($value) + fi + done + + return 0 +} + +%(complete_func)s_setup() { + complete -o nosort -F %(complete_func)s %(prog_name)s +} + +%(complete_func)s_setup; +""" + +_SOURCE_ZSH = """\ +#compdef %(prog_name)s + +%(complete_func)s() { + local -a completions + local -a completions_with_descriptions + local -a response + (( ! $+commands[%(prog_name)s] )) && return 1 + + response=("${(@f)$(env COMP_WORDS="${words[*]}" COMP_CWORD=$((CURRENT-1)) \ +%(complete_var)s=zsh_complete %(prog_name)s)}") + + for type key descr in ${response}; do + if [[ "$type" == "plain" ]]; then + if [[ "$descr" == "_" ]]; then + completions+=("$key") + else + completions_with_descriptions+=("$key":"$descr") + fi + elif [[ "$type" == "dir" ]]; then + _path_files -/ + elif [[ "$type" == "file" ]]; then + _path_files -f + fi + done + + if [ -n "$completions_with_descriptions" ]; then + _describe -V unsorted completions_with_descriptions -U + fi + + if [ -n "$completions" ]; then + compadd -U -V unsorted -a completions + fi +} + +if [[ $zsh_eval_context[-1] == loadautofunc ]]; then + # autoload from fpath, call function directly + %(complete_func)s "$@" +else + # eval/source/. command, register function for later + compdef %(complete_func)s %(prog_name)s +fi +""" + +_SOURCE_FISH = """\ +function %(complete_func)s; + set -l response (env %(complete_var)s=fish_complete COMP_WORDS=(commandline -cp) \ +COMP_CWORD=(commandline -t) %(prog_name)s); + + for completion in $response; + set -l metadata (string split "," $completion); + + if test $metadata[1] = "dir"; + __fish_complete_directories $metadata[2]; + else if test $metadata[1] = "file"; + __fish_complete_path $metadata[2]; + else if test $metadata[1] = "plain"; + echo $metadata[2]; + end; + end; +end; + +complete --no-files --command %(prog_name)s --arguments \ +"(%(complete_func)s)"; +""" + + +class ShellComplete: + """Base class for providing shell completion support. A subclass for + a given shell will override attributes and methods to implement the + completion instructions (``source`` and ``complete``). + + :param cli: Command being called. + :param prog_name: Name of the executable in the shell. 
+ :param complete_var: Name of the environment variable that holds + the completion instruction. + + .. versionadded:: 8.0 + """ + + name: t.ClassVar[str] + """Name to register the shell as with :func:`add_completion_class`. + This is used in completion instructions (``{name}_source`` and + ``{name}_complete``). + """ + + source_template: t.ClassVar[str] + """Completion script template formatted by :meth:`source`. This must + be provided by subclasses. + """ + + def __init__( + self, + cli: BaseCommand, + ctx_args: t.MutableMapping[str, t.Any], + prog_name: str, + complete_var: str, + ) -> None: + self.cli = cli + self.ctx_args = ctx_args + self.prog_name = prog_name + self.complete_var = complete_var + + @property + def func_name(self) -> str: + """The name of the shell function defined by the completion + script. + """ + safe_name = re.sub(r"\W*", "", self.prog_name.replace("-", "_"), flags=re.ASCII) + return f"_{safe_name}_completion" + + def source_vars(self) -> t.Dict[str, t.Any]: + """Vars for formatting :attr:`source_template`. + + By default this provides ``complete_func``, ``complete_var``, + and ``prog_name``. + """ + return { + "complete_func": self.func_name, + "complete_var": self.complete_var, + "prog_name": self.prog_name, + } + + def source(self) -> str: + """Produce the shell script that defines the completion + function. By default this ``%``-style formats + :attr:`source_template` with the dict returned by + :meth:`source_vars`. + """ + return self.source_template % self.source_vars() + + def get_completion_args(self) -> t.Tuple[t.List[str], str]: + """Use the env vars defined by the shell script to return a + tuple of ``args, incomplete``. This must be implemented by + subclasses. + """ + raise NotImplementedError + + def get_completions( + self, args: t.List[str], incomplete: str + ) -> t.List[CompletionItem]: + """Determine the context and last complete command or parameter + from the complete args. Call that object's ``shell_complete`` + method to get the completions for the incomplete value. + + :param args: List of complete args before the incomplete value. + :param incomplete: Value being completed. May be empty. + """ + ctx = _resolve_context(self.cli, self.ctx_args, self.prog_name, args) + obj, incomplete = _resolve_incomplete(ctx, args, incomplete) + return obj.shell_complete(ctx, incomplete) + + def format_completion(self, item: CompletionItem) -> str: + """Format a completion item into the form recognized by the + shell script. This must be implemented by subclasses. + + :param item: Completion item to format. + """ + raise NotImplementedError + + def complete(self) -> str: + """Produce the completion data to send back to the shell. + + By default this calls :meth:`get_completion_args`, gets the + completions, then calls :meth:`format_completion` for each + completion. 
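+
+        For the Bash implementation below, for instance, the result is
+        one ``type,value`` pair per line, e.g. ``plain,build`` followed
+        by ``plain,serve`` (illustrative values).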
+ """ + args, incomplete = self.get_completion_args() + completions = self.get_completions(args, incomplete) + out = [self.format_completion(item) for item in completions] + return "\n".join(out) + + +class BashComplete(ShellComplete): + """Shell completion for Bash.""" + + name = "bash" + source_template = _SOURCE_BASH + + @staticmethod + def _check_version() -> None: + import shutil + import subprocess + + bash_exe = shutil.which("bash") + + if bash_exe is None: + match = None + else: + output = subprocess.run( + [bash_exe, "--norc", "-c", 'echo "${BASH_VERSION}"'], + stdout=subprocess.PIPE, + ) + match = re.search(r"^(\d+)\.(\d+)\.\d+", output.stdout.decode()) + + if match is not None: + major, minor = match.groups() + + if major < "4" or major == "4" and minor < "4": + echo( + _( + "Shell completion is not supported for Bash" + " versions older than 4.4." + ), + err=True, + ) + else: + echo( + _("Couldn't detect Bash version, shell completion is not supported."), + err=True, + ) + + def source(self) -> str: + self._check_version() + return super().source() + + def get_completion_args(self) -> t.Tuple[t.List[str], str]: + cwords = split_arg_string(os.environ["COMP_WORDS"]) + cword = int(os.environ["COMP_CWORD"]) + args = cwords[1:cword] + + try: + incomplete = cwords[cword] + except IndexError: + incomplete = "" + + return args, incomplete + + def format_completion(self, item: CompletionItem) -> str: + return f"{item.type},{item.value}" + + +class ZshComplete(ShellComplete): + """Shell completion for Zsh.""" + + name = "zsh" + source_template = _SOURCE_ZSH + + def get_completion_args(self) -> t.Tuple[t.List[str], str]: + cwords = split_arg_string(os.environ["COMP_WORDS"]) + cword = int(os.environ["COMP_CWORD"]) + args = cwords[1:cword] + + try: + incomplete = cwords[cword] + except IndexError: + incomplete = "" + + return args, incomplete + + def format_completion(self, item: CompletionItem) -> str: + return f"{item.type}\n{item.value}\n{item.help if item.help else '_'}" + + +class FishComplete(ShellComplete): + """Shell completion for Fish.""" + + name = "fish" + source_template = _SOURCE_FISH + + def get_completion_args(self) -> t.Tuple[t.List[str], str]: + cwords = split_arg_string(os.environ["COMP_WORDS"]) + incomplete = os.environ["COMP_CWORD"] + args = cwords[1:] + + # Fish stores the partial word in both COMP_WORDS and + # COMP_CWORD, remove it from complete args. + if incomplete and args and args[-1] == incomplete: + args.pop() + + return args, incomplete + + def format_completion(self, item: CompletionItem) -> str: + if item.help: + return f"{item.type},{item.value}\t{item.help}" + + return f"{item.type},{item.value}" + + +ShellCompleteType = t.TypeVar("ShellCompleteType", bound=t.Type[ShellComplete]) + + +_available_shells: t.Dict[str, t.Type[ShellComplete]] = { + "bash": BashComplete, + "fish": FishComplete, + "zsh": ZshComplete, +} + + +def add_completion_class( + cls: ShellCompleteType, name: t.Optional[str] = None +) -> ShellCompleteType: + """Register a :class:`ShellComplete` subclass under the given name. + The name will be provided by the completion instruction environment + variable during completion. + + :param cls: The completion class that will handle completion for the + shell. + :param name: Name to register the class under. Defaults to the + class's ``name`` attribute. 
+ """ + if name is None: + name = cls.name + + _available_shells[name] = cls + + return cls + + +def get_completion_class(shell: str) -> t.Optional[t.Type[ShellComplete]]: + """Look up a registered :class:`ShellComplete` subclass by the name + provided by the completion instruction environment variable. If the + name isn't registered, returns ``None``. + + :param shell: Name the class is registered under. + """ + return _available_shells.get(shell) + + +def _is_incomplete_argument(ctx: Context, param: Parameter) -> bool: + """Determine if the given parameter is an argument that can still + accept values. + + :param ctx: Invocation context for the command represented by the + parsed complete args. + :param param: Argument object being checked. + """ + if not isinstance(param, Argument): + return False + + assert param.name is not None + # Will be None if expose_value is False. + value = ctx.params.get(param.name) + return ( + param.nargs == -1 + or ctx.get_parameter_source(param.name) is not ParameterSource.COMMANDLINE + or ( + param.nargs > 1 + and isinstance(value, (tuple, list)) + and len(value) < param.nargs + ) + ) + + +def _start_of_option(ctx: Context, value: str) -> bool: + """Check if the value looks like the start of an option.""" + if not value: + return False + + c = value[0] + return c in ctx._opt_prefixes + + +def _is_incomplete_option(ctx: Context, args: t.List[str], param: Parameter) -> bool: + """Determine if the given parameter is an option that needs a value. + + :param args: List of complete args before the incomplete value. + :param param: Option object being checked. + """ + if not isinstance(param, Option): + return False + + if param.is_flag or param.count: + return False + + last_option = None + + for index, arg in enumerate(reversed(args)): + if index + 1 > param.nargs: + break + + if _start_of_option(ctx, arg): + last_option = arg + + return last_option is not None and last_option in param.opts + + +def _resolve_context( + cli: BaseCommand, + ctx_args: t.MutableMapping[str, t.Any], + prog_name: str, + args: t.List[str], +) -> Context: + """Produce the context hierarchy starting with the command and + traversing the complete arguments. This only follows the commands, + it doesn't trigger input prompts or callbacks. + + :param cli: Command being called. + :param prog_name: Name of the executable in the shell. + :param args: List of complete args before the incomplete value. + """ + ctx_args["resilient_parsing"] = True + ctx = cli.make_context(prog_name, args.copy(), **ctx_args) + args = ctx.protected_args + ctx.args + + while args: + command = ctx.command + + if isinstance(command, MultiCommand): + if not command.chain: + name, cmd, args = command.resolve_command(ctx, args) + + if cmd is None: + return ctx + + ctx = cmd.make_context(name, args, parent=ctx, resilient_parsing=True) + args = ctx.protected_args + ctx.args + else: + sub_ctx = ctx + + while args: + name, cmd, args = command.resolve_command(ctx, args) + + if cmd is None: + return ctx + + sub_ctx = cmd.make_context( + name, + args, + parent=ctx, + allow_extra_args=True, + allow_interspersed_args=False, + resilient_parsing=True, + ) + args = sub_ctx.args + + ctx = sub_ctx + args = [*sub_ctx.protected_args, *sub_ctx.args] + else: + break + + return ctx + + +def _resolve_incomplete( + ctx: Context, args: t.List[str], incomplete: str +) -> t.Tuple[t.Union[BaseCommand, Parameter], str]: + """Find the Click object that will handle the completion of the + incomplete value. 
Return the object and the incomplete value. + + :param ctx: Invocation context for the command represented by + the parsed complete args. + :param args: List of complete args before the incomplete value. + :param incomplete: Value being completed. May be empty. + """ + # Different shells treat an "=" between a long option name and + # value differently. Might keep the value joined, return the "=" + # as a separate item, or return the split name and value. Always + # split and discard the "=" to make completion easier. + if incomplete == "=": + incomplete = "" + elif "=" in incomplete and _start_of_option(ctx, incomplete): + name, _, incomplete = incomplete.partition("=") + args.append(name) + + # The "--" marker tells Click to stop treating values as options + # even if they start with the option character. If it hasn't been + # given and the incomplete arg looks like an option, the current + # command will provide option name completions. + if "--" not in args and _start_of_option(ctx, incomplete): + return ctx.command, incomplete + + params = ctx.command.get_params(ctx) + + # If the last complete arg is an option name with an incomplete + # value, the option will provide value completions. + for param in params: + if _is_incomplete_option(ctx, args, param): + return param, incomplete + + # It's not an option name or value. The first argument without a + # parsed value will provide value completions. + for param in params: + if _is_incomplete_argument(ctx, param): + return param, incomplete + + # There were no unparsed arguments, the command may be a group that + # will provide command name completions. + return ctx.command, incomplete diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/termui.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/termui.py new file mode 100644 index 000000000..c084f1965 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/termui.py @@ -0,0 +1,784 @@ +import inspect +import io +import itertools +import sys +import typing as t +from gettext import gettext as _ + +from ._compat import isatty +from ._compat import strip_ansi +from .exceptions import Abort +from .exceptions import UsageError +from .globals import resolve_color_default +from .types import Choice +from .types import convert_type +from .types import ParamType +from .utils import echo +from .utils import LazyFile + +if t.TYPE_CHECKING: + from ._termui_impl import ProgressBar + +V = t.TypeVar("V") + +# The prompt functions to use. The doc tools currently override these +# functions to customize how they work. 
+visible_prompt_func: t.Callable[[str], str] = input + +_ansi_colors = { + "black": 30, + "red": 31, + "green": 32, + "yellow": 33, + "blue": 34, + "magenta": 35, + "cyan": 36, + "white": 37, + "reset": 39, + "bright_black": 90, + "bright_red": 91, + "bright_green": 92, + "bright_yellow": 93, + "bright_blue": 94, + "bright_magenta": 95, + "bright_cyan": 96, + "bright_white": 97, +} +_ansi_reset_all = "\033[0m" + + +def hidden_prompt_func(prompt: str) -> str: + import getpass + + return getpass.getpass(prompt) + + +def _build_prompt( + text: str, + suffix: str, + show_default: bool = False, + default: t.Optional[t.Any] = None, + show_choices: bool = True, + type: t.Optional[ParamType] = None, +) -> str: + prompt = text + if type is not None and show_choices and isinstance(type, Choice): + prompt += f" ({', '.join(map(str, type.choices))})" + if default is not None and show_default: + prompt = f"{prompt} [{_format_default(default)}]" + return f"{prompt}{suffix}" + + +def _format_default(default: t.Any) -> t.Any: + if isinstance(default, (io.IOBase, LazyFile)) and hasattr(default, "name"): + return default.name + + return default + + +def prompt( + text: str, + default: t.Optional[t.Any] = None, + hide_input: bool = False, + confirmation_prompt: t.Union[bool, str] = False, + type: t.Optional[t.Union[ParamType, t.Any]] = None, + value_proc: t.Optional[t.Callable[[str], t.Any]] = None, + prompt_suffix: str = ": ", + show_default: bool = True, + err: bool = False, + show_choices: bool = True, +) -> t.Any: + """Prompts a user for input. This is a convenience function that can + be used to prompt a user for input later. + + If the user aborts the input by sending an interrupt signal, this + function will catch it and raise a :exc:`Abort` exception. + + :param text: the text to show for the prompt. + :param default: the default value to use if no input happens. If this + is not given it will prompt until it's aborted. + :param hide_input: if this is set to true then the input value will + be hidden. + :param confirmation_prompt: Prompt a second time to confirm the + value. Can be set to a string instead of ``True`` to customize + the message. + :param type: the type to use to check the value against. + :param value_proc: if this parameter is provided it's a function that + is invoked instead of the type conversion to + convert a value. + :param prompt_suffix: a suffix that should be added to the prompt. + :param show_default: shows or hides the default value in the prompt. + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``, the same as with echo. + :param show_choices: Show or hide choices if the passed type is a Choice. + For example if type is a Choice of either day or week, + show_choices is true and text is "Group by" then the + prompt will be "Group by (day, week): ". + + .. versionadded:: 8.0 + ``confirmation_prompt`` can be a custom string. + + .. versionadded:: 7.0 + Added the ``show_choices`` parameter. + + .. versionadded:: 6.0 + Added unicode support for cmd.exe on Windows. + + .. versionadded:: 4.0 + Added the `err` parameter. + + """ + + def prompt_func(text: str) -> str: + f = hidden_prompt_func if hide_input else visible_prompt_func + try: + # Write the prompt separately so that we get nice + # coloring through colorama on Windows + echo(text.rstrip(" "), nl=False, err=err) + # Echo a space to stdout to work around an issue where + # readline causes backspace to clear the whole line. 
+ return f(" ") + except (KeyboardInterrupt, EOFError): + # getpass doesn't print a newline if the user aborts input with ^C. + # Allegedly this behavior is inherited from getpass(3). + # A doc bug has been filed at https://bugs.python.org/issue24711 + if hide_input: + echo(None, err=err) + raise Abort() from None + + if value_proc is None: + value_proc = convert_type(type, default) + + prompt = _build_prompt( + text, prompt_suffix, show_default, default, show_choices, type + ) + + if confirmation_prompt: + if confirmation_prompt is True: + confirmation_prompt = _("Repeat for confirmation") + + confirmation_prompt = _build_prompt(confirmation_prompt, prompt_suffix) + + while True: + while True: + value = prompt_func(prompt) + if value: + break + elif default is not None: + value = default + break + try: + result = value_proc(value) + except UsageError as e: + if hide_input: + echo(_("Error: The value you entered was invalid."), err=err) + else: + echo(_("Error: {e.message}").format(e=e), err=err) + continue + if not confirmation_prompt: + return result + while True: + value2 = prompt_func(confirmation_prompt) + is_empty = not value and not value2 + if value2 or is_empty: + break + if value == value2: + return result + echo(_("Error: The two entered values do not match."), err=err) + + +def confirm( + text: str, + default: t.Optional[bool] = False, + abort: bool = False, + prompt_suffix: str = ": ", + show_default: bool = True, + err: bool = False, +) -> bool: + """Prompts for confirmation (yes/no question). + + If the user aborts the input by sending a interrupt signal this + function will catch it and raise a :exc:`Abort` exception. + + :param text: the question to ask. + :param default: The default value to use when no input is given. If + ``None``, repeat until input is given. + :param abort: if this is set to `True` a negative answer aborts the + exception by raising :exc:`Abort`. + :param prompt_suffix: a suffix that should be added to the prompt. + :param show_default: shows or hides the default value in the prompt. + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``, the same as with echo. + + .. versionchanged:: 8.0 + Repeat until input is given if ``default`` is ``None``. + + .. versionadded:: 4.0 + Added the ``err`` parameter. + """ + prompt = _build_prompt( + text, + prompt_suffix, + show_default, + "y/n" if default is None else ("Y/n" if default else "y/N"), + ) + + while True: + try: + # Write the prompt separately so that we get nice + # coloring through colorama on Windows + echo(prompt.rstrip(" "), nl=False, err=err) + # Echo a space to stdout to work around an issue where + # readline causes backspace to clear the whole line. + value = visible_prompt_func(" ").lower().strip() + except (KeyboardInterrupt, EOFError): + raise Abort() from None + if value in ("y", "yes"): + rv = True + elif value in ("n", "no"): + rv = False + elif default is not None and value == "": + rv = default + else: + echo(_("Error: invalid input"), err=err) + continue + break + if abort and not rv: + raise Abort() + return rv + + +def echo_via_pager( + text_or_generator: t.Union[t.Iterable[str], t.Callable[[], t.Iterable[str]], str], + color: t.Optional[bool] = None, +) -> None: + """This function takes a text and shows it via an environment specific + pager on stdout. + + .. versionchanged:: 3.0 + Added the `color` flag. + + :param text_or_generator: the text to page, or alternatively, a + generator emitting the text to page. 
+ :param color: controls if the pager supports ANSI colors or not. The + default is autodetection. + """ + color = resolve_color_default(color) + + if inspect.isgeneratorfunction(text_or_generator): + i = t.cast(t.Callable[[], t.Iterable[str]], text_or_generator)() + elif isinstance(text_or_generator, str): + i = [text_or_generator] + else: + i = iter(t.cast(t.Iterable[str], text_or_generator)) + + # convert every element of i to a text type if necessary + text_generator = (el if isinstance(el, str) else str(el) for el in i) + + from ._termui_impl import pager + + return pager(itertools.chain(text_generator, "\n"), color) + + +def progressbar( + iterable: t.Optional[t.Iterable[V]] = None, + length: t.Optional[int] = None, + label: t.Optional[str] = None, + show_eta: bool = True, + show_percent: t.Optional[bool] = None, + show_pos: bool = False, + item_show_func: t.Optional[t.Callable[[t.Optional[V]], t.Optional[str]]] = None, + fill_char: str = "#", + empty_char: str = "-", + bar_template: str = "%(label)s [%(bar)s] %(info)s", + info_sep: str = " ", + width: int = 36, + file: t.Optional[t.TextIO] = None, + color: t.Optional[bool] = None, + update_min_steps: int = 1, +) -> "ProgressBar[V]": + """This function creates an iterable context manager that can be used + to iterate over something while showing a progress bar. It will + either iterate over the `iterable` or `length` items (that are counted + up). While iteration happens, this function will print a rendered + progress bar to the given `file` (defaults to stdout) and will attempt + to calculate remaining time and more. By default, this progress bar + will not be rendered if the file is not a terminal. + + The context manager creates the progress bar. When the context + manager is entered the progress bar is already created. With every + iteration over the progress bar, the iterable passed to the bar is + advanced and the bar is updated. When the context manager exits, + a newline is printed and the progress bar is finalized on screen. + + Note: The progress bar is currently designed for use cases where the + total progress can be expected to take at least several seconds. + Because of this, the ProgressBar class object won't display + progress that is considered too fast, and progress where the time + between steps is less than a second. + + No printing must happen or the progress bar will be unintentionally + destroyed. + + Example usage:: + + with progressbar(items) as bar: + for item in bar: + do_something_with(item) + + Alternatively, if no iterable is specified, one can manually update the + progress bar through the `update()` method instead of directly + iterating over the progress bar. The update method accepts the number + of steps to increment the bar with:: + + with progressbar(length=chunks.total_bytes) as bar: + for chunk in chunks: + process_chunk(chunk) + bar.update(chunks.bytes) + + The ``update()`` method also takes an optional value specifying the + ``current_item`` at the new position. This is useful when used + together with ``item_show_func`` to customize the output for each + manual step:: + + with click.progressbar( + length=total_size, + label='Unzipping archive', + item_show_func=lambda a: a.filename + ) as bar: + for archive in zip_file: + archive.extract() + bar.update(archive.size, archive) + + :param iterable: an iterable to iterate over. If not provided the length + is required. + :param length: the number of items to iterate over. 
By default the + progressbar will attempt to ask the iterator about its + length, which might or might not work. If an iterable is + also provided this parameter can be used to override the + length. If an iterable is not provided the progress bar + will iterate over a range of that length. + :param label: the label to show next to the progress bar. + :param show_eta: enables or disables the estimated time display. This is + automatically disabled if the length cannot be + determined. + :param show_percent: enables or disables the percentage display. The + default is `True` if the iterable has a length or + `False` if not. + :param show_pos: enables or disables the absolute position display. The + default is `False`. + :param item_show_func: A function called with the current item which + can return a string to show next to the progress bar. If the + function returns ``None`` nothing is shown. The current item can + be ``None``, such as when entering and exiting the bar. + :param fill_char: the character to use to show the filled part of the + progress bar. + :param empty_char: the character to use to show the non-filled part of + the progress bar. + :param bar_template: the format string to use as template for the bar. + The parameters in it are ``label`` for the label, + ``bar`` for the progress bar and ``info`` for the + info section. + :param info_sep: the separator between multiple info items (eta etc.) + :param width: the width of the progress bar in characters, 0 means full + terminal width + :param file: The file to write to. If this is not a terminal then + only the label is printed. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. This is only needed if ANSI + codes are included anywhere in the progress bar output + which is not the case by default. + :param update_min_steps: Render only when this many updates have + completed. This allows tuning for very fast iterators. + + .. versionchanged:: 8.0 + Output is shown even if execution time is less than 0.5 seconds. + + .. versionchanged:: 8.0 + ``item_show_func`` shows the current item, not the previous one. + + .. versionchanged:: 8.0 + Labels are echoed if the output is not a TTY. Reverts a change + in 7.0 that removed all output. + + .. versionadded:: 8.0 + Added the ``update_min_steps`` parameter. + + .. versionchanged:: 4.0 + Added the ``color`` parameter. Added the ``update`` method to + the object. + + .. versionadded:: 2.0 + """ + from ._termui_impl import ProgressBar + + color = resolve_color_default(color) + return ProgressBar( + iterable=iterable, + length=length, + show_eta=show_eta, + show_percent=show_percent, + show_pos=show_pos, + item_show_func=item_show_func, + fill_char=fill_char, + empty_char=empty_char, + bar_template=bar_template, + info_sep=info_sep, + file=file, + label=label, + width=width, + color=color, + update_min_steps=update_min_steps, + ) + + +def clear() -> None: + """Clears the terminal screen. This will have the effect of clearing + the whole visible space of the terminal and moving the cursor to the + top left. This does not do anything if not connected to a terminal. + + .. 
versionadded:: 2.0 + """ + if not isatty(sys.stdout): + return + + # ANSI escape \033[2J clears the screen, \033[1;1H moves the cursor + echo("\033[2J\033[1;1H", nl=False) + + +def _interpret_color( + color: t.Union[int, t.Tuple[int, int, int], str], offset: int = 0 +) -> str: + if isinstance(color, int): + return f"{38 + offset};5;{color:d}" + + if isinstance(color, (tuple, list)): + r, g, b = color + return f"{38 + offset};2;{r:d};{g:d};{b:d}" + + return str(_ansi_colors[color] + offset) + + +def style( + text: t.Any, + fg: t.Optional[t.Union[int, t.Tuple[int, int, int], str]] = None, + bg: t.Optional[t.Union[int, t.Tuple[int, int, int], str]] = None, + bold: t.Optional[bool] = None, + dim: t.Optional[bool] = None, + underline: t.Optional[bool] = None, + overline: t.Optional[bool] = None, + italic: t.Optional[bool] = None, + blink: t.Optional[bool] = None, + reverse: t.Optional[bool] = None, + strikethrough: t.Optional[bool] = None, + reset: bool = True, +) -> str: + """Styles a text with ANSI styles and returns the new string. By + default the styling is self contained which means that at the end + of the string a reset code is issued. This can be prevented by + passing ``reset=False``. + + Examples:: + + click.echo(click.style('Hello World!', fg='green')) + click.echo(click.style('ATTENTION!', blink=True)) + click.echo(click.style('Some things', reverse=True, fg='cyan')) + click.echo(click.style('More colors', fg=(255, 12, 128), bg=117)) + + Supported color names: + + * ``black`` (might be a gray) + * ``red`` + * ``green`` + * ``yellow`` (might be an orange) + * ``blue`` + * ``magenta`` + * ``cyan`` + * ``white`` (might be light gray) + * ``bright_black`` + * ``bright_red`` + * ``bright_green`` + * ``bright_yellow`` + * ``bright_blue`` + * ``bright_magenta`` + * ``bright_cyan`` + * ``bright_white`` + * ``reset`` (reset the color code only) + + If the terminal supports it, color may also be specified as: + + - An integer in the interval [0, 255]. The terminal must support + 8-bit/256-color mode. + - An RGB tuple of three integers in [0, 255]. The terminal must + support 24-bit/true-color mode. + + See https://en.wikipedia.org/wiki/ANSI_color and + https://gist.github.com/XVilka/8346728 for more information. + + :param text: the string to style with ansi codes. + :param fg: if provided this will become the foreground color. + :param bg: if provided this will become the background color. + :param bold: if provided this will enable or disable bold mode. + :param dim: if provided this will enable or disable dim mode. This is + badly supported. + :param underline: if provided this will enable or disable underline. + :param overline: if provided this will enable or disable overline. + :param italic: if provided this will enable or disable italic. + :param blink: if provided this will enable or disable blinking. + :param reverse: if provided this will enable or disable inverse + rendering (foreground becomes background and the + other way round). + :param strikethrough: if provided this will enable or disable + striking through text. + :param reset: by default a reset-all code is added at the end of the + string which means that styles do not carry over. This + can be disabled to compose styles. + + .. versionchanged:: 8.0 + A non-string ``message`` is converted to a string. + + .. versionchanged:: 8.0 + Added support for 256 and RGB color codes. + + .. versionchanged:: 8.0 + Added the ``strikethrough``, ``italic``, and ``overline`` + parameters. + + .. 
versionchanged:: 7.0 + Added support for bright colors. + + .. versionadded:: 2.0 + """ + if not isinstance(text, str): + text = str(text) + + bits = [] + + if fg: + try: + bits.append(f"\033[{_interpret_color(fg)}m") + except KeyError: + raise TypeError(f"Unknown color {fg!r}") from None + + if bg: + try: + bits.append(f"\033[{_interpret_color(bg, 10)}m") + except KeyError: + raise TypeError(f"Unknown color {bg!r}") from None + + if bold is not None: + bits.append(f"\033[{1 if bold else 22}m") + if dim is not None: + bits.append(f"\033[{2 if dim else 22}m") + if underline is not None: + bits.append(f"\033[{4 if underline else 24}m") + if overline is not None: + bits.append(f"\033[{53 if overline else 55}m") + if italic is not None: + bits.append(f"\033[{3 if italic else 23}m") + if blink is not None: + bits.append(f"\033[{5 if blink else 25}m") + if reverse is not None: + bits.append(f"\033[{7 if reverse else 27}m") + if strikethrough is not None: + bits.append(f"\033[{9 if strikethrough else 29}m") + bits.append(text) + if reset: + bits.append(_ansi_reset_all) + return "".join(bits) + + +def unstyle(text: str) -> str: + """Removes ANSI styling information from a string. Usually it's not + necessary to use this function as Click's echo function will + automatically remove styling if necessary. + + .. versionadded:: 2.0 + + :param text: the text to remove style information from. + """ + return strip_ansi(text) + + +def secho( + message: t.Optional[t.Any] = None, + file: t.Optional[t.IO[t.AnyStr]] = None, + nl: bool = True, + err: bool = False, + color: t.Optional[bool] = None, + **styles: t.Any, +) -> None: + """This function combines :func:`echo` and :func:`style` into one + call. As such the following two calls are the same:: + + click.secho('Hello World!', fg='green') + click.echo(click.style('Hello World!', fg='green')) + + All keyword arguments are forwarded to the underlying functions + depending on which one they go with. + + Non-string types will be converted to :class:`str`. However, + :class:`bytes` are passed directly to :meth:`echo` without applying + style. If you want to style bytes that represent text, call + :meth:`bytes.decode` first. + + .. versionchanged:: 8.0 + A non-string ``message`` is converted to a string. Bytes are + passed through without style applied. + + .. versionadded:: 2.0 + """ + if message is not None and not isinstance(message, (bytes, bytearray)): + message = style(message, **styles) + + return echo(message, file=file, nl=nl, err=err, color=color) + + +def edit( + text: t.Optional[t.AnyStr] = None, + editor: t.Optional[str] = None, + env: t.Optional[t.Mapping[str, str]] = None, + require_save: bool = True, + extension: str = ".txt", + filename: t.Optional[str] = None, +) -> t.Optional[t.AnyStr]: + r"""Edits the given text in the defined editor. If an editor is given + (should be the full path to the executable but the regular operating + system search path is used for finding the executable) it overrides + the detected editor. Optionally, some environment variables can be + used. If the editor is closed without changes, `None` is returned. In + case a file is edited directly the return value is always `None` and + `require_save` and `extension` are ignored. + + If the editor cannot be opened a :exc:`UsageError` is raised. + + Note for Windows: to simplify cross-platform usage, the newlines are + automatically converted from POSIX to Windows and vice versa. As such, + the message here will have ``\n`` as newline markers. 
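+
+    A usage sketch (the text passed to ``edit`` is made up)::
+
+        new_text = edit("draft message")
+        if new_text is not None:
+            print(new_text)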
+
+    :param text: the text to edit.
+    :param editor: optionally the editor to use. Defaults to automatic
+        detection.
+    :param env: environment variables to forward to the editor.
+    :param require_save: if this is true, then not saving in the editor
+        will make the return value become `None`.
+    :param extension: the extension to tell the editor about. This defaults
+        to `.txt` but changing this might change syntax highlighting.
+    :param filename: if provided it will edit this file instead of the
+        provided text contents. It will not use a temporary file as an
+        indirection in that case.
+    """
+    from ._termui_impl import Editor
+
+    ed = Editor(editor=editor, env=env, require_save=require_save, extension=extension)
+
+    if filename is None:
+        return ed.edit(text)
+
+    ed.edit_file(filename)
+    return None
+
+
+def launch(url: str, wait: bool = False, locate: bool = False) -> int:
+    """This function launches the given URL (or filename) in the default
+    viewer application for this file type. If this is an executable, it
+    might launch the executable in a new session. The return value is
+    the exit code of the launched application. Usually, ``0`` indicates
+    success.
+
+    Examples::
+
+        click.launch('https://click.palletsprojects.com/')
+        click.launch('/my/downloaded/file', locate=True)
+
+    .. versionadded:: 2.0
+
+    :param url: URL or filename of the thing to launch.
+    :param wait: Wait for the program to exit before returning. This
+        only works if the launched program blocks. In particular,
+        ``xdg-open`` on Linux does not block.
+    :param locate: if this is set to `True` then instead of launching the
+        application associated with the URL it will attempt to launch a
+        file manager with the file located. This might have weird effects
+        if the URL does not point to the filesystem.
+    """
+    from ._termui_impl import open_url
+
+    return open_url(url, wait=wait, locate=locate)
+
+
+# If this is provided, getchar() calls into this instead. This is used
+# for unittesting purposes.
+_getchar: t.Optional[t.Callable[[bool], str]] = None
+
+
+def getchar(echo: bool = False) -> str:
+    """Fetches a single character from the terminal and returns it. This
+    will always return a unicode character and under certain rare
+    circumstances this might return more than one character. More than
+    one character is returned when, for whatever reason, multiple
+    characters end up in the terminal buffer or standard input was not
+    actually a terminal.
+
+    Note that this will always read from the terminal, even if something
+    is piped into the standard input.
+
+    Note for Windows: in rare cases when typing non-ASCII characters, this
+    function might wait for a second character and then return both at once.
+    This is because certain Unicode characters look like special-key markers.
+
+    .. versionadded:: 2.0
+
+    :param echo: if set to `True`, the character read will also show up on
+        the terminal. The default is to not show it.
+    """
+    global _getchar
+
+    if _getchar is None:
+        from ._termui_impl import getchar as f
+
+        _getchar = f
+
+    return _getchar(echo)
+
+
+def raw_terminal() -> t.ContextManager[int]:
+    from ._termui_impl import raw_terminal as f
+
+    return f()
+
+
+def pause(info: t.Optional[str] = None, err: bool = False) -> None:
+    """This command stops execution and waits for the user to press any
+    key to continue. This is similar to the Windows batch "pause"
+    command. If the program is not run through a terminal, this command
+    will instead do nothing.
+
+    .. versionadded:: 2.0
+
versionadded:: 4.0 + Added the `err` parameter. + + :param info: The message to print before pausing. Defaults to + ``"Press any key to continue..."``. + :param err: if set to message goes to ``stderr`` instead of + ``stdout``, the same as with echo. + """ + if not isatty(sys.stdin) or not isatty(sys.stdout): + return + + if info is None: + info = _("Press any key to continue...") + + try: + if info: + echo(info, nl=False, err=err) + try: + getchar() + except (KeyboardInterrupt, EOFError): + pass + finally: + if info: + echo(err=err) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/testing.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/testing.py new file mode 100644 index 000000000..772b2159c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/testing.py @@ -0,0 +1,483 @@ +import contextlib +import io +import os +import shlex +import shutil +import sys +import tempfile +import typing as t +from types import TracebackType + +from . import _compat +from . import formatting +from . import termui +from . import utils +from ._compat import _find_binary_reader + +if t.TYPE_CHECKING: + from .core import BaseCommand + + +class EchoingStdin: + def __init__(self, input: t.BinaryIO, output: t.BinaryIO) -> None: + self._input = input + self._output = output + self._paused = False + + def __getattr__(self, x: str) -> t.Any: + return getattr(self._input, x) + + def _echo(self, rv: bytes) -> bytes: + if not self._paused: + self._output.write(rv) + + return rv + + def read(self, n: int = -1) -> bytes: + return self._echo(self._input.read(n)) + + def read1(self, n: int = -1) -> bytes: + return self._echo(self._input.read1(n)) # type: ignore + + def readline(self, n: int = -1) -> bytes: + return self._echo(self._input.readline(n)) + + def readlines(self) -> t.List[bytes]: + return [self._echo(x) for x in self._input.readlines()] + + def __iter__(self) -> t.Iterator[bytes]: + return iter(self._echo(x) for x in self._input) + + def __repr__(self) -> str: + return repr(self._input) + + +@contextlib.contextmanager +def _pause_echo(stream: t.Optional[EchoingStdin]) -> t.Iterator[None]: + if stream is None: + yield + else: + stream._paused = True + yield + stream._paused = False + + +class _NamedTextIOWrapper(io.TextIOWrapper): + def __init__( + self, buffer: t.BinaryIO, name: str, mode: str, **kwargs: t.Any + ) -> None: + super().__init__(buffer, **kwargs) + self._name = name + self._mode = mode + + @property + def name(self) -> str: + return self._name + + @property + def mode(self) -> str: + return self._mode + + +def make_input_stream( + input: t.Optional[t.Union[str, bytes, t.IO[t.Any]]], charset: str +) -> t.BinaryIO: + # Is already an input stream. + if hasattr(input, "read"): + rv = _find_binary_reader(t.cast(t.IO[t.Any], input)) + + if rv is not None: + return rv + + raise TypeError("Could not find binary reader for input stream.") + + if input is None: + input = b"" + elif isinstance(input, str): + input = input.encode(charset) + + return io.BytesIO(input) + + +class Result: + """Holds the captured result of an invoked CLI script.""" + + def __init__( + self, + runner: "CliRunner", + stdout_bytes: bytes, + stderr_bytes: t.Optional[bytes], + return_value: t.Any, + exit_code: int, + exception: t.Optional[BaseException], + exc_info: t.Optional[ + t.Tuple[t.Type[BaseException], BaseException, TracebackType] + ] = None, + ): + #: The runner that created the result + self.runner = runner + #: The standard output as bytes. 
+ self.stdout_bytes = stdout_bytes + #: The standard error as bytes, or None if not available + self.stderr_bytes = stderr_bytes + #: The value returned from the invoked command. + #: + #: .. versionadded:: 8.0 + self.return_value = return_value + #: The exit code as integer. + self.exit_code = exit_code + #: The exception that happened if one did. + self.exception = exception + #: The traceback + self.exc_info = exc_info + + @property + def output(self) -> str: + """The (standard) output as unicode string.""" + return self.stdout + + @property + def stdout(self) -> str: + """The standard output as unicode string.""" + return self.stdout_bytes.decode(self.runner.charset, "replace").replace( + "\r\n", "\n" + ) + + @property + def stderr(self) -> str: + """The standard error as unicode string.""" + if self.stderr_bytes is None: + raise ValueError("stderr not separately captured") + return self.stderr_bytes.decode(self.runner.charset, "replace").replace( + "\r\n", "\n" + ) + + def __repr__(self) -> str: + exc_str = repr(self.exception) if self.exception else "okay" + return f"<{type(self).__name__} {exc_str}>" + + +class CliRunner: + """The CLI runner provides functionality to invoke a Click command line + script for unittesting purposes in a isolated environment. This only + works in single-threaded systems without any concurrency as it changes the + global interpreter state. + + :param charset: the character set for the input and output data. + :param env: a dictionary with environment variables for overriding. + :param echo_stdin: if this is set to `True`, then reading from stdin writes + to stdout. This is useful for showing examples in + some circumstances. Note that regular prompts + will automatically echo the input. + :param mix_stderr: if this is set to `False`, then stdout and stderr are + preserved as independent streams. This is useful for + Unix-philosophy apps that have predictable stdout and + noisy stderr, such that each may be measured + independently + """ + + def __init__( + self, + charset: str = "utf-8", + env: t.Optional[t.Mapping[str, t.Optional[str]]] = None, + echo_stdin: bool = False, + mix_stderr: bool = True, + ) -> None: + self.charset = charset + self.env: t.Mapping[str, t.Optional[str]] = env or {} + self.echo_stdin = echo_stdin + self.mix_stderr = mix_stderr + + def get_default_prog_name(self, cli: "BaseCommand") -> str: + """Given a command object it will return the default program name + for it. The default is the `name` attribute or ``"root"`` if not + set. + """ + return cli.name or "root" + + def make_env( + self, overrides: t.Optional[t.Mapping[str, t.Optional[str]]] = None + ) -> t.Mapping[str, t.Optional[str]]: + """Returns the environment overrides for invoking a script.""" + rv = dict(self.env) + if overrides: + rv.update(overrides) + return rv + + @contextlib.contextmanager + def isolation( + self, + input: t.Optional[t.Union[str, bytes, t.IO[t.Any]]] = None, + env: t.Optional[t.Mapping[str, t.Optional[str]]] = None, + color: bool = False, + ) -> t.Iterator[t.Tuple[io.BytesIO, t.Optional[io.BytesIO]]]: + """A context manager that sets up the isolation for invoking of a + command line tool. This sets up stdin with the given input data + and `os.environ` with the overrides from the given dictionary. + This also rebinds some internals in Click to be mocked (like the + prompt functionality). + + This is automatically done in the :meth:`invoke` method. + + :param input: the input stream to put into sys.stdin. 
+ :param env: the environment overrides as dictionary. + :param color: whether the output should contain color codes. The + application can still override this explicitly. + + .. versionchanged:: 8.0 + ``stderr`` is opened with ``errors="backslashreplace"`` + instead of the default ``"strict"``. + + .. versionchanged:: 4.0 + Added the ``color`` parameter. + """ + bytes_input = make_input_stream(input, self.charset) + echo_input = None + + old_stdin = sys.stdin + old_stdout = sys.stdout + old_stderr = sys.stderr + old_forced_width = formatting.FORCED_WIDTH + formatting.FORCED_WIDTH = 80 + + env = self.make_env(env) + + bytes_output = io.BytesIO() + + if self.echo_stdin: + bytes_input = echo_input = t.cast( + t.BinaryIO, EchoingStdin(bytes_input, bytes_output) + ) + + sys.stdin = text_input = _NamedTextIOWrapper( + bytes_input, encoding=self.charset, name="", mode="r" + ) + + if self.echo_stdin: + # Force unbuffered reads, otherwise TextIOWrapper reads a + # large chunk which is echoed early. + text_input._CHUNK_SIZE = 1 # type: ignore + + sys.stdout = _NamedTextIOWrapper( + bytes_output, encoding=self.charset, name="", mode="w" + ) + + bytes_error = None + if self.mix_stderr: + sys.stderr = sys.stdout + else: + bytes_error = io.BytesIO() + sys.stderr = _NamedTextIOWrapper( + bytes_error, + encoding=self.charset, + name="", + mode="w", + errors="backslashreplace", + ) + + @_pause_echo(echo_input) # type: ignore + def visible_input(prompt: t.Optional[str] = None) -> str: + sys.stdout.write(prompt or "") + val = text_input.readline().rstrip("\r\n") + sys.stdout.write(f"{val}\n") + sys.stdout.flush() + return val + + @_pause_echo(echo_input) # type: ignore + def hidden_input(prompt: t.Optional[str] = None) -> str: + sys.stdout.write(f"{prompt or ''}\n") + sys.stdout.flush() + return text_input.readline().rstrip("\r\n") + + @_pause_echo(echo_input) # type: ignore + def _getchar(echo: bool) -> str: + char = sys.stdin.read(1) + + if echo: + sys.stdout.write(char) + + sys.stdout.flush() + return char + + default_color = color + + def should_strip_ansi( + stream: t.Optional[t.IO[t.Any]] = None, color: t.Optional[bool] = None + ) -> bool: + if color is None: + return not default_color + return not color + + old_visible_prompt_func = termui.visible_prompt_func + old_hidden_prompt_func = termui.hidden_prompt_func + old__getchar_func = termui._getchar + old_should_strip_ansi = utils.should_strip_ansi # type: ignore + old__compat_should_strip_ansi = _compat.should_strip_ansi + termui.visible_prompt_func = visible_input + termui.hidden_prompt_func = hidden_input + termui._getchar = _getchar + utils.should_strip_ansi = should_strip_ansi # type: ignore + _compat.should_strip_ansi = should_strip_ansi + + old_env = {} + try: + for key, value in env.items(): + old_env[key] = os.environ.get(key) + if value is None: + try: + del os.environ[key] + except Exception: + pass + else: + os.environ[key] = value + yield (bytes_output, bytes_error) + finally: + for key, value in old_env.items(): + if value is None: + try: + del os.environ[key] + except Exception: + pass + else: + os.environ[key] = value + sys.stdout = old_stdout + sys.stderr = old_stderr + sys.stdin = old_stdin + termui.visible_prompt_func = old_visible_prompt_func + termui.hidden_prompt_func = old_hidden_prompt_func + termui._getchar = old__getchar_func + utils.should_strip_ansi = old_should_strip_ansi # type: ignore + _compat.should_strip_ansi = old__compat_should_strip_ansi + formatting.FORCED_WIDTH = old_forced_width + + def invoke( + self, + cli: 
"BaseCommand", + args: t.Optional[t.Union[str, t.Sequence[str]]] = None, + input: t.Optional[t.Union[str, bytes, t.IO[t.Any]]] = None, + env: t.Optional[t.Mapping[str, t.Optional[str]]] = None, + catch_exceptions: bool = True, + color: bool = False, + **extra: t.Any, + ) -> Result: + """Invokes a command in an isolated environment. The arguments are + forwarded directly to the command line script, the `extra` keyword + arguments are passed to the :meth:`~clickpkg.Command.main` function of + the command. + + This returns a :class:`Result` object. + + :param cli: the command to invoke + :param args: the arguments to invoke. It may be given as an iterable + or a string. When given as string it will be interpreted + as a Unix shell command. More details at + :func:`shlex.split`. + :param input: the input data for `sys.stdin`. + :param env: the environment overrides. + :param catch_exceptions: Whether to catch any other exceptions than + ``SystemExit``. + :param extra: the keyword arguments to pass to :meth:`main`. + :param color: whether the output should contain color codes. The + application can still override this explicitly. + + .. versionchanged:: 8.0 + The result object has the ``return_value`` attribute with + the value returned from the invoked command. + + .. versionchanged:: 4.0 + Added the ``color`` parameter. + + .. versionchanged:: 3.0 + Added the ``catch_exceptions`` parameter. + + .. versionchanged:: 3.0 + The result object has the ``exc_info`` attribute with the + traceback if available. + """ + exc_info = None + with self.isolation(input=input, env=env, color=color) as outstreams: + return_value = None + exception: t.Optional[BaseException] = None + exit_code = 0 + + if isinstance(args, str): + args = shlex.split(args) + + try: + prog_name = extra.pop("prog_name") + except KeyError: + prog_name = self.get_default_prog_name(cli) + + try: + return_value = cli.main(args=args or (), prog_name=prog_name, **extra) + except SystemExit as e: + exc_info = sys.exc_info() + e_code = t.cast(t.Optional[t.Union[int, t.Any]], e.code) + + if e_code is None: + e_code = 0 + + if e_code != 0: + exception = e + + if not isinstance(e_code, int): + sys.stdout.write(str(e_code)) + sys.stdout.write("\n") + e_code = 1 + + exit_code = e_code + + except Exception as e: + if not catch_exceptions: + raise + exception = e + exit_code = 1 + exc_info = sys.exc_info() + finally: + sys.stdout.flush() + stdout = outstreams[0].getvalue() + if self.mix_stderr: + stderr = None + else: + stderr = outstreams[1].getvalue() # type: ignore + + return Result( + runner=self, + stdout_bytes=stdout, + stderr_bytes=stderr, + return_value=return_value, + exit_code=exit_code, + exception=exception, + exc_info=exc_info, # type: ignore + ) + + @contextlib.contextmanager + def isolated_filesystem( + self, temp_dir: t.Optional[t.Union[str, "os.PathLike[str]"]] = None + ) -> t.Iterator[str]: + """A context manager that creates a temporary directory and + changes the current working directory to it. This isolates tests + that affect the contents of the CWD to prevent them from + interfering with each other. + + :param temp_dir: Create the temporary directory under this + directory. If given, the created directory is not removed + when exiting. + + .. versionchanged:: 8.0 + Added the ``temp_dir`` parameter. 
+ """ + cwd = os.getcwd() + dt = tempfile.mkdtemp(dir=temp_dir) + os.chdir(dt) + + try: + yield dt + finally: + os.chdir(cwd) + + if temp_dir is None: + try: + shutil.rmtree(dt) + except OSError: + pass diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/types.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/types.py new file mode 100644 index 000000000..a70fd58ce --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/types.py @@ -0,0 +1,1093 @@ +import os +import stat +import sys +import typing as t +from datetime import datetime +from gettext import gettext as _ +from gettext import ngettext + +from ._compat import _get_argv_encoding +from ._compat import open_stream +from .exceptions import BadParameter +from .utils import format_filename +from .utils import LazyFile +from .utils import safecall + +if t.TYPE_CHECKING: + import typing_extensions as te + + from .core import Context + from .core import Parameter + from .shell_completion import CompletionItem + + +class ParamType: + """Represents the type of a parameter. Validates and converts values + from the command line or Python into the correct type. + + To implement a custom type, subclass and implement at least the + following: + + - The :attr:`name` class attribute must be set. + - Calling an instance of the type with ``None`` must return + ``None``. This is already implemented by default. + - :meth:`convert` must convert string values to the correct type. + - :meth:`convert` must accept values that are already the correct + type. + - It must be able to convert a value if the ``ctx`` and ``param`` + arguments are ``None``. This can occur when converting prompt + input. + """ + + is_composite: t.ClassVar[bool] = False + arity: t.ClassVar[int] = 1 + + #: the descriptive name of this type + name: str + + #: if a list of this type is expected and the value is pulled from a + #: string environment variable, this is what splits it up. `None` + #: means any whitespace. For all parameters the general rule is that + #: whitespace splits them up. The exception are paths and files which + #: are split by ``os.path.pathsep`` by default (":" on Unix and ";" on + #: Windows). + envvar_list_splitter: t.ClassVar[t.Optional[str]] = None + + def to_info_dict(self) -> t.Dict[str, t.Any]: + """Gather information that could be useful for a tool generating + user-facing documentation. + + Use :meth:`click.Context.to_info_dict` to traverse the entire + CLI structure. + + .. versionadded:: 8.0 + """ + # The class name without the "ParamType" suffix. + param_type = type(self).__name__.partition("ParamType")[0] + param_type = param_type.partition("ParameterType")[0] + + # Custom subclasses might not remember to set a name. + if hasattr(self, "name"): + name = self.name + else: + name = param_type + + return {"param_type": param_type, "name": name} + + def __call__( + self, + value: t.Any, + param: t.Optional["Parameter"] = None, + ctx: t.Optional["Context"] = None, + ) -> t.Any: + if value is not None: + return self.convert(value, param, ctx) + + def get_metavar(self, param: "Parameter") -> t.Optional[str]: + """Returns the metavar default for this param if it provides one.""" + + def get_missing_message(self, param: "Parameter") -> t.Optional[str]: + """Optionally might return extra information about a missing + parameter. + + .. 
versionadded:: 2.0 + """ + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + """Convert the value to the correct type. This is not called if + the value is ``None`` (the missing value). + + This must accept string values from the command line, as well as + values that are already the correct type. It may also convert + other compatible types. + + The ``param`` and ``ctx`` arguments may be ``None`` in certain + situations, such as when converting prompt input. + + If the value cannot be converted, call :meth:`fail` with a + descriptive message. + + :param value: The value to convert. + :param param: The parameter that is using this type to convert + its value. May be ``None``. + :param ctx: The current context that arrived at this value. May + be ``None``. + """ + return value + + def split_envvar_value(self, rv: str) -> t.Sequence[str]: + """Given a value from an environment variable this splits it up + into small chunks depending on the defined envvar list splitter. + + If the splitter is set to `None`, which means that whitespace splits, + then leading and trailing whitespace is ignored. Otherwise, leading + and trailing splitters usually lead to empty items being included. + """ + return (rv or "").split(self.envvar_list_splitter) + + def fail( + self, + message: str, + param: t.Optional["Parameter"] = None, + ctx: t.Optional["Context"] = None, + ) -> "t.NoReturn": + """Helper method to fail with an invalid value message.""" + raise BadParameter(message, ctx=ctx, param=param) + + def shell_complete( + self, ctx: "Context", param: "Parameter", incomplete: str + ) -> t.List["CompletionItem"]: + """Return a list of + :class:`~click.shell_completion.CompletionItem` objects for the + incomplete value. Most types do not provide completions, but + some do, and this allows custom types to provide custom + completions as well. + + :param ctx: Invocation context for this command. + :param param: The parameter that is requesting completion. + :param incomplete: Value being completed. May be empty. + + .. 
versionadded:: 8.0 + """ + return [] + + +class CompositeParamType(ParamType): + is_composite = True + + @property + def arity(self) -> int: # type: ignore + raise NotImplementedError() + + +class FuncParamType(ParamType): + def __init__(self, func: t.Callable[[t.Any], t.Any]) -> None: + self.name: str = func.__name__ + self.func = func + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict["func"] = self.func + return info_dict + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + try: + return self.func(value) + except ValueError: + try: + value = str(value) + except UnicodeError: + value = value.decode("utf-8", "replace") + + self.fail(value, param, ctx) + + +class UnprocessedParamType(ParamType): + name = "text" + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + return value + + def __repr__(self) -> str: + return "UNPROCESSED" + + +class StringParamType(ParamType): + name = "text" + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + if isinstance(value, bytes): + enc = _get_argv_encoding() + try: + value = value.decode(enc) + except UnicodeError: + fs_enc = sys.getfilesystemencoding() + if fs_enc != enc: + try: + value = value.decode(fs_enc) + except UnicodeError: + value = value.decode("utf-8", "replace") + else: + value = value.decode("utf-8", "replace") + return value + return str(value) + + def __repr__(self) -> str: + return "STRING" + + +class Choice(ParamType): + """The choice type allows a value to be checked against a fixed set + of supported values. All of these values have to be strings. + + You should only pass a list or tuple of choices. Other iterables + (like generators) may lead to surprising results. + + The resulting value will always be one of the originally passed choices + regardless of ``case_sensitive`` or any ``ctx.token_normalize_func`` + being specified. + + See :ref:`choice-opts` for an example. + + :param case_sensitive: Set to false to make choices case + insensitive. Defaults to true. + """ + + name = "choice" + + def __init__(self, choices: t.Sequence[str], case_sensitive: bool = True) -> None: + self.choices = choices + self.case_sensitive = case_sensitive + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict["choices"] = self.choices + info_dict["case_sensitive"] = self.case_sensitive + return info_dict + + def get_metavar(self, param: "Parameter") -> str: + choices_str = "|".join(self.choices) + + # Use curly braces to indicate a required argument. + if param.required and param.param_type_name == "argument": + return f"{{{choices_str}}}" + + # Use square braces to indicate an option or optional argument. 
+ return f"[{choices_str}]" + + def get_missing_message(self, param: "Parameter") -> str: + return _("Choose from:\n\t{choices}").format(choices=",\n\t".join(self.choices)) + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + # Match through normalization and case sensitivity + # first do token_normalize_func, then lowercase + # preserve original `value` to produce an accurate message in + # `self.fail` + normed_value = value + normed_choices = {choice: choice for choice in self.choices} + + if ctx is not None and ctx.token_normalize_func is not None: + normed_value = ctx.token_normalize_func(value) + normed_choices = { + ctx.token_normalize_func(normed_choice): original + for normed_choice, original in normed_choices.items() + } + + if not self.case_sensitive: + normed_value = normed_value.casefold() + normed_choices = { + normed_choice.casefold(): original + for normed_choice, original in normed_choices.items() + } + + if normed_value in normed_choices: + return normed_choices[normed_value] + + choices_str = ", ".join(map(repr, self.choices)) + self.fail( + ngettext( + "{value!r} is not {choice}.", + "{value!r} is not one of {choices}.", + len(self.choices), + ).format(value=value, choice=choices_str, choices=choices_str), + param, + ctx, + ) + + def __repr__(self) -> str: + return f"Choice({list(self.choices)})" + + def shell_complete( + self, ctx: "Context", param: "Parameter", incomplete: str + ) -> t.List["CompletionItem"]: + """Complete choices that start with the incomplete value. + + :param ctx: Invocation context for this command. + :param param: The parameter that is requesting completion. + :param incomplete: Value being completed. May be empty. + + .. versionadded:: 8.0 + """ + from click.shell_completion import CompletionItem + + str_choices = map(str, self.choices) + + if self.case_sensitive: + matched = (c for c in str_choices if c.startswith(incomplete)) + else: + incomplete = incomplete.lower() + matched = (c for c in str_choices if c.lower().startswith(incomplete)) + + return [CompletionItem(c) for c in matched] + + +class DateTime(ParamType): + """The DateTime type converts date strings into `datetime` objects. + + The format strings which are checked are configurable, but default to some + common (non-timezone aware) ISO 8601 formats. + + When specifying *DateTime* formats, you should only pass a list or a tuple. + Other iterables, like generators, may lead to surprising results. + + The format strings are processed using ``datetime.strptime``, and this + consequently defines the format strings which are allowed. + + Parsing is tried using each format, in order, and the first format which + parses successfully is used. + + :param formats: A list or tuple of date format strings, in the order in + which they should be tried. Defaults to + ``'%Y-%m-%d'``, ``'%Y-%m-%dT%H:%M:%S'``, + ``'%Y-%m-%d %H:%M:%S'``. 
+ """ + + name = "datetime" + + def __init__(self, formats: t.Optional[t.Sequence[str]] = None): + self.formats: t.Sequence[str] = formats or [ + "%Y-%m-%d", + "%Y-%m-%dT%H:%M:%S", + "%Y-%m-%d %H:%M:%S", + ] + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict["formats"] = self.formats + return info_dict + + def get_metavar(self, param: "Parameter") -> str: + return f"[{'|'.join(self.formats)}]" + + def _try_to_convert_date(self, value: t.Any, format: str) -> t.Optional[datetime]: + try: + return datetime.strptime(value, format) + except ValueError: + return None + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + if isinstance(value, datetime): + return value + + for format in self.formats: + converted = self._try_to_convert_date(value, format) + + if converted is not None: + return converted + + formats_str = ", ".join(map(repr, self.formats)) + self.fail( + ngettext( + "{value!r} does not match the format {format}.", + "{value!r} does not match the formats {formats}.", + len(self.formats), + ).format(value=value, format=formats_str, formats=formats_str), + param, + ctx, + ) + + def __repr__(self) -> str: + return "DateTime" + + +class _NumberParamTypeBase(ParamType): + _number_class: t.ClassVar[t.Type[t.Any]] + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + try: + return self._number_class(value) + except ValueError: + self.fail( + _("{value!r} is not a valid {number_type}.").format( + value=value, number_type=self.name + ), + param, + ctx, + ) + + +class _NumberRangeBase(_NumberParamTypeBase): + def __init__( + self, + min: t.Optional[float] = None, + max: t.Optional[float] = None, + min_open: bool = False, + max_open: bool = False, + clamp: bool = False, + ) -> None: + self.min = min + self.max = max + self.min_open = min_open + self.max_open = max_open + self.clamp = clamp + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict.update( + min=self.min, + max=self.max, + min_open=self.min_open, + max_open=self.max_open, + clamp=self.clamp, + ) + return info_dict + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + import operator + + rv = super().convert(value, param, ctx) + lt_min: bool = self.min is not None and ( + operator.le if self.min_open else operator.lt + )(rv, self.min) + gt_max: bool = self.max is not None and ( + operator.ge if self.max_open else operator.gt + )(rv, self.max) + + if self.clamp: + if lt_min: + return self._clamp(self.min, 1, self.min_open) # type: ignore + + if gt_max: + return self._clamp(self.max, -1, self.max_open) # type: ignore + + if lt_min or gt_max: + self.fail( + _("{value} is not in the range {range}.").format( + value=rv, range=self._describe_range() + ), + param, + ctx, + ) + + return rv + + def _clamp(self, bound: float, dir: "te.Literal[1, -1]", open: bool) -> float: + """Find the valid value to clamp to bound in the given + direction. + + :param bound: The boundary value. + :param dir: 1 or -1 indicating the direction to move. + :param open: If true, the range does not include the bound. 
+ """ + raise NotImplementedError + + def _describe_range(self) -> str: + """Describe the range for use in help text.""" + if self.min is None: + op = "<" if self.max_open else "<=" + return f"x{op}{self.max}" + + if self.max is None: + op = ">" if self.min_open else ">=" + return f"x{op}{self.min}" + + lop = "<" if self.min_open else "<=" + rop = "<" if self.max_open else "<=" + return f"{self.min}{lop}x{rop}{self.max}" + + def __repr__(self) -> str: + clamp = " clamped" if self.clamp else "" + return f"<{type(self).__name__} {self._describe_range()}{clamp}>" + + +class IntParamType(_NumberParamTypeBase): + name = "integer" + _number_class = int + + def __repr__(self) -> str: + return "INT" + + +class IntRange(_NumberRangeBase, IntParamType): + """Restrict an :data:`click.INT` value to a range of accepted + values. See :ref:`ranges`. + + If ``min`` or ``max`` are not passed, any value is accepted in that + direction. If ``min_open`` or ``max_open`` are enabled, the + corresponding boundary is not included in the range. + + If ``clamp`` is enabled, a value outside the range is clamped to the + boundary instead of failing. + + .. versionchanged:: 8.0 + Added the ``min_open`` and ``max_open`` parameters. + """ + + name = "integer range" + + def _clamp( # type: ignore + self, bound: int, dir: "te.Literal[1, -1]", open: bool + ) -> int: + if not open: + return bound + + return bound + dir + + +class FloatParamType(_NumberParamTypeBase): + name = "float" + _number_class = float + + def __repr__(self) -> str: + return "FLOAT" + + +class FloatRange(_NumberRangeBase, FloatParamType): + """Restrict a :data:`click.FLOAT` value to a range of accepted + values. See :ref:`ranges`. + + If ``min`` or ``max`` are not passed, any value is accepted in that + direction. If ``min_open`` or ``max_open`` are enabled, the + corresponding boundary is not included in the range. + + If ``clamp`` is enabled, a value outside the range is clamped to the + boundary instead of failing. This is not supported if either + boundary is marked ``open``. + + .. versionchanged:: 8.0 + Added the ``min_open`` and ``max_open`` parameters. + """ + + name = "float range" + + def __init__( + self, + min: t.Optional[float] = None, + max: t.Optional[float] = None, + min_open: bool = False, + max_open: bool = False, + clamp: bool = False, + ) -> None: + super().__init__( + min=min, max=max, min_open=min_open, max_open=max_open, clamp=clamp + ) + + if (min_open or max_open) and clamp: + raise TypeError("Clamping is not supported for open bounds.") + + def _clamp(self, bound: float, dir: "te.Literal[1, -1]", open: bool) -> float: + if not open: + return bound + + # Could use Python 3.9's math.nextafter here, but clamping an + # open float range doesn't seem to be particularly useful. It's + # left up to the user to write a callback to do it if needed. 
+ raise RuntimeError("Clamping is not supported for open bounds.") + + +class BoolParamType(ParamType): + name = "boolean" + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + if value in {False, True}: + return bool(value) + + norm = value.strip().lower() + + if norm in {"1", "true", "t", "yes", "y", "on"}: + return True + + if norm in {"0", "false", "f", "no", "n", "off"}: + return False + + self.fail( + _("{value!r} is not a valid boolean.").format(value=value), param, ctx + ) + + def __repr__(self) -> str: + return "BOOL" + + +class UUIDParameterType(ParamType): + name = "uuid" + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + import uuid + + if isinstance(value, uuid.UUID): + return value + + value = value.strip() + + try: + return uuid.UUID(value) + except ValueError: + self.fail( + _("{value!r} is not a valid UUID.").format(value=value), param, ctx + ) + + def __repr__(self) -> str: + return "UUID" + + +class File(ParamType): + """Declares a parameter to be a file for reading or writing. The file + is automatically closed once the context tears down (after the command + finished working). + + Files can be opened for reading or writing. The special value ``-`` + indicates stdin or stdout depending on the mode. + + By default, the file is opened for reading text data, but it can also be + opened in binary mode or for writing. The encoding parameter can be used + to force a specific encoding. + + The `lazy` flag controls if the file should be opened immediately or upon + first IO. The default is to be non-lazy for standard input and output + streams as well as files opened for reading, `lazy` otherwise. When opening a + file lazily for reading, it is still opened temporarily for validation, but + will not be held open until first IO. lazy is mainly useful when opening + for writing to avoid creating the file until it is needed. + + Files can also be opened atomically in which case all writes go into a + separate file in the same folder and upon completion the file will + be moved over to the original location. This is useful if a file + regularly read by other users is modified. + + See :ref:`file-args` for more information. + + .. versionchanged:: 2.0 + Added the ``atomic`` parameter. 
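The `_clamp` implementations above make the clamping behavior concrete: an `IntRange` with `clamp=True` snaps out-of-range values to the nearest closed bound (one step inside an open bound). A small sketch with invented option names:

```python
import click

@click.command()
@click.option("--workers", type=click.IntRange(1, 32, clamp=True), default=4)
def serve(workers):
    # --workers=99 is clamped to 32, --workers=0 is clamped to 1.
    click.echo(f"Starting {workers} worker(s)")
```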
+ """ + + name = "filename" + envvar_list_splitter: t.ClassVar[str] = os.path.pathsep + + def __init__( + self, + mode: str = "r", + encoding: t.Optional[str] = None, + errors: t.Optional[str] = "strict", + lazy: t.Optional[bool] = None, + atomic: bool = False, + ) -> None: + self.mode = mode + self.encoding = encoding + self.errors = errors + self.lazy = lazy + self.atomic = atomic + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict.update(mode=self.mode, encoding=self.encoding) + return info_dict + + def resolve_lazy_flag(self, value: "t.Union[str, os.PathLike[str]]") -> bool: + if self.lazy is not None: + return self.lazy + if os.fspath(value) == "-": + return False + elif "w" in self.mode: + return True + return False + + def convert( + self, + value: t.Union[str, "os.PathLike[str]", t.IO[t.Any]], + param: t.Optional["Parameter"], + ctx: t.Optional["Context"], + ) -> t.IO[t.Any]: + if _is_file_like(value): + return value + + value = t.cast("t.Union[str, os.PathLike[str]]", value) + + try: + lazy = self.resolve_lazy_flag(value) + + if lazy: + lf = LazyFile( + value, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + + if ctx is not None: + ctx.call_on_close(lf.close_intelligently) + + return t.cast(t.IO[t.Any], lf) + + f, should_close = open_stream( + value, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + + # If a context is provided, we automatically close the file + # at the end of the context execution (or flush out). If a + # context does not exist, it's the caller's responsibility to + # properly close the file. This for instance happens when the + # type is used with prompts. + if ctx is not None: + if should_close: + ctx.call_on_close(safecall(f.close)) + else: + ctx.call_on_close(safecall(f.flush)) + + return f + except OSError as e: + self.fail(f"'{format_filename(value)}': {e.strerror}", param, ctx) + + def shell_complete( + self, ctx: "Context", param: "Parameter", incomplete: str + ) -> t.List["CompletionItem"]: + """Return a special completion marker that tells the completion + system to use the shell to provide file path completions. + + :param ctx: Invocation context for this command. + :param param: The parameter that is requesting completion. + :param incomplete: Value being completed. May be empty. + + .. versionadded:: 8.0 + """ + from click.shell_completion import CompletionItem + + return [CompletionItem(incomplete, type="file")] + + +def _is_file_like(value: t.Any) -> "te.TypeGuard[t.IO[t.Any]]": + return hasattr(value, "read") or hasattr(value, "write") + + +class Path(ParamType): + """The ``Path`` type is similar to the :class:`File` type, but + returns the filename instead of an open file. Various checks can be + enabled to validate the type of file and permissions. + + :param exists: The file or directory needs to exist for the value to + be valid. If this is not set to ``True``, and the file does not + exist, then all further checks are silently skipped. + :param file_okay: Allow a file as a value. + :param dir_okay: Allow a directory as a value. + :param readable: if true, a readable check is performed. + :param writable: if true, a writable check is performed. + :param executable: if true, an executable check is performed. + :param resolve_path: Make the value absolute and resolve any + symlinks. A ``~`` is not expanded, as this is supposed to be + done by the shell only. + :param allow_dash: Allow a single dash as a value, which indicates + a standard stream (but does not open it). 
Use + :func:`~click.open_file` to handle opening this value. + :param path_type: Convert the incoming path value to this type. If + ``None``, keep Python's default, which is ``str``. Useful to + convert to :class:`pathlib.Path`. + + .. versionchanged:: 8.1 + Added the ``executable`` parameter. + + .. versionchanged:: 8.0 + Allow passing ``path_type=pathlib.Path``. + + .. versionchanged:: 6.0 + Added the ``allow_dash`` parameter. + """ + + envvar_list_splitter: t.ClassVar[str] = os.path.pathsep + + def __init__( + self, + exists: bool = False, + file_okay: bool = True, + dir_okay: bool = True, + writable: bool = False, + readable: bool = True, + resolve_path: bool = False, + allow_dash: bool = False, + path_type: t.Optional[t.Type[t.Any]] = None, + executable: bool = False, + ): + self.exists = exists + self.file_okay = file_okay + self.dir_okay = dir_okay + self.readable = readable + self.writable = writable + self.executable = executable + self.resolve_path = resolve_path + self.allow_dash = allow_dash + self.type = path_type + + if self.file_okay and not self.dir_okay: + self.name: str = _("file") + elif self.dir_okay and not self.file_okay: + self.name = _("directory") + else: + self.name = _("path") + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict.update( + exists=self.exists, + file_okay=self.file_okay, + dir_okay=self.dir_okay, + writable=self.writable, + readable=self.readable, + allow_dash=self.allow_dash, + ) + return info_dict + + def coerce_path_result( + self, value: "t.Union[str, os.PathLike[str]]" + ) -> "t.Union[str, bytes, os.PathLike[str]]": + if self.type is not None and not isinstance(value, self.type): + if self.type is str: + return os.fsdecode(value) + elif self.type is bytes: + return os.fsencode(value) + else: + return t.cast("os.PathLike[str]", self.type(value)) + + return value + + def convert( + self, + value: "t.Union[str, os.PathLike[str]]", + param: t.Optional["Parameter"], + ctx: t.Optional["Context"], + ) -> "t.Union[str, bytes, os.PathLike[str]]": + rv = value + + is_dash = self.file_okay and self.allow_dash and rv in (b"-", "-") + + if not is_dash: + if self.resolve_path: + # os.path.realpath doesn't resolve symlinks on Windows + # until Python 3.8. Use pathlib for now. 
+ import pathlib + + rv = os.fsdecode(pathlib.Path(rv).resolve()) + + try: + st = os.stat(rv) + except OSError: + if not self.exists: + return self.coerce_path_result(rv) + self.fail( + _("{name} {filename!r} does not exist.").format( + name=self.name.title(), filename=format_filename(value) + ), + param, + ctx, + ) + + if not self.file_okay and stat.S_ISREG(st.st_mode): + self.fail( + _("{name} {filename!r} is a file.").format( + name=self.name.title(), filename=format_filename(value) + ), + param, + ctx, + ) + if not self.dir_okay and stat.S_ISDIR(st.st_mode): + self.fail( + _("{name} {filename!r} is a directory.").format( + name=self.name.title(), filename=format_filename(value) + ), + param, + ctx, + ) + + if self.readable and not os.access(rv, os.R_OK): + self.fail( + _("{name} {filename!r} is not readable.").format( + name=self.name.title(), filename=format_filename(value) + ), + param, + ctx, + ) + + if self.writable and not os.access(rv, os.W_OK): + self.fail( + _("{name} {filename!r} is not writable.").format( + name=self.name.title(), filename=format_filename(value) + ), + param, + ctx, + ) + + if self.executable and not os.access(value, os.X_OK): + self.fail( + _("{name} {filename!r} is not executable.").format( + name=self.name.title(), filename=format_filename(value) + ), + param, + ctx, + ) + + return self.coerce_path_result(rv) + + def shell_complete( + self, ctx: "Context", param: "Parameter", incomplete: str + ) -> t.List["CompletionItem"]: + """Return a special completion marker that tells the completion + system to use the shell to provide path completions for only + directories or any paths. + + :param ctx: Invocation context for this command. + :param param: The parameter that is requesting completion. + :param incomplete: Value being completed. May be empty. + + .. versionadded:: 8.0 + """ + from click.shell_completion import CompletionItem + + type = "dir" if self.dir_okay and not self.file_okay else "file" + return [CompletionItem(incomplete, type=type)] + + +class Tuple(CompositeParamType): + """The default behavior of Click is to apply a type on a value directly. + This works well in most cases, except for when `nargs` is set to a fixed + count and different types should be used for different items. In this + case the :class:`Tuple` type can be used. This type can only be used + if `nargs` is set to a fixed number. + + For more information see :ref:`tuple-type`. + + This can be selected by using a Python tuple literal as a type. + + :param types: a list of types that should be used for the tuple items. 
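With the stat-based checks above in mind, a typical `Path` declaration validates existence up front and hands the command a `pathlib.Path` instead of a bare string. A sketch (argument name and usage are illustrative):

```python
import click
import pathlib

@click.command()
@click.argument(
    "config",
    type=click.Path(exists=True, dir_okay=False, path_type=pathlib.Path),
)
def show(config):
    # `config` arrives as a pathlib.Path that is known to exist and be a file.
    click.echo(config.read_text())
```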
+ """ + + def __init__(self, types: t.Sequence[t.Union[t.Type[t.Any], ParamType]]) -> None: + self.types: t.Sequence[ParamType] = [convert_type(ty) for ty in types] + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict["types"] = [t.to_info_dict() for t in self.types] + return info_dict + + @property + def name(self) -> str: # type: ignore + return f"<{' '.join(ty.name for ty in self.types)}>" + + @property + def arity(self) -> int: # type: ignore + return len(self.types) + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + len_type = len(self.types) + len_value = len(value) + + if len_value != len_type: + self.fail( + ngettext( + "{len_type} values are required, but {len_value} was given.", + "{len_type} values are required, but {len_value} were given.", + len_value, + ).format(len_type=len_type, len_value=len_value), + param=param, + ctx=ctx, + ) + + return tuple(ty(x, param, ctx) for ty, x in zip(self.types, value)) + + +def convert_type(ty: t.Optional[t.Any], default: t.Optional[t.Any] = None) -> ParamType: + """Find the most appropriate :class:`ParamType` for the given Python + type. If the type isn't provided, it can be inferred from a default + value. + """ + guessed_type = False + + if ty is None and default is not None: + if isinstance(default, (tuple, list)): + # If the default is empty, ty will remain None and will + # return STRING. + if default: + item = default[0] + + # A tuple of tuples needs to detect the inner types. + # Can't call convert recursively because that would + # incorrectly unwind the tuple to a single type. + if isinstance(item, (tuple, list)): + ty = tuple(map(type, item)) + else: + ty = type(item) + else: + ty = type(default) + + guessed_type = True + + if isinstance(ty, tuple): + return Tuple(ty) + + if isinstance(ty, ParamType): + return ty + + if ty is str or ty is None: + return STRING + + if ty is int: + return INT + + if ty is float: + return FLOAT + + if ty is bool: + return BOOL + + if guessed_type: + return STRING + + if __debug__: + try: + if issubclass(ty, ParamType): + raise AssertionError( + f"Attempted to use an uninstantiated parameter type ({ty})." + ) + except TypeError: + # ty is an instance (correct), so issubclass fails. + pass + + return FuncParamType(ty) + + +#: A dummy parameter type that just does nothing. From a user's +#: perspective this appears to just be the same as `STRING` but +#: internally no string conversion takes place if the input was bytes. +#: This is usually useful when working with file paths as they can +#: appear in bytes and unicode. +#: +#: For path related uses the :class:`Path` type is a better choice but +#: there are situations where an unprocessed type is useful which is why +#: it is is provided. +#: +#: .. versionadded:: 4.0 +UNPROCESSED = UnprocessedParamType() + +#: A unicode string parameter type which is the implicit default. This +#: can also be selected by using ``str`` as type. +STRING = StringParamType() + +#: An integer parameter. This can also be selected by using ``int`` as +#: type. +INT = IntParamType() + +#: A floating point value parameter. This can also be selected by using +#: ``float`` as type. +FLOAT = FloatParamType() + +#: A boolean parameter. This is the default for boolean flags. This can +#: also be selected by using ``bool`` as a type. +BOOL = BoolParamType() + +#: A UUID parameter. 
+UUID = UUIDParameterType() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/click/utils.py b/staybuddy/server/venv/lib/python3.10/site-packages/click/utils.py new file mode 100644 index 000000000..836c6f21a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/click/utils.py @@ -0,0 +1,624 @@ +import os +import re +import sys +import typing as t +from functools import update_wrapper +from types import ModuleType +from types import TracebackType + +from ._compat import _default_text_stderr +from ._compat import _default_text_stdout +from ._compat import _find_binary_writer +from ._compat import auto_wrap_for_ansi +from ._compat import binary_streams +from ._compat import open_stream +from ._compat import should_strip_ansi +from ._compat import strip_ansi +from ._compat import text_streams +from ._compat import WIN +from .globals import resolve_color_default + +if t.TYPE_CHECKING: + import typing_extensions as te + + P = te.ParamSpec("P") + +R = t.TypeVar("R") + + +def _posixify(name: str) -> str: + return "-".join(name.split()).lower() + + +def safecall(func: "t.Callable[P, R]") -> "t.Callable[P, t.Optional[R]]": + """Wraps a function so that it swallows exceptions.""" + + def wrapper(*args: "P.args", **kwargs: "P.kwargs") -> t.Optional[R]: + try: + return func(*args, **kwargs) + except Exception: + pass + return None + + return update_wrapper(wrapper, func) + + +def make_str(value: t.Any) -> str: + """Converts a value into a valid string.""" + if isinstance(value, bytes): + try: + return value.decode(sys.getfilesystemencoding()) + except UnicodeError: + return value.decode("utf-8", "replace") + return str(value) + + +def make_default_short_help(help: str, max_length: int = 45) -> str: + """Returns a condensed version of help string.""" + # Consider only the first paragraph. + paragraph_end = help.find("\n\n") + + if paragraph_end != -1: + help = help[:paragraph_end] + + # Collapse newlines, tabs, and spaces. + words = help.split() + + if not words: + return "" + + # The first paragraph started with a "no rewrap" marker, ignore it. + if words[0] == "\b": + words = words[1:] + + total_length = 0 + last_index = len(words) - 1 + + for i, word in enumerate(words): + total_length += len(word) + (i > 0) + + if total_length > max_length: # too long, truncate + break + + if word[-1] == ".": # sentence end, truncate without "..." + return " ".join(words[: i + 1]) + + if total_length == max_length and i != last_index: + break # not at sentence end, truncate with "..." + else: + return " ".join(words) # no truncation needed + + # Account for the length of the suffix. + total_length += len("...") + + # remove words until the length is short enough + while i > 0: + total_length -= len(words[i]) + (i > 0) + + if total_length <= max_length: + break + + i -= 1 + + return " ".join(words[:i]) + "..." + + +class LazyFile: + """A lazy file works like a regular file but it does not fully open + the file but it does perform some basic checks early to see if the + filename parameter does make sense. This is useful for safely opening + files for writing. 
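As `convert_type` above shows, passing a Python tuple literal as the `type` selects the composite `Tuple` type, with `nargs` implied by its arity. A short sketch (option name invented):

```python
import click

@click.command()
@click.option("--size", type=(int, int))  # becomes click.Tuple([INT, INT]); nargs is inferred
def resize(size):
    width, height = size  # both already converted to int
    click.echo(f"Resizing to {width}x{height}")
```

Invoked as `resize --size 800 600`, the option arrives as the tuple `(800, 600)`.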
+ """ + + def __init__( + self, + filename: t.Union[str, "os.PathLike[str]"], + mode: str = "r", + encoding: t.Optional[str] = None, + errors: t.Optional[str] = "strict", + atomic: bool = False, + ): + self.name: str = os.fspath(filename) + self.mode = mode + self.encoding = encoding + self.errors = errors + self.atomic = atomic + self._f: t.Optional[t.IO[t.Any]] + self.should_close: bool + + if self.name == "-": + self._f, self.should_close = open_stream(filename, mode, encoding, errors) + else: + if "r" in mode: + # Open and close the file in case we're opening it for + # reading so that we can catch at least some errors in + # some cases early. + open(filename, mode).close() + self._f = None + self.should_close = True + + def __getattr__(self, name: str) -> t.Any: + return getattr(self.open(), name) + + def __repr__(self) -> str: + if self._f is not None: + return repr(self._f) + return f"" + + def open(self) -> t.IO[t.Any]: + """Opens the file if it's not yet open. This call might fail with + a :exc:`FileError`. Not handling this error will produce an error + that Click shows. + """ + if self._f is not None: + return self._f + try: + rv, self.should_close = open_stream( + self.name, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + except OSError as e: + from .exceptions import FileError + + raise FileError(self.name, hint=e.strerror) from e + self._f = rv + return rv + + def close(self) -> None: + """Closes the underlying file, no matter what.""" + if self._f is not None: + self._f.close() + + def close_intelligently(self) -> None: + """This function only closes the file if it was opened by the lazy + file wrapper. For instance this will never close stdin. + """ + if self.should_close: + self.close() + + def __enter__(self) -> "LazyFile": + return self + + def __exit__( + self, + exc_type: t.Optional[t.Type[BaseException]], + exc_value: t.Optional[BaseException], + tb: t.Optional[TracebackType], + ) -> None: + self.close_intelligently() + + def __iter__(self) -> t.Iterator[t.AnyStr]: + self.open() + return iter(self._f) # type: ignore + + +class KeepOpenFile: + def __init__(self, file: t.IO[t.Any]) -> None: + self._file: t.IO[t.Any] = file + + def __getattr__(self, name: str) -> t.Any: + return getattr(self._file, name) + + def __enter__(self) -> "KeepOpenFile": + return self + + def __exit__( + self, + exc_type: t.Optional[t.Type[BaseException]], + exc_value: t.Optional[BaseException], + tb: t.Optional[TracebackType], + ) -> None: + pass + + def __repr__(self) -> str: + return repr(self._file) + + def __iter__(self) -> t.Iterator[t.AnyStr]: + return iter(self._file) + + +def echo( + message: t.Optional[t.Any] = None, + file: t.Optional[t.IO[t.Any]] = None, + nl: bool = True, + err: bool = False, + color: t.Optional[bool] = None, +) -> None: + """Print a message and newline to stdout or a file. This should be + used instead of :func:`print` because it provides better support + for different data, files, and environments. + + Compared to :func:`print`, this does the following: + + - Ensures that the output encoding is not misconfigured on Linux. + - Supports Unicode in the Windows console. + - Supports writing to binary outputs, and supports writing bytes + to text outputs. + - Supports colors and styles on Windows. + - Removes ANSI color and style codes if the output does not look + like an interactive terminal. + - Always flushes the output. + + :param message: The string or bytes to output. Other objects are + converted to strings. + :param file: The file to write to. 
Defaults to ``stdout``. + :param err: Write to ``stderr`` instead of ``stdout``. + :param nl: Print a newline after the message. Enabled by default. + :param color: Force showing or hiding colors and other styles. By + default Click will remove color if the output does not look like + an interactive terminal. + + .. versionchanged:: 6.0 + Support Unicode output on the Windows console. Click does not + modify ``sys.stdout``, so ``sys.stdout.write()`` and ``print()`` + will still not support Unicode. + + .. versionchanged:: 4.0 + Added the ``color`` parameter. + + .. versionadded:: 3.0 + Added the ``err`` parameter. + + .. versionchanged:: 2.0 + Support colors on Windows if colorama is installed. + """ + if file is None: + if err: + file = _default_text_stderr() + else: + file = _default_text_stdout() + + # There are no standard streams attached to write to. For example, + # pythonw on Windows. + if file is None: + return + + # Convert non bytes/text into the native string type. + if message is not None and not isinstance(message, (str, bytes, bytearray)): + out: t.Optional[t.Union[str, bytes]] = str(message) + else: + out = message + + if nl: + out = out or "" + if isinstance(out, str): + out += "\n" + else: + out += b"\n" + + if not out: + file.flush() + return + + # If there is a message and the value looks like bytes, we manually + # need to find the binary stream and write the message in there. + # This is done separately so that most stream types will work as you + # would expect. Eg: you can write to StringIO for other cases. + if isinstance(out, (bytes, bytearray)): + binary_file = _find_binary_writer(file) + + if binary_file is not None: + file.flush() + binary_file.write(out) + binary_file.flush() + return + + # ANSI style code support. For no message or bytes, nothing happens. + # When outputting to a file instead of a terminal, strip codes. + else: + color = resolve_color_default(color) + + if should_strip_ansi(file, color): + out = strip_ansi(out) + elif WIN: + if auto_wrap_for_ansi is not None: + file = auto_wrap_for_ansi(file, color) # type: ignore + elif not color: + out = strip_ansi(out) + + file.write(out) # type: ignore + file.flush() + + +def get_binary_stream(name: "te.Literal['stdin', 'stdout', 'stderr']") -> t.BinaryIO: + """Returns a system stream for byte processing. + + :param name: the name of the stream to open. Valid names are ``'stdin'``, + ``'stdout'`` and ``'stderr'`` + """ + opener = binary_streams.get(name) + if opener is None: + raise TypeError(f"Unknown standard stream '{name}'") + return opener() + + +def get_text_stream( + name: "te.Literal['stdin', 'stdout', 'stderr']", + encoding: t.Optional[str] = None, + errors: t.Optional[str] = "strict", +) -> t.TextIO: + """Returns a system stream for text processing. This usually returns + a wrapped stream around a binary stream returned from + :func:`get_binary_stream` but it also can take shortcuts for already + correctly configured streams. + + :param name: the name of the stream to open. Valid names are ``'stdin'``, + ``'stdout'`` and ``'stderr'`` + :param encoding: overrides the detected default encoding. + :param errors: overrides the default error mode. 
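The branches in `echo` above (binary-writer lookup, ANSI stripping, forced flush) are what make it safe for mixed output. A brief sketch of the behaviors it handles, with invented messages:

```python
import click

click.echo("plain text")                                    # str to stdout
click.echo(b"raw bytes\n", nl=False)                        # routed to the binary stream
click.echo(click.style("careful!", fg="yellow"), err=True)  # styled, to stderr
# When stdout is a pipe rather than a terminal, the ANSI codes are stripped.
```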
+ """ + opener = text_streams.get(name) + if opener is None: + raise TypeError(f"Unknown standard stream '{name}'") + return opener(encoding, errors) + + +def open_file( + filename: t.Union[str, "os.PathLike[str]"], + mode: str = "r", + encoding: t.Optional[str] = None, + errors: t.Optional[str] = "strict", + lazy: bool = False, + atomic: bool = False, +) -> t.IO[t.Any]: + """Open a file, with extra behavior to handle ``'-'`` to indicate + a standard stream, lazy open on write, and atomic write. Similar to + the behavior of the :class:`~click.File` param type. + + If ``'-'`` is given to open ``stdout`` or ``stdin``, the stream is + wrapped so that using it in a context manager will not close it. + This makes it possible to use the function without accidentally + closing a standard stream: + + .. code-block:: python + + with open_file(filename) as f: + ... + + :param filename: The name or Path of the file to open, or ``'-'`` for + ``stdin``/``stdout``. + :param mode: The mode in which to open the file. + :param encoding: The encoding to decode or encode a file opened in + text mode. + :param errors: The error handling mode. + :param lazy: Wait to open the file until it is accessed. For read + mode, the file is temporarily opened to raise access errors + early, then closed until it is read again. + :param atomic: Write to a temporary file and replace the given file + on close. + + .. versionadded:: 3.0 + """ + if lazy: + return t.cast( + t.IO[t.Any], LazyFile(filename, mode, encoding, errors, atomic=atomic) + ) + + f, should_close = open_stream(filename, mode, encoding, errors, atomic=atomic) + + if not should_close: + f = t.cast(t.IO[t.Any], KeepOpenFile(f)) + + return f + + +def format_filename( + filename: "t.Union[str, bytes, os.PathLike[str], os.PathLike[bytes]]", + shorten: bool = False, +) -> str: + """Format a filename as a string for display. Ensures the filename can be + displayed by replacing any invalid bytes or surrogate escapes in the name + with the replacement character ``�``. + + Invalid bytes or surrogate escapes will raise an error when written to a + stream with ``errors="strict"``. This will typically happen with ``stdout`` + when the locale is something like ``en_GB.UTF-8``. + + Many scenarios *are* safe to write surrogates though, due to PEP 538 and + PEP 540, including: + + - Writing to ``stderr``, which uses ``errors="backslashreplace"``. + - The system has ``LANG=C.UTF-8``, ``C``, or ``POSIX``. Python opens + stdout and stderr with ``errors="surrogateescape"``. + - None of ``LANG/LC_*`` are set. Python assumes ``LANG=C.UTF-8``. + - Python is started in UTF-8 mode with ``PYTHONUTF8=1`` or ``-X utf8``. + Python opens stdout and stderr with ``errors="surrogateescape"``. + + :param filename: formats a filename for UI display. This will also convert + the filename into unicode without failing. + :param shorten: this optionally shortens the filename to strip of the + path that leads up to it. + """ + if shorten: + filename = os.path.basename(filename) + else: + filename = os.fspath(filename) + + if isinstance(filename, bytes): + filename = filename.decode(sys.getfilesystemencoding(), "replace") + else: + filename = filename.encode("utf-8", "surrogateescape").decode( + "utf-8", "replace" + ) + + return filename + + +def get_app_dir(app_name: str, roaming: bool = True, force_posix: bool = False) -> str: + r"""Returns the config folder for the application. The default behavior + is to return whatever is most appropriate for the operating system. 
+
+    To give you an idea, for an app called ``"Foo Bar"``, something like
+    the following folders could be returned:
+
+    Mac OS X:
+      ``~/Library/Application Support/Foo Bar``
+    Mac OS X (POSIX):
+      ``~/.foo-bar``
+    Unix:
+      ``~/.config/foo-bar``
+    Unix (POSIX):
+      ``~/.foo-bar``
+    Windows (roaming):
+      ``C:\Users\<user>\AppData\Roaming\Foo Bar``
+    Windows (not roaming):
+      ``C:\Users\<user>\AppData\Local\Foo Bar``
+
+    .. versionadded:: 2.0
+
+    :param app_name: the application name. This should be properly capitalized
+        and can contain whitespace.
+    :param roaming: controls if the folder should be roaming or not on Windows.
+        Has no effect otherwise.
+    :param force_posix: if this is set to `True` then on any POSIX system the
+        folder will be stored in the home folder with a leading
+        dot instead of the XDG config home or darwin's
+        application support folder.
+    """
+    if WIN:
+        key = "APPDATA" if roaming else "LOCALAPPDATA"
+        folder = os.environ.get(key)
+        if folder is None:
+            folder = os.path.expanduser("~")
+        return os.path.join(folder, app_name)
+    if force_posix:
+        return os.path.join(os.path.expanduser(f"~/.{_posixify(app_name)}"))
+    if sys.platform == "darwin":
+        return os.path.join(
+            os.path.expanduser("~/Library/Application Support"), app_name
+        )
+    return os.path.join(
+        os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")),
+        _posixify(app_name),
+    )
+
+
+class PacifyFlushWrapper:
+    """This wrapper is used to catch and suppress BrokenPipeErrors resulting
+    from ``.flush()`` being called on broken pipe during the shutdown/final-GC
+    of the Python interpreter. Notably ``.flush()`` is always called on
+    ``sys.stdout`` and ``sys.stderr``. So as to have minimal impact on any
+    other cleanup code, and the case where the underlying file is not a broken
+    pipe, all calls and attributes are proxied.
+    """
+
+    def __init__(self, wrapped: t.IO[t.Any]) -> None:
+        self.wrapped = wrapped
+
+    def flush(self) -> None:
+        try:
+            self.wrapped.flush()
+        except OSError as e:
+            import errno
+
+            if e.errno != errno.EPIPE:
+                raise
+
+    def __getattr__(self, attr: str) -> t.Any:
+        return getattr(self.wrapped, attr)
+
+
+def _detect_program_name(
+    path: t.Optional[str] = None, _main: t.Optional[ModuleType] = None
+) -> str:
+    """Determine the command used to run the program, for use in help
+    text. If a file or entry point was executed, the file name is
+    returned. If ``python -m`` was used to execute a module or package,
+    ``python -m name`` is returned.
+
+    This doesn't try to be too precise, the goal is to give a concise
+    name for help text. Files are only shown as their name without the
+    path. ``python`` is only shown for modules, and the full path to
+    ``sys.executable`` is not shown.
+
+    :param path: The Python file being executed. Python puts this in
+        ``sys.argv[0]``, which is used by default.
+    :param _main: The ``__main__`` module. This should only be passed
+        during internal testing.
+
+    .. versionadded:: 8.0
+        Based on command args detection in the Werkzeug reloader.
+
+    :meta private:
+    """
+    if _main is None:
+        _main = sys.modules["__main__"]
+
+    if not path:
+        path = sys.argv[0]
+
+    # The value of __package__ indicates how Python was called. It may
+    # not exist if a setuptools script is installed as an egg. It may be
+    # set incorrectly for entry points created with pip on Windows.
+    # It is set to "" inside a Shiv or PEX zipapp.
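+    # Illustrative examples (added commentary, not upstream comments):
+    #   "python app.py"         -> __package__ is None or ""  -> "app.py"
+    #   "python -m example"     -> argv[0] is .../example/__main__.py -> "python -m example"
+    #   "python -m example.cli" -> argv[0] is .../example/cli.py     -> "python -m example.cli"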
+ if getattr(_main, "__package__", None) in {None, ""} or ( + os.name == "nt" + and _main.__package__ == "" + and not os.path.exists(path) + and os.path.exists(f"{path}.exe") + ): + # Executed a file, like "python app.py". + return os.path.basename(path) + + # Executed a module, like "python -m example". + # Rewritten by Python from "-m script" to "/path/to/script.py". + # Need to look at main module to determine how it was executed. + py_module = t.cast(str, _main.__package__) + name = os.path.splitext(os.path.basename(path))[0] + + # A submodule like "example.cli". + if name != "__main__": + py_module = f"{py_module}.{name}" + + return f"python -m {py_module.lstrip('.')}" + + +def _expand_args( + args: t.Iterable[str], + *, + user: bool = True, + env: bool = True, + glob_recursive: bool = True, +) -> t.List[str]: + """Simulate Unix shell expansion with Python functions. + + See :func:`glob.glob`, :func:`os.path.expanduser`, and + :func:`os.path.expandvars`. + + This is intended for use on Windows, where the shell does not do any + expansion. It may not exactly match what a Unix shell would do. + + :param args: List of command line arguments to expand. + :param user: Expand user home directory. + :param env: Expand environment variables. + :param glob_recursive: ``**`` matches directories recursively. + + .. versionchanged:: 8.1 + Invalid glob patterns are treated as empty expansions rather + than raising an error. + + .. versionadded:: 8.0 + + :meta private: + """ + from glob import glob + + out = [] + + for arg in args: + if user: + arg = os.path.expanduser(arg) + + if env: + arg = os.path.expandvars(arg) + + try: + matches = glob(arg, recursive=glob_recursive) + except re.error: + matches = [] + + if not matches: + out.append(arg) + else: + out.extend(matches) + + return out diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/distutils-precedence.pth b/staybuddy/server/venv/lib/python3.10/site-packages/distutils-precedence.pth new file mode 100644 index 000000000..7f009fe9b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/distutils-precedence.pth @@ -0,0 +1 @@ +import os; var = 'SETUPTOOLS_USE_DISTUTILS'; enabled = os.environ.get(var, 'local') == 'local'; enabled and __import__('_distutils_hack').add_shim(); diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/LICENSE.txt b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/LICENSE.txt new file mode 100644 index 000000000..9d227a0cc --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/LICENSE.txt @@ -0,0 +1,28 @@ +Copyright 2010 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. 
Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/METADATA new file mode 100644 index 000000000..5a0210724 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/METADATA @@ -0,0 +1,101 @@ +Metadata-Version: 2.1 +Name: Flask +Version: 3.0.3 +Summary: A simple framework for building complex web applications. +Maintainer-email: Pallets +Requires-Python: >=3.8 +Description-Content-Type: text/markdown +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Web Environment +Classifier: Framework :: Flask +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content +Classifier: Topic :: Internet :: WWW/HTTP :: WSGI +Classifier: Topic :: Internet :: WWW/HTTP :: WSGI :: Application +Classifier: Topic :: Software Development :: Libraries :: Application Frameworks +Classifier: Typing :: Typed +Requires-Dist: Werkzeug>=3.0.0 +Requires-Dist: Jinja2>=3.1.2 +Requires-Dist: itsdangerous>=2.1.2 +Requires-Dist: click>=8.1.3 +Requires-Dist: blinker>=1.6.2 +Requires-Dist: importlib-metadata>=3.6.0; python_version < '3.10' +Requires-Dist: asgiref>=3.2 ; extra == "async" +Requires-Dist: python-dotenv ; extra == "dotenv" +Project-URL: Changes, https://flask.palletsprojects.com/changes/ +Project-URL: Chat, https://discord.gg/pallets +Project-URL: Documentation, https://flask.palletsprojects.com/ +Project-URL: Donate, https://palletsprojects.com/donate +Project-URL: Source, https://github.com/pallets/flask/ +Provides-Extra: async +Provides-Extra: dotenv + +# Flask + +Flask is a lightweight [WSGI][] web application framework. It is designed +to make getting started quick and easy, with the ability to scale up to +complex applications. It began as a simple wrapper around [Werkzeug][] +and [Jinja][], and has become one of the most popular Python web +application frameworks. + +Flask offers suggestions, but doesn't enforce any dependencies or +project layout. It is up to the developer to choose the tools and +libraries they want to use. There are many extensions provided by the +community that make adding new functionality easy. 
+ +[WSGI]: https://wsgi.readthedocs.io/ +[Werkzeug]: https://werkzeug.palletsprojects.com/ +[Jinja]: https://jinja.palletsprojects.com/ + + +## Installing + +Install and update from [PyPI][] using an installer such as [pip][]: + +``` +$ pip install -U Flask +``` + +[PyPI]: https://pypi.org/project/Flask/ +[pip]: https://pip.pypa.io/en/stable/getting-started/ + + +## A Simple Example + +```python +# save this as app.py +from flask import Flask + +app = Flask(__name__) + +@app.route("/") +def hello(): + return "Hello, World!" +``` + +``` +$ flask run + * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) +``` + + +## Contributing + +For guidance on setting up a development environment and how to make a +contribution to Flask, see the [contributing guidelines][]. + +[contributing guidelines]: https://github.com/pallets/flask/blob/main/CONTRIBUTING.rst + + +## Donate + +The Pallets organization develops and supports Flask and the libraries +it uses. In order to grow the community of contributors and users, and +allow the maintainers to devote more time to the projects, [please +donate today][]. + +[please donate today]: https://palletsprojects.com/donate + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/RECORD new file mode 100644 index 000000000..94bf46d90 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/RECORD @@ -0,0 +1,58 @@ +../../../bin/flask,sha256=y09hVfd8RKlbsTeSCO6_dCppzsGITGYHeketHAEKY6A,288 +flask-3.0.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +flask-3.0.3.dist-info/LICENSE.txt,sha256=SJqOEQhQntmKN7uYPhHg9-HTHwvY-Zp5yESOf_N9B-o,1475 +flask-3.0.3.dist-info/METADATA,sha256=exPahy4aahjV-mYqd9qb5HNP8haB_IxTuaotoSvCtag,3177 +flask-3.0.3.dist-info/RECORD,, +flask-3.0.3.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +flask-3.0.3.dist-info/WHEEL,sha256=EZbGkh7Ie4PoZfRQ8I0ZuP9VklN_TvcZ6DSE5Uar4z4,81 +flask-3.0.3.dist-info/entry_points.txt,sha256=bBP7hTOS5fz9zLtC7sPofBZAlMkEvBxu7KqS6l5lvc4,40 +flask/__init__.py,sha256=6xMqdVA0FIQ2U1KVaGX3lzNCdXPzoHPaa0hvQCNcfSk,2625 +flask/__main__.py,sha256=bYt9eEaoRQWdejEHFD8REx9jxVEdZptECFsV7F49Ink,30 +flask/__pycache__/__init__.cpython-310.pyc,, +flask/__pycache__/__main__.cpython-310.pyc,, +flask/__pycache__/app.cpython-310.pyc,, +flask/__pycache__/blueprints.cpython-310.pyc,, +flask/__pycache__/cli.cpython-310.pyc,, +flask/__pycache__/config.cpython-310.pyc,, +flask/__pycache__/ctx.cpython-310.pyc,, +flask/__pycache__/debughelpers.cpython-310.pyc,, +flask/__pycache__/globals.cpython-310.pyc,, +flask/__pycache__/helpers.cpython-310.pyc,, +flask/__pycache__/logging.cpython-310.pyc,, +flask/__pycache__/sessions.cpython-310.pyc,, +flask/__pycache__/signals.cpython-310.pyc,, +flask/__pycache__/templating.cpython-310.pyc,, +flask/__pycache__/testing.cpython-310.pyc,, +flask/__pycache__/typing.cpython-310.pyc,, +flask/__pycache__/views.cpython-310.pyc,, +flask/__pycache__/wrappers.cpython-310.pyc,, +flask/app.py,sha256=7-lh6cIj27riTE1Q18Ok1p5nOZ8qYiMux4Btc6o6mNc,60143 +flask/blueprints.py,sha256=7INXPwTkUxfOQXOOv1yu52NpHPmPGI5fMTMFZ-BG9yY,4430 +flask/cli.py,sha256=OOaf_Efqih1i2in58j-5ZZZmQnPpaSfiUFbEjlL9bzw,35825 +flask/config.py,sha256=bLzLVAj-cq-Xotu9erqOFte0xSFaVXyfz0AkP4GbwmY,13312 +flask/ctx.py,sha256=4atDhJJ_cpV1VMq4qsfU4E_61M1oN93jlS2H9gjrl58,15120 +flask/debughelpers.py,sha256=PGIDhStW_efRjpaa3zHIpo-htStJOR41Ip3OJWPYBwo,6080 
+flask/globals.py,sha256=XdQZmStBmPIs8t93tjx6pO7Bm3gobAaONWkFcUHaGas,1713 +flask/helpers.py,sha256=tYrcQ_73GuSZVEgwFr-eMmV69UriFQDBmt8wZJIAqvg,23084 +flask/json/__init__.py,sha256=hLNR898paqoefdeAhraa5wyJy-bmRB2k2dV4EgVy2Z8,5602 +flask/json/__pycache__/__init__.cpython-310.pyc,, +flask/json/__pycache__/provider.cpython-310.pyc,, +flask/json/__pycache__/tag.cpython-310.pyc,, +flask/json/provider.py,sha256=q6iB83lSiopy80DZPrU-9mGcWwrD0mvLjiv9fHrRZgc,7646 +flask/json/tag.py,sha256=DhaNwuIOhdt2R74oOC9Y4Z8ZprxFYiRb5dUP5byyINw,9281 +flask/logging.py,sha256=8sM3WMTubi1cBb2c_lPkWpN0J8dMAqrgKRYLLi1dCVI,2377 +flask/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +flask/sansio/README.md,sha256=-0X1tECnilmz1cogx-YhNw5d7guK7GKrq_DEV2OzlU0,228 +flask/sansio/__pycache__/app.cpython-310.pyc,, +flask/sansio/__pycache__/blueprints.cpython-310.pyc,, +flask/sansio/__pycache__/scaffold.cpython-310.pyc,, +flask/sansio/app.py,sha256=YG5Gf7JVf1c0yccWDZ86q5VSfJUidOVp27HFxFNxC7U,38053 +flask/sansio/blueprints.py,sha256=Tqe-7EkZ-tbWchm8iDoCfD848f0_3nLv6NNjeIPvHwM,24637 +flask/sansio/scaffold.py,sha256=WLV9TRQMMhGlXz-1OKtQ3lv6mtIBQZxdW2HezYrGxoI,30633 +flask/sessions.py,sha256=RU4lzm9MQW9CtH8rVLRTDm8USMJyT4LbvYe7sxM2__k,14807 +flask/signals.py,sha256=V7lMUww7CqgJ2ThUBn1PiatZtQanOyt7OZpu2GZI-34,750 +flask/templating.py,sha256=2TcXLT85Asflm2W9WOSFxKCmYn5e49w_Jkg9-NaaJWo,7537 +flask/testing.py,sha256=3BFXb3bP7R5r-XLBuobhczbxDu8-1LWRzYuhbr-lwaE,10163 +flask/typing.py,sha256=ZavK-wV28Yv8CQB7u73qZp_jLalpbWdrXS37QR1ftN0,3190 +flask/views.py,sha256=B66bTvYBBcHMYk4dA1ScZD0oTRTBl0I5smp1lRm9riI,6939 +flask/wrappers.py,sha256=m1j5tIJxIu8_sPPgTAB_G4TTh52Q-HoDuw_qHV5J59g,5831 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/REQUESTED b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/REQUESTED new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/WHEEL new file mode 100644 index 000000000..3b5e64b5e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: flit 3.9.0 +Root-Is-Purelib: true +Tag: py3-none-any diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/entry_points.txt b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/entry_points.txt new file mode 100644 index 000000000..eec6733e5 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask-3.0.3.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[console_scripts] +flask=flask.cli:main + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/__init__.py new file mode 100644 index 000000000..e86eb43ee --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/__init__.py @@ -0,0 +1,60 @@ +from __future__ import annotations + +import typing as t + +from . 
import json as json +from .app import Flask as Flask +from .blueprints import Blueprint as Blueprint +from .config import Config as Config +from .ctx import after_this_request as after_this_request +from .ctx import copy_current_request_context as copy_current_request_context +from .ctx import has_app_context as has_app_context +from .ctx import has_request_context as has_request_context +from .globals import current_app as current_app +from .globals import g as g +from .globals import request as request +from .globals import session as session +from .helpers import abort as abort +from .helpers import flash as flash +from .helpers import get_flashed_messages as get_flashed_messages +from .helpers import get_template_attribute as get_template_attribute +from .helpers import make_response as make_response +from .helpers import redirect as redirect +from .helpers import send_file as send_file +from .helpers import send_from_directory as send_from_directory +from .helpers import stream_with_context as stream_with_context +from .helpers import url_for as url_for +from .json import jsonify as jsonify +from .signals import appcontext_popped as appcontext_popped +from .signals import appcontext_pushed as appcontext_pushed +from .signals import appcontext_tearing_down as appcontext_tearing_down +from .signals import before_render_template as before_render_template +from .signals import got_request_exception as got_request_exception +from .signals import message_flashed as message_flashed +from .signals import request_finished as request_finished +from .signals import request_started as request_started +from .signals import request_tearing_down as request_tearing_down +from .signals import template_rendered as template_rendered +from .templating import render_template as render_template +from .templating import render_template_string as render_template_string +from .templating import stream_template as stream_template +from .templating import stream_template_string as stream_template_string +from .wrappers import Request as Request +from .wrappers import Response as Response + + +def __getattr__(name: str) -> t.Any: + if name == "__version__": + import importlib.metadata + import warnings + + warnings.warn( + "The '__version__' attribute is deprecated and will be removed in" + " Flask 3.1. 
Use feature detection or" + " 'importlib.metadata.version(\"flask\")' instead.", + DeprecationWarning, + stacklevel=2, + ) + return importlib.metadata.version("flask") + + raise AttributeError(name) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/__main__.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/__main__.py new file mode 100644 index 000000000..4e28416e1 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/__main__.py @@ -0,0 +1,3 @@ +from .cli import main + +main() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/app.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/app.py new file mode 100644 index 000000000..7622b5e83 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/app.py @@ -0,0 +1,1498 @@ +from __future__ import annotations + +import collections.abc as cabc +import os +import sys +import typing as t +import weakref +from datetime import timedelta +from inspect import iscoroutinefunction +from itertools import chain +from types import TracebackType +from urllib.parse import quote as _url_quote + +import click +from werkzeug.datastructures import Headers +from werkzeug.datastructures import ImmutableDict +from werkzeug.exceptions import BadRequestKeyError +from werkzeug.exceptions import HTTPException +from werkzeug.exceptions import InternalServerError +from werkzeug.routing import BuildError +from werkzeug.routing import MapAdapter +from werkzeug.routing import RequestRedirect +from werkzeug.routing import RoutingException +from werkzeug.routing import Rule +from werkzeug.serving import is_running_from_reloader +from werkzeug.wrappers import Response as BaseResponse + +from . import cli +from . import typing as ft +from .ctx import AppContext +from .ctx import RequestContext +from .globals import _cv_app +from .globals import _cv_request +from .globals import current_app +from .globals import g +from .globals import request +from .globals import request_ctx +from .globals import session +from .helpers import get_debug_flag +from .helpers import get_flashed_messages +from .helpers import get_load_dotenv +from .helpers import send_from_directory +from .sansio.app import App +from .sansio.scaffold import _sentinel +from .sessions import SecureCookieSessionInterface +from .sessions import SessionInterface +from .signals import appcontext_tearing_down +from .signals import got_request_exception +from .signals import request_finished +from .signals import request_started +from .signals import request_tearing_down +from .templating import Environment +from .wrappers import Request +from .wrappers import Response + +if t.TYPE_CHECKING: # pragma: no cover + from _typeshed.wsgi import StartResponse + from _typeshed.wsgi import WSGIEnvironment + + from .testing import FlaskClient + from .testing import FlaskCliRunner + +T_shell_context_processor = t.TypeVar( + "T_shell_context_processor", bound=ft.ShellContextProcessorCallable +) +T_teardown = t.TypeVar("T_teardown", bound=ft.TeardownCallable) +T_template_filter = t.TypeVar("T_template_filter", bound=ft.TemplateFilterCallable) +T_template_global = t.TypeVar("T_template_global", bound=ft.TemplateGlobalCallable) +T_template_test = t.TypeVar("T_template_test", bound=ft.TemplateTestCallable) + + +def _make_timedelta(value: timedelta | int | None) -> timedelta | None: + if value is None or isinstance(value, timedelta): + return value + + return timedelta(seconds=value) + + +class Flask(App): + """The flask object implements 
a WSGI application and acts as the central + object. It is passed the name of the module or package of the + application. Once it is created it will act as a central registry for + the view functions, the URL rules, template configuration and much more. + + The name of the package is used to resolve resources from inside the + package or the folder the module is contained in depending on if the + package parameter resolves to an actual python package (a folder with + an :file:`__init__.py` file inside) or a standard module (just a ``.py`` file). + + For more information about resource loading, see :func:`open_resource`. + + Usually you create a :class:`Flask` instance in your main module or + in the :file:`__init__.py` file of your package like this:: + + from flask import Flask + app = Flask(__name__) + + .. admonition:: About the First Parameter + + The idea of the first parameter is to give Flask an idea of what + belongs to your application. This name is used to find resources + on the filesystem, can be used by extensions to improve debugging + information and a lot more. + + So it's important what you provide there. If you are using a single + module, `__name__` is always the correct value. If you however are + using a package, it's usually recommended to hardcode the name of + your package there. + + For example if your application is defined in :file:`yourapplication/app.py` + you should create it with one of the two versions below:: + + app = Flask('yourapplication') + app = Flask(__name__.split('.')[0]) + + Why is that? The application will work even with `__name__`, thanks + to how resources are looked up. However it will make debugging more + painful. Certain extensions can make assumptions based on the + import name of your application. For example the Flask-SQLAlchemy + extension will look for the code in your application that triggered + an SQL query in debug mode. If the import name is not properly set + up, that debugging information is lost. (For example it would only + pick up SQL queries in `yourapplication.app` and not + `yourapplication.views.frontend`) + + .. versionadded:: 0.7 + The `static_url_path`, `static_folder`, and `template_folder` + parameters were added. + + .. versionadded:: 0.8 + The `instance_path` and `instance_relative_config` parameters were + added. + + .. versionadded:: 0.11 + The `root_path` parameter was added. + + .. versionadded:: 1.0 + The ``host_matching`` and ``static_host`` parameters were added. + + .. versionadded:: 1.0 + The ``subdomain_matching`` parameter was added. Subdomain + matching needs to be enabled manually now. Setting + :data:`SERVER_NAME` does not implicitly enable it. + + :param import_name: the name of the application package + :param static_url_path: can be used to specify a different path for the + static files on the web. Defaults to the name + of the `static_folder` folder. + :param static_folder: The folder with static files that is served at + ``static_url_path``. Relative to the application ``root_path`` + or an absolute path. Defaults to ``'static'``. + :param static_host: the host to use when adding the static route. + Defaults to None. Required when using ``host_matching=True`` + with a ``static_folder`` configured. + :param host_matching: set ``url_map.host_matching`` attribute. + Defaults to False. + :param subdomain_matching: consider the subdomain relative to + :data:`SERVER_NAME` when matching routes. Defaults to False. 
+ :param template_folder: the folder that contains the templates that should + be used by the application. Defaults to + ``'templates'`` folder in the root path of the + application. + :param instance_path: An alternative instance path for the application. + By default the folder ``'instance'`` next to the + package or module is assumed to be the instance + path. + :param instance_relative_config: if set to ``True`` relative filenames + for loading the config are assumed to + be relative to the instance path instead + of the application root. + :param root_path: The path to the root of the application files. + This should only be set manually when it can't be detected + automatically, such as for namespace packages. + """ + + default_config = ImmutableDict( + { + "DEBUG": None, + "TESTING": False, + "PROPAGATE_EXCEPTIONS": None, + "SECRET_KEY": None, + "PERMANENT_SESSION_LIFETIME": timedelta(days=31), + "USE_X_SENDFILE": False, + "SERVER_NAME": None, + "APPLICATION_ROOT": "/", + "SESSION_COOKIE_NAME": "session", + "SESSION_COOKIE_DOMAIN": None, + "SESSION_COOKIE_PATH": None, + "SESSION_COOKIE_HTTPONLY": True, + "SESSION_COOKIE_SECURE": False, + "SESSION_COOKIE_SAMESITE": None, + "SESSION_REFRESH_EACH_REQUEST": True, + "MAX_CONTENT_LENGTH": None, + "SEND_FILE_MAX_AGE_DEFAULT": None, + "TRAP_BAD_REQUEST_ERRORS": None, + "TRAP_HTTP_EXCEPTIONS": False, + "EXPLAIN_TEMPLATE_LOADING": False, + "PREFERRED_URL_SCHEME": "http", + "TEMPLATES_AUTO_RELOAD": None, + "MAX_COOKIE_SIZE": 4093, + } + ) + + #: The class that is used for request objects. See :class:`~flask.Request` + #: for more information. + request_class: type[Request] = Request + + #: The class that is used for response objects. See + #: :class:`~flask.Response` for more information. + response_class: type[Response] = Response + + #: the session interface to use. By default an instance of + #: :class:`~flask.sessions.SecureCookieSessionInterface` is used here. + #: + #: .. versionadded:: 0.8 + session_interface: SessionInterface = SecureCookieSessionInterface() + + def __init__( + self, + import_name: str, + static_url_path: str | None = None, + static_folder: str | os.PathLike[str] | None = "static", + static_host: str | None = None, + host_matching: bool = False, + subdomain_matching: bool = False, + template_folder: str | os.PathLike[str] | None = "templates", + instance_path: str | None = None, + instance_relative_config: bool = False, + root_path: str | None = None, + ): + super().__init__( + import_name=import_name, + static_url_path=static_url_path, + static_folder=static_folder, + static_host=static_host, + host_matching=host_matching, + subdomain_matching=subdomain_matching, + template_folder=template_folder, + instance_path=instance_path, + instance_relative_config=instance_relative_config, + root_path=root_path, + ) + + #: The Click command group for registering CLI commands for this + #: object. The commands are available from the ``flask`` command + #: once the application has been discovered and blueprints have + #: been registered. + self.cli = cli.AppGroup() + + # Set the name of the Click group in case someone wants to add + # the app's commands to another CLI tool. + self.cli.name = self.name + + # Add a static route using the provided static_url_path, static_host, + # and static_folder if there is a configured static_folder. + # Note we do this without checking if static_folder exists. + # For one, it might be created while the server is running (e.g. during + # development). 
Also, Google App Engine stores static files somewhere
+        if self.has_static_folder:
+            assert (
+                bool(static_host) == host_matching
+            ), "Invalid static_host/host_matching combination"
+            # Use a weakref to avoid creating a reference cycle between the app
+            # and the view function (see #3761).
+            self_ref = weakref.ref(self)
+            self.add_url_rule(
+                f"{self.static_url_path}/<path:filename>",
+                endpoint="static",
+                host=static_host,
+                view_func=lambda **kw: self_ref().send_static_file(**kw),  # type: ignore # noqa: B950
+            )
+
+    def get_send_file_max_age(self, filename: str | None) -> int | None:
+        """Used by :func:`send_file` to determine the ``max_age`` cache
+        value for a given file path if it wasn't passed.
+
+        By default, this returns :data:`SEND_FILE_MAX_AGE_DEFAULT` from
+        the configuration of :data:`~flask.current_app`. This defaults
+        to ``None``, which tells the browser to use conditional requests
+        instead of a timed cache, which is usually preferable.
+
+        Note this is a duplicate of the same method in the Flask
+        class.
+
+        .. versionchanged:: 2.0
+            The default configuration is ``None`` instead of 12 hours.
+
+        .. versionadded:: 0.9
+        """
+        value = current_app.config["SEND_FILE_MAX_AGE_DEFAULT"]
+
+        if value is None:
+            return None
+
+        if isinstance(value, timedelta):
+            return int(value.total_seconds())
+
+        return value  # type: ignore[no-any-return]
+
+    def send_static_file(self, filename: str) -> Response:
+        """The view function used to serve files from
+        :attr:`static_folder`. A route is automatically registered for
+        this view at :attr:`static_url_path` if :attr:`static_folder` is
+        set.
+
+        Note this is a duplicate of the same method in the Flask
+        class.
+
+        .. versionadded:: 0.5
+
+        """
+        if not self.has_static_folder:
+            raise RuntimeError("'static_folder' must be set to serve static_files.")
+
+        # send_file only knows to call get_send_file_max_age on the app,
+        # call it here so it works for blueprints too.
+        max_age = self.get_send_file_max_age(filename)
+        return send_from_directory(
+            t.cast(str, self.static_folder), filename, max_age=max_age
+        )
+
+    def open_resource(self, resource: str, mode: str = "rb") -> t.IO[t.AnyStr]:
+        """Open a resource file relative to :attr:`root_path` for
+        reading.
+
+        For example, if the file ``schema.sql`` is next to the file
+        ``app.py`` where the ``Flask`` app is defined, it can be opened
+        with:
+
+        .. code-block:: python
+
+            with app.open_resource("schema.sql") as f:
+                conn.executescript(f.read())
+
+        :param resource: Path to the resource relative to
+            :attr:`root_path`.
+        :param mode: Open the file in this mode. Only reading is
+            supported, valid values are "r" (or "rt") and "rb".
+
+        Note this is a duplicate of the same method in the Flask
+        class.
+
+        """
+        if mode not in {"r", "rt", "rb"}:
+            raise ValueError("Resources can only be opened for reading.")
+
+        return open(os.path.join(self.root_path, resource), mode)
+
+    def open_instance_resource(self, resource: str, mode: str = "rb") -> t.IO[t.AnyStr]:
+        """Opens a resource from the application's instance folder
+        (:attr:`instance_path`). Otherwise works like
+        :meth:`open_resource`. Instance resources can also be opened for
+        writing.
+
+        :param resource: the name of the resource. To access resources within
+            subfolders use forward slashes as separator.
+        :param mode: resource file opening mode, default is 'rb'.
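+
+        A short usage sketch (an illustrative addition, not upstream
+        documentation), assuming an ``instance/app.cfg`` file exists:
+
+        .. code-block:: python
+
+            # "app.cfg" is a hypothetical file name used for illustration
+            with app.open_instance_resource("app.cfg", mode="r") as f:
+                raw = f.read()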
+ """ + return open(os.path.join(self.instance_path, resource), mode) + + def create_jinja_environment(self) -> Environment: + """Create the Jinja environment based on :attr:`jinja_options` + and the various Jinja-related methods of the app. Changing + :attr:`jinja_options` after this will have no effect. Also adds + Flask-related globals and filters to the environment. + + .. versionchanged:: 0.11 + ``Environment.auto_reload`` set in accordance with + ``TEMPLATES_AUTO_RELOAD`` configuration option. + + .. versionadded:: 0.5 + """ + options = dict(self.jinja_options) + + if "autoescape" not in options: + options["autoescape"] = self.select_jinja_autoescape + + if "auto_reload" not in options: + auto_reload = self.config["TEMPLATES_AUTO_RELOAD"] + + if auto_reload is None: + auto_reload = self.debug + + options["auto_reload"] = auto_reload + + rv = self.jinja_environment(self, **options) + rv.globals.update( + url_for=self.url_for, + get_flashed_messages=get_flashed_messages, + config=self.config, + # request, session and g are normally added with the + # context processor for efficiency reasons but for imported + # templates we also want the proxies in there. + request=request, + session=session, + g=g, + ) + rv.policies["json.dumps_function"] = self.json.dumps + return rv + + def create_url_adapter(self, request: Request | None) -> MapAdapter | None: + """Creates a URL adapter for the given request. The URL adapter + is created at a point where the request context is not yet set + up so the request is passed explicitly. + + .. versionadded:: 0.6 + + .. versionchanged:: 0.9 + This can now also be called without a request object when the + URL adapter is created for the application context. + + .. versionchanged:: 1.0 + :data:`SERVER_NAME` no longer implicitly enables subdomain + matching. Use :attr:`subdomain_matching` instead. + """ + if request is not None: + # If subdomain matching is disabled (the default), use the + # default subdomain in all cases. This should be the default + # in Werkzeug but it currently does not have that feature. + if not self.subdomain_matching: + subdomain = self.url_map.default_subdomain or None + else: + subdomain = None + + return self.url_map.bind_to_environ( + request.environ, + server_name=self.config["SERVER_NAME"], + subdomain=subdomain, + ) + # We need at the very least the server name to be set for this + # to work. + if self.config["SERVER_NAME"] is not None: + return self.url_map.bind( + self.config["SERVER_NAME"], + script_name=self.config["APPLICATION_ROOT"], + url_scheme=self.config["PREFERRED_URL_SCHEME"], + ) + + return None + + def raise_routing_exception(self, request: Request) -> t.NoReturn: + """Intercept routing exceptions and possibly do something else. + + In debug mode, intercept a routing redirect and replace it with + an error if the body will be discarded. + + With modern Werkzeug this shouldn't occur, since it now uses a + 308 status which tells the browser to resend the method and + body. + + .. versionchanged:: 2.1 + Don't intercept 307 and 308 redirects. 
+ + :meta private: + :internal: + """ + if ( + not self.debug + or not isinstance(request.routing_exception, RequestRedirect) + or request.routing_exception.code in {307, 308} + or request.method in {"GET", "HEAD", "OPTIONS"} + ): + raise request.routing_exception # type: ignore[misc] + + from .debughelpers import FormDataRoutingRedirect + + raise FormDataRoutingRedirect(request) + + def update_template_context(self, context: dict[str, t.Any]) -> None: + """Update the template context with some commonly used variables. + This injects request, session, config and g into the template + context as well as everything template context processors want + to inject. Note that the as of Flask 0.6, the original values + in the context will not be overridden if a context processor + decides to return a value with the same key. + + :param context: the context as a dictionary that is updated in place + to add extra variables. + """ + names: t.Iterable[str | None] = (None,) + + # A template may be rendered outside a request context. + if request: + names = chain(names, reversed(request.blueprints)) + + # The values passed to render_template take precedence. Keep a + # copy to re-apply after all context functions. + orig_ctx = context.copy() + + for name in names: + if name in self.template_context_processors: + for func in self.template_context_processors[name]: + context.update(self.ensure_sync(func)()) + + context.update(orig_ctx) + + def make_shell_context(self) -> dict[str, t.Any]: + """Returns the shell context for an interactive shell for this + application. This runs all the registered shell context + processors. + + .. versionadded:: 0.11 + """ + rv = {"app": self, "g": g} + for processor in self.shell_context_processors: + rv.update(processor()) + return rv + + def run( + self, + host: str | None = None, + port: int | None = None, + debug: bool | None = None, + load_dotenv: bool = True, + **options: t.Any, + ) -> None: + """Runs the application on a local development server. + + Do not use ``run()`` in a production setting. It is not intended to + meet security and performance requirements for a production server. + Instead, see :doc:`/deploying/index` for WSGI server recommendations. + + If the :attr:`debug` flag is set the server will automatically reload + for code changes and show a debugger in case an exception happened. + + If you want to run the application in debug mode, but disable the + code execution on the interactive debugger, you can pass + ``use_evalex=False`` as parameter. This will keep the debugger's + traceback screen active, but disable code execution. + + It is not recommended to use this function for development with + automatic reloading as this is badly supported. Instead you should + be using the :command:`flask` command line script's ``run`` support. + + .. admonition:: Keep in Mind + + Flask will suppress any server error with a generic error page + unless it is in debug mode. As such to enable just the + interactive debugger without the code reloading, you have to + invoke :meth:`run` with ``debug=True`` and ``use_reloader=False``. + Setting ``use_debugger`` to ``True`` without being in debug mode + won't catch any exceptions because there won't be any to + catch. + + :param host: the hostname to listen on. Set this to ``'0.0.0.0'`` to + have the server available externally as well. Defaults to + ``'127.0.0.1'`` or the host in the ``SERVER_NAME`` config variable + if present. + :param port: the port of the webserver. 
Defaults to ``5000`` or the + port defined in the ``SERVER_NAME`` config variable if present. + :param debug: if given, enable or disable debug mode. See + :attr:`debug`. + :param load_dotenv: Load the nearest :file:`.env` and :file:`.flaskenv` + files to set environment variables. Will also change the working + directory to the directory containing the first file found. + :param options: the options to be forwarded to the underlying Werkzeug + server. See :func:`werkzeug.serving.run_simple` for more + information. + + .. versionchanged:: 1.0 + If installed, python-dotenv will be used to load environment + variables from :file:`.env` and :file:`.flaskenv` files. + + The :envvar:`FLASK_DEBUG` environment variable will override :attr:`debug`. + + Threaded mode is enabled by default. + + .. versionchanged:: 0.10 + The default port is now picked from the ``SERVER_NAME`` + variable. + """ + # Ignore this call so that it doesn't start another server if + # the 'flask run' command is used. + if os.environ.get("FLASK_RUN_FROM_CLI") == "true": + if not is_running_from_reloader(): + click.secho( + " * Ignoring a call to 'app.run()' that would block" + " the current 'flask' CLI command.\n" + " Only call 'app.run()' in an 'if __name__ ==" + ' "__main__"\' guard.', + fg="red", + ) + + return + + if get_load_dotenv(load_dotenv): + cli.load_dotenv() + + # if set, env var overrides existing value + if "FLASK_DEBUG" in os.environ: + self.debug = get_debug_flag() + + # debug passed to method overrides all other sources + if debug is not None: + self.debug = bool(debug) + + server_name = self.config.get("SERVER_NAME") + sn_host = sn_port = None + + if server_name: + sn_host, _, sn_port = server_name.partition(":") + + if not host: + if sn_host: + host = sn_host + else: + host = "127.0.0.1" + + if port or port == 0: + port = int(port) + elif sn_port: + port = int(sn_port) + else: + port = 5000 + + options.setdefault("use_reloader", self.debug) + options.setdefault("use_debugger", self.debug) + options.setdefault("threaded", True) + + cli.show_server_banner(self.debug, self.name) + + from werkzeug.serving import run_simple + + try: + run_simple(t.cast(str, host), port, self, **options) + finally: + # reset the first request information if the development server + # reset normally. This makes it possible to restart the server + # without reloader and that stuff from an interactive shell. + self._got_first_request = False + + def test_client(self, use_cookies: bool = True, **kwargs: t.Any) -> FlaskClient: + """Creates a test client for this application. For information + about unit testing head over to :doc:`/testing`. + + Note that if you are testing for assertions or exceptions in your + application code, you must set ``app.testing = True`` in order for the + exceptions to propagate to the test client. Otherwise, the exception + will be handled by the application (not visible to the test client) and + the only indication of an AssertionError or other exception will be a + 500 status code response to the test client. See the :attr:`testing` + attribute. For example:: + + app.testing = True + client = app.test_client() + + The test client can be used in a ``with`` block to defer the closing down + of the context until the end of the ``with`` block. 
This is useful if + you want to access the context locals for testing:: + + with app.test_client() as c: + rv = c.get('/?vodka=42') + assert request.args['vodka'] == '42' + + Additionally, you may pass optional keyword arguments that will then + be passed to the application's :attr:`test_client_class` constructor. + For example:: + + from flask.testing import FlaskClient + + class CustomClient(FlaskClient): + def __init__(self, *args, **kwargs): + self._authentication = kwargs.pop("authentication") + super(CustomClient,self).__init__( *args, **kwargs) + + app.test_client_class = CustomClient + client = app.test_client(authentication='Basic ....') + + See :class:`~flask.testing.FlaskClient` for more information. + + .. versionchanged:: 0.4 + added support for ``with`` block usage for the client. + + .. versionadded:: 0.7 + The `use_cookies` parameter was added as well as the ability + to override the client to be used by setting the + :attr:`test_client_class` attribute. + + .. versionchanged:: 0.11 + Added `**kwargs` to support passing additional keyword arguments to + the constructor of :attr:`test_client_class`. + """ + cls = self.test_client_class + if cls is None: + from .testing import FlaskClient as cls + return cls( # type: ignore + self, self.response_class, use_cookies=use_cookies, **kwargs + ) + + def test_cli_runner(self, **kwargs: t.Any) -> FlaskCliRunner: + """Create a CLI runner for testing CLI commands. + See :ref:`testing-cli`. + + Returns an instance of :attr:`test_cli_runner_class`, by default + :class:`~flask.testing.FlaskCliRunner`. The Flask app object is + passed as the first argument. + + .. versionadded:: 1.0 + """ + cls = self.test_cli_runner_class + + if cls is None: + from .testing import FlaskCliRunner as cls + + return cls(self, **kwargs) # type: ignore + + def handle_http_exception( + self, e: HTTPException + ) -> HTTPException | ft.ResponseReturnValue: + """Handles an HTTP exception. By default this will invoke the + registered error handlers and fall back to returning the + exception as response. + + .. versionchanged:: 1.0.3 + ``RoutingException``, used internally for actions such as + slash redirects during routing, is not passed to error + handlers. + + .. versionchanged:: 1.0 + Exceptions are looked up by code *and* by MRO, so + ``HTTPException`` subclasses can be handled with a catch-all + handler for the base ``HTTPException``. + + .. versionadded:: 0.3 + """ + # Proxy exceptions don't have error codes. We want to always return + # those unchanged as errors + if e.code is None: + return e + + # RoutingExceptions are used internally to trigger routing + # actions, such as slash redirects raising RequestRedirect. They + # are not raised or handled in user code. + if isinstance(e, RoutingException): + return e + + handler = self._find_error_handler(e, request.blueprints) + if handler is None: + return e + return self.ensure_sync(handler)(e) # type: ignore[no-any-return] + + def handle_user_exception( + self, e: Exception + ) -> HTTPException | ft.ResponseReturnValue: + """This method is called whenever an exception occurs that + should be handled. A special case is :class:`~werkzeug + .exceptions.HTTPException` which is forwarded to the + :meth:`handle_http_exception` method. This function will either + return a response value or reraise the exception with the same + traceback. + + .. versionchanged:: 1.0 + Key errors raised from request data like ``form`` show the + bad key in debug mode rather than a generic bad request + message. + + .. 
versionadded:: 0.7 + """ + if isinstance(e, BadRequestKeyError) and ( + self.debug or self.config["TRAP_BAD_REQUEST_ERRORS"] + ): + e.show_exception = True + + if isinstance(e, HTTPException) and not self.trap_http_exception(e): + return self.handle_http_exception(e) + + handler = self._find_error_handler(e, request.blueprints) + + if handler is None: + raise + + return self.ensure_sync(handler)(e) # type: ignore[no-any-return] + + def handle_exception(self, e: Exception) -> Response: + """Handle an exception that did not have an error handler + associated with it, or that was raised from an error handler. + This always causes a 500 ``InternalServerError``. + + Always sends the :data:`got_request_exception` signal. + + If :data:`PROPAGATE_EXCEPTIONS` is ``True``, such as in debug + mode, the error will be re-raised so that the debugger can + display it. Otherwise, the original exception is logged, and + an :exc:`~werkzeug.exceptions.InternalServerError` is returned. + + If an error handler is registered for ``InternalServerError`` or + ``500``, it will be used. For consistency, the handler will + always receive the ``InternalServerError``. The original + unhandled exception is available as ``e.original_exception``. + + .. versionchanged:: 1.1.0 + Always passes the ``InternalServerError`` instance to the + handler, setting ``original_exception`` to the unhandled + error. + + .. versionchanged:: 1.1.0 + ``after_request`` functions and other finalization is done + even for the default 500 response when there is no handler. + + .. versionadded:: 0.3 + """ + exc_info = sys.exc_info() + got_request_exception.send(self, _async_wrapper=self.ensure_sync, exception=e) + propagate = self.config["PROPAGATE_EXCEPTIONS"] + + if propagate is None: + propagate = self.testing or self.debug + + if propagate: + # Re-raise if called with an active exception, otherwise + # raise the passed in exception. + if exc_info[1] is e: + raise + + raise e + + self.log_exception(exc_info) + server_error: InternalServerError | ft.ResponseReturnValue + server_error = InternalServerError(original_exception=e) + handler = self._find_error_handler(server_error, request.blueprints) + + if handler is not None: + server_error = self.ensure_sync(handler)(server_error) + + return self.finalize_request(server_error, from_error_handler=True) + + def log_exception( + self, + exc_info: (tuple[type, BaseException, TracebackType] | tuple[None, None, None]), + ) -> None: + """Logs an exception. This is called by :meth:`handle_exception` + if debugging is disabled and right before the handler is called. + The default implementation logs the exception as error on the + :attr:`logger`. + + .. versionadded:: 0.8 + """ + self.logger.error( + f"Exception on {request.path} [{request.method}]", exc_info=exc_info + ) + + def dispatch_request(self) -> ft.ResponseReturnValue: + """Does the request dispatching. Matches the URL and returns the + return value of the view or error handler. This does not have to + be a response object. In order to convert the return value to a + proper response object, call :func:`make_response`. + + .. versionchanged:: 0.7 + This no longer does the exception handling, this code was + moved to the new :meth:`full_dispatch_request`. 
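+
+        Illustrative note (added commentary): :meth:`full_dispatch_request`
+        wraps this method with the ``request_started`` signal,
+        :meth:`preprocess_request`, and exception handling, so this method
+        only matches the URL rule and calls the view function.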
+ """ + req = request_ctx.request + if req.routing_exception is not None: + self.raise_routing_exception(req) + rule: Rule = req.url_rule # type: ignore[assignment] + # if we provide automatic options for this URL and the + # request came with the OPTIONS method, reply automatically + if ( + getattr(rule, "provide_automatic_options", False) + and req.method == "OPTIONS" + ): + return self.make_default_options_response() + # otherwise dispatch to the handler for that endpoint + view_args: dict[str, t.Any] = req.view_args # type: ignore[assignment] + return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return] + + def full_dispatch_request(self) -> Response: + """Dispatches the request and on top of that performs request + pre and postprocessing as well as HTTP exception catching and + error handling. + + .. versionadded:: 0.7 + """ + self._got_first_request = True + + try: + request_started.send(self, _async_wrapper=self.ensure_sync) + rv = self.preprocess_request() + if rv is None: + rv = self.dispatch_request() + except Exception as e: + rv = self.handle_user_exception(e) + return self.finalize_request(rv) + + def finalize_request( + self, + rv: ft.ResponseReturnValue | HTTPException, + from_error_handler: bool = False, + ) -> Response: + """Given the return value from a view function this finalizes + the request by converting it into a response and invoking the + postprocessing functions. This is invoked for both normal + request dispatching as well as error handlers. + + Because this means that it might be called as a result of a + failure a special safe mode is available which can be enabled + with the `from_error_handler` flag. If enabled, failures in + response processing will be logged and otherwise ignored. + + :internal: + """ + response = self.make_response(rv) + try: + response = self.process_response(response) + request_finished.send( + self, _async_wrapper=self.ensure_sync, response=response + ) + except Exception: + if not from_error_handler: + raise + self.logger.exception( + "Request finalizing failed with an error while handling an error" + ) + return response + + def make_default_options_response(self) -> Response: + """This method is called to create the default ``OPTIONS`` response. + This can be changed through subclassing to change the default + behavior of ``OPTIONS`` responses. + + .. versionadded:: 0.7 + """ + adapter = request_ctx.url_adapter + methods = adapter.allowed_methods() # type: ignore[union-attr] + rv = self.response_class() + rv.allow.update(methods) + return rv + + def ensure_sync(self, func: t.Callable[..., t.Any]) -> t.Callable[..., t.Any]: + """Ensure that the function is synchronous for WSGI workers. + Plain ``def`` functions are returned as-is. ``async def`` + functions are wrapped to run and wait for the response. + + Override this method to change how the app runs async views. + + .. versionadded:: 2.0 + """ + if iscoroutinefunction(func): + return self.async_to_sync(func) + + return func + + def async_to_sync( + self, func: t.Callable[..., t.Coroutine[t.Any, t.Any, t.Any]] + ) -> t.Callable[..., t.Any]: + """Return a sync function that will run the coroutine function. + + .. code-block:: python + + result = app.async_to_sync(func)(*args, **kwargs) + + Override this method to change how the app converts async code + to be synchronously callable. + + .. 
versionadded:: 2.0 + """ + try: + from asgiref.sync import async_to_sync as asgiref_async_to_sync + except ImportError: + raise RuntimeError( + "Install Flask with the 'async' extra in order to use async views." + ) from None + + return asgiref_async_to_sync(func) + + def url_for( + self, + /, + endpoint: str, + *, + _anchor: str | None = None, + _method: str | None = None, + _scheme: str | None = None, + _external: bool | None = None, + **values: t.Any, + ) -> str: + """Generate a URL to the given endpoint with the given values. + + This is called by :func:`flask.url_for`, and can be called + directly as well. + + An *endpoint* is the name of a URL rule, usually added with + :meth:`@app.route() `, and usually the same name as the + view function. A route defined in a :class:`~flask.Blueprint` + will prepend the blueprint's name separated by a ``.`` to the + endpoint. + + In some cases, such as email messages, you want URLs to include + the scheme and domain, like ``https://example.com/hello``. When + not in an active request, URLs will be external by default, but + this requires setting :data:`SERVER_NAME` so Flask knows what + domain to use. :data:`APPLICATION_ROOT` and + :data:`PREFERRED_URL_SCHEME` should also be configured as + needed. This config is only used when not in an active request. + + Functions can be decorated with :meth:`url_defaults` to modify + keyword arguments before the URL is built. + + If building fails for some reason, such as an unknown endpoint + or incorrect values, the app's :meth:`handle_url_build_error` + method is called. If that returns a string, that is returned, + otherwise a :exc:`~werkzeug.routing.BuildError` is raised. + + :param endpoint: The endpoint name associated with the URL to + generate. If this starts with a ``.``, the current blueprint + name (if any) will be used. + :param _anchor: If given, append this as ``#anchor`` to the URL. + :param _method: If given, generate the URL associated with this + method for the endpoint. + :param _scheme: If given, the URL will have this scheme if it + is external. + :param _external: If given, prefer the URL to be internal + (False) or require it to be external (True). External URLs + include the scheme and domain. When not in an active + request, URLs are external by default. + :param values: Values to use for the variable parts of the URL + rule. Unknown keys are appended as query string arguments, + like ``?a=b&c=d``. + + .. versionadded:: 2.2 + Moved from ``flask.url_for``, which calls this method. + """ + req_ctx = _cv_request.get(None) + + if req_ctx is not None: + url_adapter = req_ctx.url_adapter + blueprint_name = req_ctx.request.blueprint + + # If the endpoint starts with "." and the request matches a + # blueprint, the endpoint is relative to the blueprint. + if endpoint[:1] == ".": + if blueprint_name is not None: + endpoint = f"{blueprint_name}{endpoint}" + else: + endpoint = endpoint[1:] + + # When in a request, generate a URL without scheme and + # domain by default, unless a scheme is given. + if _external is None: + _external = _scheme is not None + else: + app_ctx = _cv_app.get(None) + + # If called by helpers.url_for, an app context is active, + # use its url_adapter. Otherwise, app.url_for was called + # directly, build an adapter. + if app_ctx is not None: + url_adapter = app_ctx.url_adapter + else: + url_adapter = self.create_url_adapter(None) + + if url_adapter is None: + raise RuntimeError( + "Unable to build URLs outside an active request" + " without 'SERVER_NAME' configured. 
Also configure" + " 'APPLICATION_ROOT' and 'PREFERRED_URL_SCHEME' as" + " needed." + ) + + # When outside a request, generate a URL with scheme and + # domain by default. + if _external is None: + _external = True + + # It is an error to set _scheme when _external=False, in order + # to avoid accidental insecure URLs. + if _scheme is not None and not _external: + raise ValueError("When specifying '_scheme', '_external' must be True.") + + self.inject_url_defaults(endpoint, values) + + try: + rv = url_adapter.build( # type: ignore[union-attr] + endpoint, + values, + method=_method, + url_scheme=_scheme, + force_external=_external, + ) + except BuildError as error: + values.update( + _anchor=_anchor, _method=_method, _scheme=_scheme, _external=_external + ) + return self.handle_url_build_error(error, endpoint, values) + + if _anchor is not None: + _anchor = _url_quote(_anchor, safe="%!#$&'()*+,/:;=?@") + rv = f"{rv}#{_anchor}" + + return rv + + def make_response(self, rv: ft.ResponseReturnValue) -> Response: + """Convert the return value from a view function to an instance of + :attr:`response_class`. + + :param rv: the return value from the view function. The view function + must return a response. Returning ``None``, or the view ending + without returning, is not allowed. The following types are allowed + for ``view_rv``: + + ``str`` + A response object is created with the string encoded to UTF-8 + as the body. + + ``bytes`` + A response object is created with the bytes as the body. + + ``dict`` + A dictionary that will be jsonify'd before being returned. + + ``list`` + A list that will be jsonify'd before being returned. + + ``generator`` or ``iterator`` + A generator that returns ``str`` or ``bytes`` to be + streamed as the response. + + ``tuple`` + Either ``(body, status, headers)``, ``(body, status)``, or + ``(body, headers)``, where ``body`` is any of the other types + allowed here, ``status`` is a string or an integer, and + ``headers`` is a dictionary or a list of ``(key, value)`` + tuples. If ``body`` is a :attr:`response_class` instance, + ``status`` overwrites the exiting value and ``headers`` are + extended. + + :attr:`response_class` + The object is returned unchanged. + + other :class:`~werkzeug.wrappers.Response` class + The object is coerced to :attr:`response_class`. + + :func:`callable` + The function is called as a WSGI application. The result is + used to create a response object. + + .. versionchanged:: 2.2 + A generator will be converted to a streaming response. + A list will be converted to a JSON response. + + .. versionchanged:: 1.1 + A dict will be converted to a JSON response. + + .. versionchanged:: 0.9 + Previously a tuple was interpreted as the arguments for the + response object. + """ + + status = headers = None + + # unpack tuple returns + if isinstance(rv, tuple): + len_rv = len(rv) + + # a 3-tuple is unpacked directly + if len_rv == 3: + rv, status, headers = rv # type: ignore[misc] + # decide if a 2-tuple has status or headers + elif len_rv == 2: + if isinstance(rv[1], (Headers, dict, tuple, list)): + rv, headers = rv + else: + rv, status = rv # type: ignore[assignment,misc] + # other sized tuples are not allowed + else: + raise TypeError( + "The view function did not return a valid response tuple." + " The tuple must have the form (body, status, headers)," + " (body, status), or (body, headers)." + ) + + # the body must not be None + if rv is None: + raise TypeError( + f"The view function for {request.endpoint!r} did not" + " return a valid response. 
The function either returned" + " None or ended without a return statement." + ) + + # make sure the body is an instance of the response class + if not isinstance(rv, self.response_class): + if isinstance(rv, (str, bytes, bytearray)) or isinstance(rv, cabc.Iterator): + # let the response class set the status and headers instead of + # waiting to do it manually, so that the class can handle any + # special logic + rv = self.response_class( + rv, + status=status, + headers=headers, # type: ignore[arg-type] + ) + status = headers = None + elif isinstance(rv, (dict, list)): + rv = self.json.response(rv) + elif isinstance(rv, BaseResponse) or callable(rv): + # evaluate a WSGI callable, or coerce a different response + # class to the correct type + try: + rv = self.response_class.force_type( + rv, # type: ignore[arg-type] + request.environ, + ) + except TypeError as e: + raise TypeError( + f"{e}\nThe view function did not return a valid" + " response. The return type must be a string," + " dict, list, tuple with headers or status," + " Response instance, or WSGI callable, but it" + f" was a {type(rv).__name__}." + ).with_traceback(sys.exc_info()[2]) from None + else: + raise TypeError( + "The view function did not return a valid" + " response. The return type must be a string," + " dict, list, tuple with headers or status," + " Response instance, or WSGI callable, but it was a" + f" {type(rv).__name__}." + ) + + rv = t.cast(Response, rv) + # prefer the status if it was provided + if status is not None: + if isinstance(status, (str, bytes, bytearray)): + rv.status = status + else: + rv.status_code = status + + # extend existing headers with provided headers + if headers: + rv.headers.update(headers) # type: ignore[arg-type] + + return rv + + def preprocess_request(self) -> ft.ResponseReturnValue | None: + """Called before the request is dispatched. Calls + :attr:`url_value_preprocessors` registered with the app and the + current blueprint (if any). Then calls :attr:`before_request_funcs` + registered with the app and the blueprint. + + If any :meth:`before_request` handler returns a non-None value, the + value is handled as if it was the return value from the view, and + further request handling is stopped. + """ + names = (None, *reversed(request.blueprints)) + + for name in names: + if name in self.url_value_preprocessors: + for url_func in self.url_value_preprocessors[name]: + url_func(request.endpoint, request.view_args) + + for name in names: + if name in self.before_request_funcs: + for before_func in self.before_request_funcs[name]: + rv = self.ensure_sync(before_func)() + + if rv is not None: + return rv # type: ignore[no-any-return] + + return None + + def process_response(self, response: Response) -> Response: + """Can be overridden in order to modify the response object + before it's sent to the WSGI server. By default this will + call all the :meth:`after_request` decorated functions. + + .. versionchanged:: 0.5 + As of Flask 0.5 the functions registered for after request + execution are called in reverse order of registration. + + :param response: a :attr:`response_class` object. + :return: a new response object or the same, has to be an + instance of :attr:`response_class`. 
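Since `make_response` accepts several return shapes, a short sketch exercising the tuple and dict forms may help; the routes here are hypothetical:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/text")
def text():
    # (body, status, headers) tuple form
    return "plain body", 201, {"X-Example": "yes"}

@app.route("/json")
def as_json():
    # dicts (and lists) are serialized to JSON responses
    return {"ok": True}

with app.test_client() as client:
    rv = client.get("/text")
    assert rv.status_code == 201 and rv.headers["X-Example"] == "yes"
    rv = client.get("/json")
    assert rv.get_json() == {"ok": True}
```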
+ """ + ctx = request_ctx._get_current_object() # type: ignore[attr-defined] + + for func in ctx._after_request_functions: + response = self.ensure_sync(func)(response) + + for name in chain(request.blueprints, (None,)): + if name in self.after_request_funcs: + for func in reversed(self.after_request_funcs[name]): + response = self.ensure_sync(func)(response) + + if not self.session_interface.is_null_session(ctx.session): + self.session_interface.save_session(self, ctx.session, response) + + return response + + def do_teardown_request( + self, + exc: BaseException | None = _sentinel, # type: ignore[assignment] + ) -> None: + """Called after the request is dispatched and the response is + returned, right before the request context is popped. + + This calls all functions decorated with + :meth:`teardown_request`, and :meth:`Blueprint.teardown_request` + if a blueprint handled the request. Finally, the + :data:`request_tearing_down` signal is sent. + + This is called by + :meth:`RequestContext.pop() `, + which may be delayed during testing to maintain access to + resources. + + :param exc: An unhandled exception raised while dispatching the + request. Detected from the current exception information if + not passed. Passed to each teardown function. + + .. versionchanged:: 0.9 + Added the ``exc`` argument. + """ + if exc is _sentinel: + exc = sys.exc_info()[1] + + for name in chain(request.blueprints, (None,)): + if name in self.teardown_request_funcs: + for func in reversed(self.teardown_request_funcs[name]): + self.ensure_sync(func)(exc) + + request_tearing_down.send(self, _async_wrapper=self.ensure_sync, exc=exc) + + def do_teardown_appcontext( + self, + exc: BaseException | None = _sentinel, # type: ignore[assignment] + ) -> None: + """Called right before the application context is popped. + + When handling a request, the application context is popped + after the request context. See :meth:`do_teardown_request`. + + This calls all functions decorated with + :meth:`teardown_appcontext`. Then the + :data:`appcontext_tearing_down` signal is sent. + + This is called by + :meth:`AppContext.pop() `. + + .. versionadded:: 0.9 + """ + if exc is _sentinel: + exc = sys.exc_info()[1] + + for func in reversed(self.teardown_appcontext_funcs): + self.ensure_sync(func)(exc) + + appcontext_tearing_down.send(self, _async_wrapper=self.ensure_sync, exc=exc) + + def app_context(self) -> AppContext: + """Create an :class:`~flask.ctx.AppContext`. Use as a ``with`` + block to push the context, which will make :data:`current_app` + point at this application. + + An application context is automatically pushed by + :meth:`RequestContext.push() ` + when handling a request, and when running a CLI command. Use + this to manually create a context outside of these situations. + + :: + + with app.app_context(): + init_db() + + See :doc:`/appcontext`. + + .. versionadded:: 0.9 + """ + return AppContext(self) + + def request_context(self, environ: WSGIEnvironment) -> RequestContext: + """Create a :class:`~flask.ctx.RequestContext` representing a + WSGI environment. Use a ``with`` block to push the context, + which will make :data:`request` point at this request. + + See :doc:`/reqcontext`. + + Typically you should not call this from your own code. A request + context is automatically pushed by the :meth:`wsgi_app` when + handling a request. Use :meth:`test_request_context` to create + an environment and context instead of this method. 
+ + :param environ: a WSGI environment + """ + return RequestContext(self, environ) + + def test_request_context(self, *args: t.Any, **kwargs: t.Any) -> RequestContext: + """Create a :class:`~flask.ctx.RequestContext` for a WSGI + environment created from the given values. This is mostly useful + during testing, where you may want to run a function that uses + request data without dispatching a full request. + + See :doc:`/reqcontext`. + + Use a ``with`` block to push the context, which will make + :data:`request` point at the request for the created + environment. :: + + with app.test_request_context(...): + generate_report() + + When using the shell, it may be easier to push and pop the + context manually to avoid indentation. :: + + ctx = app.test_request_context(...) + ctx.push() + ... + ctx.pop() + + Takes the same arguments as Werkzeug's + :class:`~werkzeug.test.EnvironBuilder`, with some defaults from + the application. See the linked Werkzeug docs for most of the + available arguments. Flask-specific behavior is listed here. + + :param path: URL path being requested. + :param base_url: Base URL where the app is being served, which + ``path`` is relative to. If not given, built from + :data:`PREFERRED_URL_SCHEME`, ``subdomain``, + :data:`SERVER_NAME`, and :data:`APPLICATION_ROOT`. + :param subdomain: Subdomain name to append to + :data:`SERVER_NAME`. + :param url_scheme: Scheme to use instead of + :data:`PREFERRED_URL_SCHEME`. + :param data: The request body, either as a string or a dict of + form keys and values. + :param json: If given, this is serialized as JSON and passed as + ``data``. Also defaults ``content_type`` to + ``application/json``. + :param args: other positional arguments passed to + :class:`~werkzeug.test.EnvironBuilder`. + :param kwargs: other keyword arguments passed to + :class:`~werkzeug.test.EnvironBuilder`. + """ + from .testing import EnvironBuilder + + builder = EnvironBuilder(self, *args, **kwargs) + + try: + return self.request_context(builder.get_environ()) + finally: + builder.close() + + def wsgi_app( + self, environ: WSGIEnvironment, start_response: StartResponse + ) -> cabc.Iterable[bytes]: + """The actual WSGI application. This is not implemented in + :meth:`__call__` so that middlewares can be applied without + losing a reference to the app object. Instead of doing this:: + + app = MyMiddleware(app) + + It's a better idea to do this instead:: + + app.wsgi_app = MyMiddleware(app.wsgi_app) + + Then you still have the original application object around and + can continue to call methods on it. + + .. versionchanged:: 0.7 + Teardown events for the request and app contexts are called + even if an unhandled error occurs. Other events may not be + called depending on when an error occurs during dispatch. + See :ref:`callbacks-and-errors`. + + :param environ: A WSGI environment. + :param start_response: A callable accepting a status code, + a list of headers, and an optional exception context to + start the response. 
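As the docstring above notes, `test_request_context` builds a synthetic WSGI environment through Werkzeug's `EnvironBuilder`. A small sketch; the path and JSON body are made up:

```python
from flask import Flask, request

app = Flask(__name__)

with app.test_request_context("/search?q=flask", json={"page": 2}):
    # The request proxy points at the synthetic request.
    assert request.path == "/search"
    assert request.args["q"] == "flask"
    assert request.get_json()["page"] == 2
    # Passing json= also defaults the content type.
    assert request.content_type == "application/json"
```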
+ """ + ctx = self.request_context(environ) + error: BaseException | None = None + try: + try: + ctx.push() + response = self.full_dispatch_request() + except Exception as e: + error = e + response = self.handle_exception(e) + except: # noqa: B001 + error = sys.exc_info()[1] + raise + return response(environ, start_response) + finally: + if "werkzeug.debug.preserve_context" in environ: + environ["werkzeug.debug.preserve_context"](_cv_app.get()) + environ["werkzeug.debug.preserve_context"](_cv_request.get()) + + if error is not None and self.should_ignore_error(error): + error = None + + ctx.pop(error) + + def __call__( + self, environ: WSGIEnvironment, start_response: StartResponse + ) -> cabc.Iterable[bytes]: + """The WSGI server calls the Flask application object as the + WSGI application. This calls :meth:`wsgi_app`, which can be + wrapped to apply middleware. + """ + return self.wsgi_app(environ, start_response) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/blueprints.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/blueprints.py new file mode 100644 index 000000000..aa9eacf21 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/blueprints.py @@ -0,0 +1,129 @@ +from __future__ import annotations + +import os +import typing as t +from datetime import timedelta + +from .cli import AppGroup +from .globals import current_app +from .helpers import send_from_directory +from .sansio.blueprints import Blueprint as SansioBlueprint +from .sansio.blueprints import BlueprintSetupState as BlueprintSetupState # noqa +from .sansio.scaffold import _sentinel + +if t.TYPE_CHECKING: # pragma: no cover + from .wrappers import Response + + +class Blueprint(SansioBlueprint): + def __init__( + self, + name: str, + import_name: str, + static_folder: str | os.PathLike[str] | None = None, + static_url_path: str | None = None, + template_folder: str | os.PathLike[str] | None = None, + url_prefix: str | None = None, + subdomain: str | None = None, + url_defaults: dict[str, t.Any] | None = None, + root_path: str | None = None, + cli_group: str | None = _sentinel, # type: ignore + ) -> None: + super().__init__( + name, + import_name, + static_folder, + static_url_path, + template_folder, + url_prefix, + subdomain, + url_defaults, + root_path, + cli_group, + ) + + #: The Click command group for registering CLI commands for this + #: object. The commands are available from the ``flask`` command + #: once the application has been discovered and blueprints have + #: been registered. + self.cli = AppGroup() + + # Set the name of the Click group in case someone wants to add + # the app's commands to another CLI tool. + self.cli.name = self.name + + def get_send_file_max_age(self, filename: str | None) -> int | None: + """Used by :func:`send_file` to determine the ``max_age`` cache + value for a given file path if it wasn't passed. + + By default, this returns :data:`SEND_FILE_MAX_AGE_DEFAULT` from + the configuration of :data:`~flask.current_app`. This defaults + to ``None``, which tells the browser to use conditional requests + instead of a timed cache, which is usually preferable. + + Note this is a duplicate of the same method in the Flask + class. + + .. versionchanged:: 2.0 + The default configuration is ``None`` instead of 12 hours. + + .. 
versionadded:: 0.9 + """ + value = current_app.config["SEND_FILE_MAX_AGE_DEFAULT"] + + if value is None: + return None + + if isinstance(value, timedelta): + return int(value.total_seconds()) + + return value # type: ignore[no-any-return] + + def send_static_file(self, filename: str) -> Response: + """The view function used to serve files from + :attr:`static_folder`. A route is automatically registered for + this view at :attr:`static_url_path` if :attr:`static_folder` is + set. + + Note this is a duplicate of the same method in the Flask + class. + + .. versionadded:: 0.5 + + """ + if not self.has_static_folder: + raise RuntimeError("'static_folder' must be set to serve static_files.") + + # send_file only knows to call get_send_file_max_age on the app, + # call it here so it works for blueprints too. + max_age = self.get_send_file_max_age(filename) + return send_from_directory( + t.cast(str, self.static_folder), filename, max_age=max_age + ) + + def open_resource(self, resource: str, mode: str = "rb") -> t.IO[t.AnyStr]: + """Open a resource file relative to :attr:`root_path` for + reading. + + For example, if the file ``schema.sql`` is next to the file + ``app.py`` where the ``Flask`` app is defined, it can be opened + with: + + .. code-block:: python + + with app.open_resource("schema.sql") as f: + conn.executescript(f.read()) + + :param resource: Path to the resource relative to + :attr:`root_path`. + :param mode: Open the file in this mode. Only reading is + supported, valid values are "r" (or "rt") and "rb". + + Note this is a duplicate of the same method in the Flask + class. + + """ + if mode not in {"r", "rt", "rb"}: + raise ValueError("Resources can only be opened for reading.") + + return open(os.path.join(self.root_path, resource), mode) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/cli.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/cli.py new file mode 100644 index 000000000..ecb292a01 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/cli.py @@ -0,0 +1,1109 @@ +from __future__ import annotations + +import ast +import collections.abc as cabc +import importlib.metadata +import inspect +import os +import platform +import re +import sys +import traceback +import typing as t +from functools import update_wrapper +from operator import itemgetter +from types import ModuleType + +import click +from click.core import ParameterSource +from werkzeug import run_simple +from werkzeug.serving import is_running_from_reloader +from werkzeug.utils import import_string + +from .globals import current_app +from .helpers import get_debug_flag +from .helpers import get_load_dotenv + +if t.TYPE_CHECKING: + import ssl + + from _typeshed.wsgi import StartResponse + from _typeshed.wsgi import WSGIApplication + from _typeshed.wsgi import WSGIEnvironment + + from .app import Flask + + +class NoAppException(click.UsageError): + """Raised if an application cannot be found or loaded.""" + + +def find_best_app(module: ModuleType) -> Flask: + """Given a module instance this tries to find the best possible + application in the module or raises an exception. + """ + from . import Flask + + # Search for the most common names first. + for attr_name in ("app", "application"): + app = getattr(module, attr_name, None) + + if isinstance(app, Flask): + return app + + # Otherwise find the only object that is a Flask instance. 
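A brief sketch of the blueprint static-file route described above; the folder and prefix names are illustrative. The static view is registered at the blueprint's `url_prefix` plus its `static_url_path`:

```python
from flask import Flask, Blueprint

bp = Blueprint(
    "docs",
    __name__,
    static_folder="static",     # files live under the blueprint's static/ folder
    static_url_path="/assets",  # served at /docs/assets/<filename> below
)

app = Flask(__name__)
app.register_blueprint(bp, url_prefix="/docs")

# bp.open_resource("notes.txt") would read notes.txt relative to the
# blueprint's root_path, in read-only ("r"/"rt"/"rb") modes.
```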
+ matches = [v for v in module.__dict__.values() if isinstance(v, Flask)] + + if len(matches) == 1: + return matches[0] + elif len(matches) > 1: + raise NoAppException( + "Detected multiple Flask applications in module" + f" '{module.__name__}'. Use '{module.__name__}:name'" + " to specify the correct one." + ) + + # Search for app factory functions. + for attr_name in ("create_app", "make_app"): + app_factory = getattr(module, attr_name, None) + + if inspect.isfunction(app_factory): + try: + app = app_factory() + + if isinstance(app, Flask): + return app + except TypeError as e: + if not _called_with_wrong_args(app_factory): + raise + + raise NoAppException( + f"Detected factory '{attr_name}' in module '{module.__name__}'," + " but could not call it without arguments. Use" + f" '{module.__name__}:{attr_name}(args)'" + " to specify arguments." + ) from e + + raise NoAppException( + "Failed to find Flask application or factory in module" + f" '{module.__name__}'. Use '{module.__name__}:name'" + " to specify one." + ) + + +def _called_with_wrong_args(f: t.Callable[..., Flask]) -> bool: + """Check whether calling a function raised a ``TypeError`` because + the call failed or because something in the factory raised the + error. + + :param f: The function that was called. + :return: ``True`` if the call failed. + """ + tb = sys.exc_info()[2] + + try: + while tb is not None: + if tb.tb_frame.f_code is f.__code__: + # In the function, it was called successfully. + return False + + tb = tb.tb_next + + # Didn't reach the function. + return True + finally: + # Delete tb to break a circular reference. + # https://docs.python.org/2/library/sys.html#sys.exc_info + del tb + + +def find_app_by_string(module: ModuleType, app_name: str) -> Flask: + """Check if the given string is a variable name or a function. Call + a function to get the app instance, or return the variable directly. + """ + from . import Flask + + # Parse app_name as a single expression to determine if it's a valid + # attribute name or function call. + try: + expr = ast.parse(app_name.strip(), mode="eval").body + except SyntaxError: + raise NoAppException( + f"Failed to parse {app_name!r} as an attribute name or function call." + ) from None + + if isinstance(expr, ast.Name): + name = expr.id + args = [] + kwargs = {} + elif isinstance(expr, ast.Call): + # Ensure the function name is an attribute name only. + if not isinstance(expr.func, ast.Name): + raise NoAppException( + f"Function reference must be a simple name: {app_name!r}." + ) + + name = expr.func.id + + # Parse the positional and keyword arguments as literals. + try: + args = [ast.literal_eval(arg) for arg in expr.args] + kwargs = { + kw.arg: ast.literal_eval(kw.value) + for kw in expr.keywords + if kw.arg is not None + } + except ValueError: + # literal_eval gives cryptic error messages, show a generic + # message with the full expression instead. + raise NoAppException( + f"Failed to parse arguments as literal values: {app_name!r}." + ) from None + else: + raise NoAppException( + f"Failed to parse {app_name!r} as an attribute name or function call." + ) + + try: + attr = getattr(module, name) + except AttributeError as e: + raise NoAppException( + f"Failed to find attribute {name!r} in {module.__name__!r}." + ) from e + + # If the attribute is a function, call it with any args and kwargs + # to get the real application. 
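The discovery logic above maps directly onto the `--app` option's accepted forms. A hedged console sketch; `hello`, `app2`, and `create_app` are placeholder names:

```console
$ flask --app hello run                       # find_best_app picks `app`/`application` or a factory
$ flask --app hello:app2 run                  # find_app_by_string resolves an explicit name
$ flask --app "hello:create_app('dev')" run   # factory call with literal arguments
```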
+ if inspect.isfunction(attr): + try: + app = attr(*args, **kwargs) + except TypeError as e: + if not _called_with_wrong_args(attr): + raise + + raise NoAppException( + f"The factory {app_name!r} in module" + f" {module.__name__!r} could not be called with the" + " specified arguments." + ) from e + else: + app = attr + + if isinstance(app, Flask): + return app + + raise NoAppException( + "A valid Flask application was not obtained from" + f" '{module.__name__}:{app_name}'." + ) + + +def prepare_import(path: str) -> str: + """Given a filename this will try to calculate the python path, add it + to the search path and return the actual module name that is expected. + """ + path = os.path.realpath(path) + + fname, ext = os.path.splitext(path) + if ext == ".py": + path = fname + + if os.path.basename(path) == "__init__": + path = os.path.dirname(path) + + module_name = [] + + # move up until outside package structure (no __init__.py) + while True: + path, name = os.path.split(path) + module_name.append(name) + + if not os.path.exists(os.path.join(path, "__init__.py")): + break + + if sys.path[0] != path: + sys.path.insert(0, path) + + return ".".join(module_name[::-1]) + + +@t.overload +def locate_app( + module_name: str, app_name: str | None, raise_if_not_found: t.Literal[True] = True +) -> Flask: ... + + +@t.overload +def locate_app( + module_name: str, app_name: str | None, raise_if_not_found: t.Literal[False] = ... +) -> Flask | None: ... + + +def locate_app( + module_name: str, app_name: str | None, raise_if_not_found: bool = True +) -> Flask | None: + try: + __import__(module_name) + except ImportError: + # Reraise the ImportError if it occurred within the imported module. + # Determine this by checking whether the trace has a depth > 1. + if sys.exc_info()[2].tb_next: # type: ignore[union-attr] + raise NoAppException( + f"While importing {module_name!r}, an ImportError was" + f" raised:\n\n{traceback.format_exc()}" + ) from None + elif raise_if_not_found: + raise NoAppException(f"Could not import {module_name!r}.") from None + else: + return None + + module = sys.modules[module_name] + + if app_name is None: + return find_best_app(module) + else: + return find_app_by_string(module, app_name) + + +def get_version(ctx: click.Context, param: click.Parameter, value: t.Any) -> None: + if not value or ctx.resilient_parsing: + return + + flask_version = importlib.metadata.version("flask") + werkzeug_version = importlib.metadata.version("werkzeug") + + click.echo( + f"Python {platform.python_version()}\n" + f"Flask {flask_version}\n" + f"Werkzeug {werkzeug_version}", + color=ctx.color, + ) + ctx.exit() + + +version_option = click.Option( + ["--version"], + help="Show the Flask version.", + expose_value=False, + callback=get_version, + is_flag=True, + is_eager=True, +) + + +class ScriptInfo: + """Helper object to deal with Flask applications. This is usually not + necessary to interface with as it's used internally in the dispatching + to click. In future versions of Flask this object will most likely play + a bigger role. Typically it's created automatically by the + :class:`FlaskGroup` but you can also manually create it and pass it + onwards as click object. + """ + + def __init__( + self, + app_import_path: str | None = None, + create_app: t.Callable[..., Flask] | None = None, + set_debug_flag: bool = True, + ) -> None: + #: Optionally the import path for the Flask application. 
+ self.app_import_path = app_import_path + #: Optionally a function that is passed the script info to create + #: the instance of the application. + self.create_app = create_app + #: A dictionary with arbitrary data that can be associated with + #: this script info. + self.data: dict[t.Any, t.Any] = {} + self.set_debug_flag = set_debug_flag + self._loaded_app: Flask | None = None + + def load_app(self) -> Flask: + """Loads the Flask app (if not yet loaded) and returns it. Calling + this multiple times will just result in the already loaded app to + be returned. + """ + if self._loaded_app is not None: + return self._loaded_app + + if self.create_app is not None: + app: Flask | None = self.create_app() + else: + if self.app_import_path: + path, name = ( + re.split(r":(?![\\/])", self.app_import_path, maxsplit=1) + [None] + )[:2] + import_name = prepare_import(path) + app = locate_app(import_name, name) + else: + for path in ("wsgi.py", "app.py"): + import_name = prepare_import(path) + app = locate_app(import_name, None, raise_if_not_found=False) + + if app is not None: + break + + if app is None: + raise NoAppException( + "Could not locate a Flask application. Use the" + " 'flask --app' option, 'FLASK_APP' environment" + " variable, or a 'wsgi.py' or 'app.py' file in the" + " current directory." + ) + + if self.set_debug_flag: + # Update the app's debug flag through the descriptor so that + # other values repopulate as well. + app.debug = get_debug_flag() + + self._loaded_app = app + return app + + +pass_script_info = click.make_pass_decorator(ScriptInfo, ensure=True) + +F = t.TypeVar("F", bound=t.Callable[..., t.Any]) + + +def with_appcontext(f: F) -> F: + """Wraps a callback so that it's guaranteed to be executed with the + script's application context. + + Custom commands (and their options) registered under ``app.cli`` or + ``blueprint.cli`` will always have an app context available, this + decorator is not required in that case. + + .. versionchanged:: 2.2 + The app context is active for subcommands as well as the + decorated callback. The app context is always available to + ``app.cli`` command and parameter callbacks. + """ + + @click.pass_context + def decorator(ctx: click.Context, /, *args: t.Any, **kwargs: t.Any) -> t.Any: + if not current_app: + app = ctx.ensure_object(ScriptInfo).load_app() + ctx.with_resource(app.app_context()) + + return ctx.invoke(f, *args, **kwargs) + + return update_wrapper(decorator, f) # type: ignore[return-value] + + +class AppGroup(click.Group): + """This works similar to a regular click :class:`~click.Group` but it + changes the behavior of the :meth:`command` decorator so that it + automatically wraps the functions in :func:`with_appcontext`. + + Not to be confused with :class:`FlaskGroup`. + """ + + def command( # type: ignore[override] + self, *args: t.Any, **kwargs: t.Any + ) -> t.Callable[[t.Callable[..., t.Any]], click.Command]: + """This works exactly like the method of the same name on a regular + :class:`click.Group` but it wraps callbacks in :func:`with_appcontext` + unless it's disabled by passing ``with_appcontext=False``. 
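To illustrate the distinction the `with_appcontext` docstring draws, here is a sketch with one command on `app.cli` (context pushed automatically) and one standalone command (decorator required); the command names are hypothetical:

```python
import click
from flask import Flask, current_app
from flask.cli import with_appcontext

app = Flask(__name__)

@app.cli.command("seed")
def seed():
    # Commands registered on app.cli get an app context automatically.
    click.echo(f"seeding {current_app.name}")

@click.command("standalone")
@with_appcontext  # required for commands defined outside app.cli / blueprint.cli
def standalone():
    click.echo(current_app.config["DEBUG"])
```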
+ """ + wrap_for_ctx = kwargs.pop("with_appcontext", True) + + def decorator(f: t.Callable[..., t.Any]) -> click.Command: + if wrap_for_ctx: + f = with_appcontext(f) + return super(AppGroup, self).command(*args, **kwargs)(f) # type: ignore[no-any-return] + + return decorator + + def group( # type: ignore[override] + self, *args: t.Any, **kwargs: t.Any + ) -> t.Callable[[t.Callable[..., t.Any]], click.Group]: + """This works exactly like the method of the same name on a regular + :class:`click.Group` but it defaults the group class to + :class:`AppGroup`. + """ + kwargs.setdefault("cls", AppGroup) + return super().group(*args, **kwargs) # type: ignore[no-any-return] + + +def _set_app(ctx: click.Context, param: click.Option, value: str | None) -> str | None: + if value is None: + return None + + info = ctx.ensure_object(ScriptInfo) + info.app_import_path = value + return value + + +# This option is eager so the app will be available if --help is given. +# --help is also eager, so --app must be before it in the param list. +# no_args_is_help bypasses eager processing, so this option must be +# processed manually in that case to ensure FLASK_APP gets picked up. +_app_option = click.Option( + ["-A", "--app"], + metavar="IMPORT", + help=( + "The Flask application or factory function to load, in the form 'module:name'." + " Module can be a dotted import or file path. Name is not required if it is" + " 'app', 'application', 'create_app', or 'make_app', and can be 'name(args)' to" + " pass arguments." + ), + is_eager=True, + expose_value=False, + callback=_set_app, +) + + +def _set_debug(ctx: click.Context, param: click.Option, value: bool) -> bool | None: + # If the flag isn't provided, it will default to False. Don't use + # that, let debug be set by env in that case. + source = ctx.get_parameter_source(param.name) # type: ignore[arg-type] + + if source is not None and source in ( + ParameterSource.DEFAULT, + ParameterSource.DEFAULT_MAP, + ): + return None + + # Set with env var instead of ScriptInfo.load so that it can be + # accessed early during a factory function. + os.environ["FLASK_DEBUG"] = "1" if value else "0" + return value + + +_debug_option = click.Option( + ["--debug/--no-debug"], + help="Set debug mode.", + expose_value=False, + callback=_set_debug, +) + + +def _env_file_callback( + ctx: click.Context, param: click.Option, value: str | None +) -> str | None: + if value is None: + return None + + import importlib + + try: + importlib.import_module("dotenv") + except ImportError: + raise click.BadParameter( + "python-dotenv must be installed to load an env file.", + ctx=ctx, + param=param, + ) from None + + # Don't check FLASK_SKIP_DOTENV, that only disables automatically + # loading .env and .flaskenv files. + load_dotenv(value) + return value + + +# This option is eager so env vars are loaded as early as possible to be +# used by other options. +_env_file_option = click.Option( + ["-e", "--env-file"], + type=click.Path(exists=True, dir_okay=False), + help="Load environment variables from this file. python-dotenv must be installed.", + is_eager=True, + expose_value=False, + callback=_env_file_callback, +) + + +class FlaskGroup(AppGroup): + """Special subclass of the :class:`AppGroup` group that supports + loading more commands from the configured Flask app. Normally a + developer does not have to interface with this class but there are + some very advanced use cases for which it makes sense to create an + instance of this. see :ref:`custom-scripts`. 
+ + :param add_default_commands: if this is True then the default run and + shell commands will be added. + :param add_version_option: adds the ``--version`` option. + :param create_app: an optional callback that is passed the script info and + returns the loaded app. + :param load_dotenv: Load the nearest :file:`.env` and :file:`.flaskenv` + files to set environment variables. Will also change the working + directory to the directory containing the first file found. + :param set_debug_flag: Set the app's debug flag. + + .. versionchanged:: 2.2 + Added the ``-A/--app``, ``--debug/--no-debug``, ``-e/--env-file`` options. + + .. versionchanged:: 2.2 + An app context is pushed when running ``app.cli`` commands, so + ``@with_appcontext`` is no longer required for those commands. + + .. versionchanged:: 1.0 + If installed, python-dotenv will be used to load environment variables + from :file:`.env` and :file:`.flaskenv` files. + """ + + def __init__( + self, + add_default_commands: bool = True, + create_app: t.Callable[..., Flask] | None = None, + add_version_option: bool = True, + load_dotenv: bool = True, + set_debug_flag: bool = True, + **extra: t.Any, + ) -> None: + params = list(extra.pop("params", None) or ()) + # Processing is done with option callbacks instead of a group + # callback. This allows users to make a custom group callback + # without losing the behavior. --env-file must come first so + # that it is eagerly evaluated before --app. + params.extend((_env_file_option, _app_option, _debug_option)) + + if add_version_option: + params.append(version_option) + + if "context_settings" not in extra: + extra["context_settings"] = {} + + extra["context_settings"].setdefault("auto_envvar_prefix", "FLASK") + + super().__init__(params=params, **extra) + + self.create_app = create_app + self.load_dotenv = load_dotenv + self.set_debug_flag = set_debug_flag + + if add_default_commands: + self.add_command(run_command) + self.add_command(shell_command) + self.add_command(routes_command) + + self._loaded_plugin_commands = False + + def _load_plugin_commands(self) -> None: + if self._loaded_plugin_commands: + return + + if sys.version_info >= (3, 10): + from importlib import metadata + else: + # Use a backport on Python < 3.10. We technically have + # importlib.metadata on 3.8+, but the API changed in 3.10, + # so use the backport for consistency. + import importlib_metadata as metadata + + for ep in metadata.entry_points(group="flask.commands"): + self.add_command(ep.load(), ep.name) + + self._loaded_plugin_commands = True + + def get_command(self, ctx: click.Context, name: str) -> click.Command | None: + self._load_plugin_commands() + # Look up built-in and plugin commands, which should be + # available even if the app fails to load. + rv = super().get_command(ctx, name) + + if rv is not None: + return rv + + info = ctx.ensure_object(ScriptInfo) + + # Look up commands provided by the app, showing an error and + # continuing if the app couldn't be loaded. + try: + app = info.load_app() + except NoAppException as e: + click.secho(f"Error: {e.format_message()}\n", err=True, fg="red") + return None + + # Push an app context for the loaded app unless it is already + # active somehow. This makes the context available to parameter + # and command callbacks without needing @with_appcontext. 
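A minimal custom management script built on `FlaskGroup`, following the factory pattern the constructor supports; `create_app` here is a stub:

```python
import click
from flask import Flask
from flask.cli import FlaskGroup

def create_app():
    app = Flask("wiki")
    return app

@click.group(cls=FlaskGroup, create_app=create_app)
def cli():
    """Management script for the wiki application."""

if __name__ == "__main__":
    cli()
```

Because `FlaskGroup` adds the default `run`, `shell`, and `routes` commands, this script behaves like the `flask` command bound to one application.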
+ if not current_app or current_app._get_current_object() is not app: # type: ignore[attr-defined] + ctx.with_resource(app.app_context()) + + return app.cli.get_command(ctx, name) + + def list_commands(self, ctx: click.Context) -> list[str]: + self._load_plugin_commands() + # Start with the built-in and plugin commands. + rv = set(super().list_commands(ctx)) + info = ctx.ensure_object(ScriptInfo) + + # Add commands provided by the app, showing an error and + # continuing if the app couldn't be loaded. + try: + rv.update(info.load_app().cli.list_commands(ctx)) + except NoAppException as e: + # When an app couldn't be loaded, show the error message + # without the traceback. + click.secho(f"Error: {e.format_message()}\n", err=True, fg="red") + except Exception: + # When any other errors occurred during loading, show the + # full traceback. + click.secho(f"{traceback.format_exc()}\n", err=True, fg="red") + + return sorted(rv) + + def make_context( + self, + info_name: str | None, + args: list[str], + parent: click.Context | None = None, + **extra: t.Any, + ) -> click.Context: + # Set a flag to tell app.run to become a no-op. If app.run was + # not in a __name__ == __main__ guard, it would start the server + # when importing, blocking whatever command is being called. + os.environ["FLASK_RUN_FROM_CLI"] = "true" + + # Attempt to load .env and .flask env files. The --env-file + # option can cause another file to be loaded. + if get_load_dotenv(self.load_dotenv): + load_dotenv() + + if "obj" not in extra and "obj" not in self.context_settings: + extra["obj"] = ScriptInfo( + create_app=self.create_app, set_debug_flag=self.set_debug_flag + ) + + return super().make_context(info_name, args, parent=parent, **extra) + + def parse_args(self, ctx: click.Context, args: list[str]) -> list[str]: + if not args and self.no_args_is_help: + # Attempt to load --env-file and --app early in case they + # were given as env vars. Otherwise no_args_is_help will not + # see commands from app.cli. + _env_file_option.handle_parse_result(ctx, {}, []) + _app_option.handle_parse_result(ctx, {}, []) + + return super().parse_args(ctx, args) + + +def _path_is_ancestor(path: str, other: str) -> bool: + """Take ``other`` and remove the length of ``path`` from it. Then join it + to ``path``. If it is the original value, ``path`` is an ancestor of + ``other``.""" + return os.path.join(path, other[len(path) :].lstrip(os.sep)) == other + + +def load_dotenv(path: str | os.PathLike[str] | None = None) -> bool: + """Load "dotenv" files in order of precedence to set environment variables. + + If an env var is already set it is not overwritten, so earlier files in the + list are preferred over later files. + + This is a no-op if `python-dotenv`_ is not installed. + + .. _python-dotenv: https://github.com/theskumar/python-dotenv#readme + + :param path: Load the file at this location instead of searching. + :return: ``True`` if a file was loaded. + + .. versionchanged:: 2.0 + The current directory is not changed to the location of the + loaded file. + + .. versionchanged:: 2.0 + When loading the env files, set the default encoding to UTF-8. + + .. versionchanged:: 1.1.0 + Returns ``False`` when python-dotenv is not installed, or when + the given path isn't a file. + + .. versionadded:: 1.0 + """ + try: + import dotenv + except ImportError: + if path or os.path.isfile(".env") or os.path.isfile(".flaskenv"): + click.secho( + " * Tip: There are .env or .flaskenv files present." 
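A sketch of the dotenv flow described above, assuming python-dotenv is installed and the file names shown:

```console
$ cat .flaskenv
FLASK_APP=server/app.py
FLASK_DEBUG=1
$ flask run    # load_dotenv picks up FLASK_APP and FLASK_DEBUG automatically
```

`.env` takes precedence over `.flaskenv`, since variables that are already set are not overwritten.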
+ ' Do "pip install python-dotenv" to use them.', + fg="yellow", + err=True, + ) + + return False + + # Always return after attempting to load a given path, don't load + # the default files. + if path is not None: + if os.path.isfile(path): + return dotenv.load_dotenv(path, encoding="utf-8") + + return False + + loaded = False + + for name in (".env", ".flaskenv"): + path = dotenv.find_dotenv(name, usecwd=True) + + if not path: + continue + + dotenv.load_dotenv(path, encoding="utf-8") + loaded = True + + return loaded # True if at least one file was located and loaded. + + +def show_server_banner(debug: bool, app_import_path: str | None) -> None: + """Show extra startup messages the first time the server is run, + ignoring the reloader. + """ + if is_running_from_reloader(): + return + + if app_import_path is not None: + click.echo(f" * Serving Flask app '{app_import_path}'") + + if debug is not None: + click.echo(f" * Debug mode: {'on' if debug else 'off'}") + + +class CertParamType(click.ParamType): + """Click option type for the ``--cert`` option. Allows either an + existing file, the string ``'adhoc'``, or an import for a + :class:`~ssl.SSLContext` object. + """ + + name = "path" + + def __init__(self) -> None: + self.path_type = click.Path(exists=True, dir_okay=False, resolve_path=True) + + def convert( + self, value: t.Any, param: click.Parameter | None, ctx: click.Context | None + ) -> t.Any: + try: + import ssl + except ImportError: + raise click.BadParameter( + 'Using "--cert" requires Python to be compiled with SSL support.', + ctx, + param, + ) from None + + try: + return self.path_type(value, param, ctx) + except click.BadParameter: + value = click.STRING(value, param, ctx).lower() + + if value == "adhoc": + try: + import cryptography # noqa: F401 + except ImportError: + raise click.BadParameter( + "Using ad-hoc certificates requires the cryptography library.", + ctx, + param, + ) from None + + return value + + obj = import_string(value, silent=True) + + if isinstance(obj, ssl.SSLContext): + return obj + + raise + + +def _validate_key(ctx: click.Context, param: click.Parameter, value: t.Any) -> t.Any: + """The ``--key`` option must be specified when ``--cert`` is a file. + Modifies the ``cert`` param to be a ``(cert, key)`` pair if needed. + """ + cert = ctx.params.get("cert") + is_adhoc = cert == "adhoc" + + try: + import ssl + except ImportError: + is_context = False + else: + is_context = isinstance(cert, ssl.SSLContext) + + if value is not None: + if is_adhoc: + raise click.BadParameter( + 'When "--cert" is "adhoc", "--key" is not used.', ctx, param + ) + + if is_context: + raise click.BadParameter( + 'When "--cert" is an SSLContext object, "--key" is not used.', + ctx, + param, + ) + + if not cert: + raise click.BadParameter('"--cert" must also be specified.', ctx, param) + + ctx.params["cert"] = cert, value + + else: + if cert and not (is_adhoc or is_context): + raise click.BadParameter('Required when using "--cert".', ctx, param) + + return value + + +class SeparatedPathType(click.Path): + """Click option type that accepts a list of values separated by the + OS's path separator (``:``, ``;`` on Windows). Each value is + validated as a :class:`click.Path` type. 
+ """ + + def convert( + self, value: t.Any, param: click.Parameter | None, ctx: click.Context | None + ) -> t.Any: + items = self.split_envvar_value(value) + # can't call no-arg super() inside list comprehension until Python 3.12 + super_convert = super().convert + return [super_convert(item, param, ctx) for item in items] + + +@click.command("run", short_help="Run a development server.") +@click.option("--host", "-h", default="127.0.0.1", help="The interface to bind to.") +@click.option("--port", "-p", default=5000, help="The port to bind to.") +@click.option( + "--cert", + type=CertParamType(), + help="Specify a certificate file to use HTTPS.", + is_eager=True, +) +@click.option( + "--key", + type=click.Path(exists=True, dir_okay=False, resolve_path=True), + callback=_validate_key, + expose_value=False, + help="The key file to use when specifying a certificate.", +) +@click.option( + "--reload/--no-reload", + default=None, + help="Enable or disable the reloader. By default the reloader " + "is active if debug is enabled.", +) +@click.option( + "--debugger/--no-debugger", + default=None, + help="Enable or disable the debugger. By default the debugger " + "is active if debug is enabled.", +) +@click.option( + "--with-threads/--without-threads", + default=True, + help="Enable or disable multithreading.", +) +@click.option( + "--extra-files", + default=None, + type=SeparatedPathType(), + help=( + "Extra files that trigger a reload on change. Multiple paths" + f" are separated by {os.path.pathsep!r}." + ), +) +@click.option( + "--exclude-patterns", + default=None, + type=SeparatedPathType(), + help=( + "Files matching these fnmatch patterns will not trigger a reload" + " on change. Multiple patterns are separated by" + f" {os.path.pathsep!r}." + ), +) +@pass_script_info +def run_command( + info: ScriptInfo, + host: str, + port: int, + reload: bool, + debugger: bool, + with_threads: bool, + cert: ssl.SSLContext | tuple[str, str | None] | t.Literal["adhoc"] | None, + extra_files: list[str] | None, + exclude_patterns: list[str] | None, +) -> None: + """Run a local development server. + + This server is for development purposes only. It does not provide + the stability, security, or performance of production WSGI servers. + + The reloader and debugger are enabled by default with the '--debug' + option. + """ + try: + app: WSGIApplication = info.load_app() + except Exception as e: + if is_running_from_reloader(): + # When reloading, print out the error immediately, but raise + # it later so the debugger or server can handle it. + traceback.print_exc() + err = e + + def app( + environ: WSGIEnvironment, start_response: StartResponse + ) -> cabc.Iterable[bytes]: + raise err from None + + else: + # When not reloading, raise the error immediately so the + # command fails. + raise e from None + + debug = get_debug_flag() + + if reload is None: + reload = debug + + if debugger is None: + debugger = debug + + show_server_banner(debug, info.app_import_path) + + run_simple( + host, + port, + app, + use_reloader=reload, + use_debugger=debugger, + threaded=with_threads, + ssl_context=cert, + extra_files=extra_files, + exclude_patterns=exclude_patterns, + ) + + +run_command.params.insert(0, _debug_option) + + +@click.command("shell", short_help="Run a shell in the app context.") +@with_appcontext +def shell_command() -> None: + """Run an interactive Python shell in the context of a given + Flask application. The application will populate the default + namespace of this shell according to its configuration. 
+ + This is useful for executing small snippets of management code + without having to manually configure the application. + """ + import code + + banner = ( + f"Python {sys.version} on {sys.platform}\n" + f"App: {current_app.import_name}\n" + f"Instance: {current_app.instance_path}" + ) + ctx: dict[str, t.Any] = {} + + # Support the regular Python interpreter startup script if someone + # is using it. + startup = os.environ.get("PYTHONSTARTUP") + if startup and os.path.isfile(startup): + with open(startup) as f: + eval(compile(f.read(), startup, "exec"), ctx) + + ctx.update(current_app.make_shell_context()) + + # Site, customize, or startup script can set a hook to call when + # entering interactive mode. The default one sets up readline with + # tab and history completion. + interactive_hook = getattr(sys, "__interactivehook__", None) + + if interactive_hook is not None: + try: + import readline + from rlcompleter import Completer + except ImportError: + pass + else: + # rlcompleter uses __main__.__dict__ by default, which is + # flask.__main__. Use the shell context instead. + readline.set_completer(Completer(ctx).complete) + + interactive_hook() + + code.interact(banner=banner, local=ctx) + + +@click.command("routes", short_help="Show the routes for the app.") +@click.option( + "--sort", + "-s", + type=click.Choice(("endpoint", "methods", "domain", "rule", "match")), + default="endpoint", + help=( + "Method to sort routes by. 'match' is the order that Flask will match routes" + " when dispatching a request." + ), +) +@click.option("--all-methods", is_flag=True, help="Show HEAD and OPTIONS methods.") +@with_appcontext +def routes_command(sort: str, all_methods: bool) -> None: + """Show all registered routes with endpoints and methods.""" + rules = list(current_app.url_map.iter_rules()) + + if not rules: + click.echo("No routes were registered.") + return + + ignored_methods = set() if all_methods else {"HEAD", "OPTIONS"} + host_matching = current_app.url_map.host_matching + has_domain = any(rule.host if host_matching else rule.subdomain for rule in rules) + rows = [] + + for rule in rules: + row = [ + rule.endpoint, + ", ".join(sorted((rule.methods or set()) - ignored_methods)), + ] + + if has_domain: + row.append((rule.host if host_matching else rule.subdomain) or "") + + row.append(rule.rule) + rows.append(row) + + headers = ["Endpoint", "Methods"] + sorts = ["endpoint", "methods"] + + if has_domain: + headers.append("Host" if host_matching else "Subdomain") + sorts.append("domain") + + headers.append("Rule") + sorts.append("rule") + + try: + rows.sort(key=itemgetter(sorts.index(sort))) + except ValueError: + pass + + rows.insert(0, headers) + widths = [max(len(row[i]) for row in rows) for i in range(len(headers))] + rows.insert(1, ["-" * w for w in widths]) + template = " ".join(f"{{{i}:<{w}}}" for i, w in enumerate(widths)) + + for row in rows: + click.echo(template.format(*row)) + + +cli = FlaskGroup( + name="flask", + help="""\ +A general utility script for Flask applications. + +An application to load must be given with the '--app' option, +'FLASK_APP' environment variable, or with a 'wsgi.py' or 'app.py' file +in the current directory. 
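For orientation, the `routes` command's output might look roughly like the following for a tiny app; the exact rows and column widths depend on what is registered:

```console
$ flask --app server/app.py routes
Endpoint  Methods  Rule
--------  -------  -----------------------
index     GET      /
static    GET      /static/<path:filename>
```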
+""", +) + + +def main() -> None: + cli.main() + + +if __name__ == "__main__": + main() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/config.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/config.py new file mode 100644 index 000000000..7e3ba1790 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/config.py @@ -0,0 +1,370 @@ +from __future__ import annotations + +import errno +import json +import os +import types +import typing as t + +from werkzeug.utils import import_string + +if t.TYPE_CHECKING: + import typing_extensions as te + + from .sansio.app import App + + +T = t.TypeVar("T") + + +class ConfigAttribute(t.Generic[T]): + """Makes an attribute forward to the config""" + + def __init__( + self, name: str, get_converter: t.Callable[[t.Any], T] | None = None + ) -> None: + self.__name__ = name + self.get_converter = get_converter + + @t.overload + def __get__(self, obj: None, owner: None) -> te.Self: ... + + @t.overload + def __get__(self, obj: App, owner: type[App]) -> T: ... + + def __get__(self, obj: App | None, owner: type[App] | None = None) -> T | te.Self: + if obj is None: + return self + + rv = obj.config[self.__name__] + + if self.get_converter is not None: + rv = self.get_converter(rv) + + return rv # type: ignore[no-any-return] + + def __set__(self, obj: App, value: t.Any) -> None: + obj.config[self.__name__] = value + + +class Config(dict): # type: ignore[type-arg] + """Works exactly like a dict but provides ways to fill it from files + or special dictionaries. There are two common patterns to populate the + config. + + Either you can fill the config from a config file:: + + app.config.from_pyfile('yourconfig.cfg') + + Or alternatively you can define the configuration options in the + module that calls :meth:`from_object` or provide an import path to + a module that should be loaded. It is also possible to tell it to + use the same module and with that provide the configuration values + just before the call:: + + DEBUG = True + SECRET_KEY = 'development key' + app.config.from_object(__name__) + + In both cases (loading from any Python file or loading from modules), + only uppercase keys are added to the config. This makes it possible to use + lowercase values in the config file for temporary values that are not added + to the config or to define the config keys in the same file that implements + the application. + + Probably the most interesting way to load configurations is from an + environment variable pointing to a file:: + + app.config.from_envvar('YOURAPPLICATION_SETTINGS') + + In this case before launching the application you have to set this + environment variable to the file you want to use. On Linux and OS X + use the export statement:: + + export YOURAPPLICATION_SETTINGS='/path/to/config/file' + + On windows use `set` instead. + + :param root_path: path to which files are read relative from. When the + config object is created by the application, this is + the application's :attr:`~flask.Flask.root_path`. + :param defaults: an optional dictionary of default values + """ + + def __init__( + self, + root_path: str | os.PathLike[str], + defaults: dict[str, t.Any] | None = None, + ) -> None: + super().__init__(defaults or {}) + self.root_path = root_path + + def from_envvar(self, variable_name: str, silent: bool = False) -> bool: + """Loads a configuration from an environment variable pointing to + a configuration file. 
This is basically just a shortcut with nicer + error messages for this line of code:: + + app.config.from_pyfile(os.environ['YOURAPPLICATION_SETTINGS']) + + :param variable_name: name of the environment variable + :param silent: set to ``True`` if you want silent failure for missing + files. + :return: ``True`` if the file was loaded successfully. + """ + rv = os.environ.get(variable_name) + if not rv: + if silent: + return False + raise RuntimeError( + f"The environment variable {variable_name!r} is not set" + " and as such configuration could not be loaded. Set" + " this variable and make it point to a configuration" + " file" + ) + return self.from_pyfile(rv, silent=silent) + + def from_prefixed_env( + self, prefix: str = "FLASK", *, loads: t.Callable[[str], t.Any] = json.loads + ) -> bool: + """Load any environment variables that start with ``FLASK_``, + dropping the prefix from the env key for the config key. Values + are passed through a loading function to attempt to convert them + to more specific types than strings. + + Keys are loaded in :func:`sorted` order. + + The default loading function attempts to parse values as any + valid JSON type, including dicts and lists. + + Specific items in nested dicts can be set by separating the + keys with double underscores (``__``). If an intermediate key + doesn't exist, it will be initialized to an empty dict. + + :param prefix: Load env vars that start with this prefix, + separated with an underscore (``_``). + :param loads: Pass each string value to this function and use + the returned value as the config value. If any error is + raised it is ignored and the value remains a string. The + default is :func:`json.loads`. + + .. versionadded:: 2.1 + """ + prefix = f"{prefix}_" + len_prefix = len(prefix) + + for key in sorted(os.environ): + if not key.startswith(prefix): + continue + + value = os.environ[key] + + try: + value = loads(value) + except Exception: + # Keep the value as a string if loading failed. + pass + + # Change to key.removeprefix(prefix) on Python >= 3.9. + key = key[len_prefix:] + + if "__" not in key: + # A non-nested key, set directly. + self[key] = value + continue + + # Traverse nested dictionaries with keys separated by "__". + current = self + *parts, tail = key.split("__") + + for part in parts: + # If an intermediate dict does not exist, create it. + if part not in current: + current[part] = {} + + current = current[part] + + current[tail] = value + + return True + + def from_pyfile( + self, filename: str | os.PathLike[str], silent: bool = False + ) -> bool: + """Updates the values in the config from a Python file. This function + behaves as if the file was imported as module with the + :meth:`from_object` function. + + :param filename: the filename of the config. This can either be an + absolute filename or a filename relative to the + root path. + :param silent: set to ``True`` if you want silent failure for missing + files. + :return: ``True`` if the file was loaded successfully. + + .. versionadded:: 0.7 + `silent` parameter. 
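A sketch of `from_prefixed_env`, showing the JSON value parsing and `__` nesting described above; the variable names are invented:

```python
import os
from flask import Flask

os.environ["FLASK_SECRET_KEY"] = "s3cret"           # not valid JSON, stays a string
os.environ["FLASK_MAX_CONTENT_LENGTH"] = "1048576"  # parsed as JSON -> int
os.environ["FLASK_MYAPP__NESTED"] = "true"          # double underscore nests dicts

app = Flask(__name__)
app.config.from_prefixed_env()

assert app.config["MAX_CONTENT_LENGTH"] == 1048576
assert app.config["MYAPP"] == {"NESTED": True}
```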
+ """ + filename = os.path.join(self.root_path, filename) + d = types.ModuleType("config") + d.__file__ = filename + try: + with open(filename, mode="rb") as config_file: + exec(compile(config_file.read(), filename, "exec"), d.__dict__) + except OSError as e: + if silent and e.errno in (errno.ENOENT, errno.EISDIR, errno.ENOTDIR): + return False + e.strerror = f"Unable to load configuration file ({e.strerror})" + raise + self.from_object(d) + return True + + def from_object(self, obj: object | str) -> None: + """Updates the values from the given object. An object can be of one + of the following two types: + + - a string: in this case the object with that name will be imported + - an actual object reference: that object is used directly + + Objects are usually either modules or classes. :meth:`from_object` + loads only the uppercase attributes of the module/class. A ``dict`` + object will not work with :meth:`from_object` because the keys of a + ``dict`` are not attributes of the ``dict`` class. + + Example of module-based configuration:: + + app.config.from_object('yourapplication.default_config') + from yourapplication import default_config + app.config.from_object(default_config) + + Nothing is done to the object before loading. If the object is a + class and has ``@property`` attributes, it needs to be + instantiated before being passed to this method. + + You should not use this function to load the actual configuration but + rather configuration defaults. The actual config should be loaded + with :meth:`from_pyfile` and ideally from a location not within the + package because the package might be installed system wide. + + See :ref:`config-dev-prod` for an example of class-based configuration + using :meth:`from_object`. + + :param obj: an import name or object + """ + if isinstance(obj, str): + obj = import_string(obj) + for key in dir(obj): + if key.isupper(): + self[key] = getattr(obj, key) + + def from_file( + self, + filename: str | os.PathLike[str], + load: t.Callable[[t.IO[t.Any]], t.Mapping[str, t.Any]], + silent: bool = False, + text: bool = True, + ) -> bool: + """Update the values in the config from a file that is loaded + using the ``load`` parameter. The loaded data is passed to the + :meth:`from_mapping` method. + + .. code-block:: python + + import json + app.config.from_file("config.json", load=json.load) + + import tomllib + app.config.from_file("config.toml", load=tomllib.load, text=False) + + :param filename: The path to the data file. This can be an + absolute path or relative to the config root path. + :param load: A callable that takes a file handle and returns a + mapping of loaded data from the file. + :type load: ``Callable[[Reader], Mapping]`` where ``Reader`` + implements a ``read`` method. + :param silent: Ignore the file if it doesn't exist. + :param text: Open the file in text or binary mode. + :return: ``True`` if the file was loaded successfully. + + .. versionchanged:: 2.3 + The ``text`` parameter was added. + + .. versionadded:: 2.0 + """ + filename = os.path.join(self.root_path, filename) + + try: + with open(filename, "r" if text else "rb") as f: + obj = load(f) + except OSError as e: + if silent and e.errno in (errno.ENOENT, errno.EISDIR): + return False + + e.strerror = f"Unable to load configuration file ({e.strerror})" + raise + + return self.from_mapping(obj) + + def from_mapping( + self, mapping: t.Mapping[str, t.Any] | None = None, **kwargs: t.Any + ) -> bool: + """Updates the config like :meth:`update` ignoring items with + non-upper keys. 
+ + :return: Always returns ``True``. + + .. versionadded:: 0.11 + """ + mappings: dict[str, t.Any] = {} + if mapping is not None: + mappings.update(mapping) + mappings.update(kwargs) + for key, value in mappings.items(): + if key.isupper(): + self[key] = value + return True + + def get_namespace( + self, namespace: str, lowercase: bool = True, trim_namespace: bool = True + ) -> dict[str, t.Any]: + """Returns a dictionary containing a subset of configuration options + that match the specified namespace/prefix. Example usage:: + + app.config['IMAGE_STORE_TYPE'] = 'fs' + app.config['IMAGE_STORE_PATH'] = '/var/app/images' + app.config['IMAGE_STORE_BASE_URL'] = 'http://img.website.com' + image_store_config = app.config.get_namespace('IMAGE_STORE_') + + The resulting dictionary `image_store_config` would look like:: + + { + 'type': 'fs', + 'path': '/var/app/images', + 'base_url': 'http://img.website.com' + } + + This is often useful when configuration options map directly to + keyword arguments in functions or class constructors. + + :param namespace: a configuration namespace + :param lowercase: a flag indicating if the keys of the resulting + dictionary should be lowercase + :param trim_namespace: a flag indicating if the keys of the resulting + dictionary should not include the namespace + + .. versionadded:: 0.11 + """ + rv = {} + for k, v in self.items(): + if not k.startswith(namespace): + continue + if trim_namespace: + key = k[len(namespace) :] + else: + key = k + if lowercase: + key = key.lower() + rv[key] = v + return rv + + def __repr__(self) -> str: + return f"<{type(self).__name__} {dict.__repr__(self)}>" diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/ctx.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/ctx.py new file mode 100644 index 000000000..9b164d390 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/ctx.py @@ -0,0 +1,449 @@ +from __future__ import annotations + +import contextvars +import sys +import typing as t +from functools import update_wrapper +from types import TracebackType + +from werkzeug.exceptions import HTTPException + +from . import typing as ft +from .globals import _cv_app +from .globals import _cv_request +from .signals import appcontext_popped +from .signals import appcontext_pushed + +if t.TYPE_CHECKING: # pragma: no cover + from _typeshed.wsgi import WSGIEnvironment + + from .app import Flask + from .sessions import SessionMixin + from .wrappers import Request + + +# a singleton sentinel value for parameter defaults +_sentinel = object() + + +class _AppCtxGlobals: + """A plain object. Used as a namespace for storing data during an + application context. + + Creating an app context automatically creates this object, which is + made available as the :data:`g` proxy. + + .. describe:: 'key' in g + + Check whether an attribute is present. + + .. versionadded:: 0.10 + + .. describe:: iter(g) + + Return an iterator over the attribute names. + + .. versionadded:: 0.10 + """ + + # Define attr methods to let mypy know this is a namespace object + # that has arbitrary attributes. 
+ + def __getattr__(self, name: str) -> t.Any: + try: + return self.__dict__[name] + except KeyError: + raise AttributeError(name) from None + + def __setattr__(self, name: str, value: t.Any) -> None: + self.__dict__[name] = value + + def __delattr__(self, name: str) -> None: + try: + del self.__dict__[name] + except KeyError: + raise AttributeError(name) from None + + def get(self, name: str, default: t.Any | None = None) -> t.Any: + """Get an attribute by name, or a default value. Like + :meth:`dict.get`. + + :param name: Name of attribute to get. + :param default: Value to return if the attribute is not present. + + .. versionadded:: 0.10 + """ + return self.__dict__.get(name, default) + + def pop(self, name: str, default: t.Any = _sentinel) -> t.Any: + """Get and remove an attribute by name. Like :meth:`dict.pop`. + + :param name: Name of attribute to pop. + :param default: Value to return if the attribute is not present, + instead of raising a ``KeyError``. + + .. versionadded:: 0.11 + """ + if default is _sentinel: + return self.__dict__.pop(name) + else: + return self.__dict__.pop(name, default) + + def setdefault(self, name: str, default: t.Any = None) -> t.Any: + """Get the value of an attribute if it is present, otherwise + set and return a default value. Like :meth:`dict.setdefault`. + + :param name: Name of attribute to get. + :param default: Value to set and return if the attribute is not + present. + + .. versionadded:: 0.11 + """ + return self.__dict__.setdefault(name, default) + + def __contains__(self, item: str) -> bool: + return item in self.__dict__ + + def __iter__(self) -> t.Iterator[str]: + return iter(self.__dict__) + + def __repr__(self) -> str: + ctx = _cv_app.get(None) + if ctx is not None: + return f"<flask.g of '{ctx.app.name}'>" + return object.__repr__(self) + + +def after_this_request( + f: ft.AfterRequestCallable[t.Any], +) -> ft.AfterRequestCallable[t.Any]: + """Executes a function after this request. This is useful to modify + response objects. The function is passed the response object and has + to return the same or a new one. + + Example:: + + @app.route('/') + def index(): + @after_this_request + def add_header(response): + response.headers['X-Foo'] = 'Parachute' + return response + return 'Hello World!' + + This is more useful if a function other than the view function wants to + modify a response. For instance, think of a decorator that wants to add + some headers without converting the return value into a response object. + + .. versionadded:: 0.9 + """ + ctx = _cv_request.get(None) + + if ctx is None: + raise RuntimeError( + "'after_this_request' can only be used when a request" + " context is active, such as in a view function." + ) + + ctx._after_request_functions.append(f) + return f + + +F = t.TypeVar("F", bound=t.Callable[..., t.Any]) + + +def copy_current_request_context(f: F) -> F: + """A helper function that decorates a function to retain the current + request context. This is useful when working with greenlets. The moment + the function is decorated a copy of the request context is created and + then pushed when the function is called. The current session is also + included in the copied request context. + + Example:: + + import gevent + from flask import copy_current_request_context + + @app.route('/') + def index(): + @copy_current_request_context + def do_some_work(): + # do some work here, it can access flask.request or + # flask.session like you would otherwise in the view function. + ... + gevent.spawn(do_some_work) + return 'Regular response' + + .. 
versionadded:: 0.10 + """ + ctx = _cv_request.get(None) + + if ctx is None: + raise RuntimeError( + "'copy_current_request_context' can only be used when a" + " request context is active, such as in a view function." + ) + + ctx = ctx.copy() + + def wrapper(*args: t.Any, **kwargs: t.Any) -> t.Any: + with ctx: # type: ignore[union-attr] + return ctx.app.ensure_sync(f)(*args, **kwargs) # type: ignore[union-attr] + + return update_wrapper(wrapper, f) # type: ignore[return-value] + + +def has_request_context() -> bool: + """If you have code that wants to test if a request context is there or + not this function can be used. For instance, you may want to take advantage + of request information if the request object is available, but fail + silently if it is unavailable. + + :: + + class User(db.Model): + + def __init__(self, username, remote_addr=None): + self.username = username + if remote_addr is None and has_request_context(): + remote_addr = request.remote_addr + self.remote_addr = remote_addr + + Alternatively you can also just test any of the context bound objects + (such as :class:`request` or :class:`g`) for truthiness:: + + class User(db.Model): + + def __init__(self, username, remote_addr=None): + self.username = username + if remote_addr is None and request: + remote_addr = request.remote_addr + self.remote_addr = remote_addr + + .. versionadded:: 0.7 + """ + return _cv_request.get(None) is not None + + +def has_app_context() -> bool: + """Works like :func:`has_request_context` but for the application + context. You can also just do a boolean check on the + :data:`current_app` object instead. + + .. versionadded:: 0.9 + """ + return _cv_app.get(None) is not None + + +class AppContext: + """The app context contains application-specific information. An app + context is created and pushed at the beginning of each request if + one is not already active. An app context is also pushed when + running CLI commands. + """ + + def __init__(self, app: Flask) -> None: + self.app = app + self.url_adapter = app.create_url_adapter(None) + self.g: _AppCtxGlobals = app.app_ctx_globals_class() + self._cv_tokens: list[contextvars.Token[AppContext]] = [] + + def push(self) -> None: + """Binds the app context to the current context.""" + self._cv_tokens.append(_cv_app.set(self)) + appcontext_pushed.send(self.app, _async_wrapper=self.app.ensure_sync) + + def pop(self, exc: BaseException | None = _sentinel) -> None: # type: ignore + """Pops the app context.""" + try: + if len(self._cv_tokens) == 1: + if exc is _sentinel: + exc = sys.exc_info()[1] + self.app.do_teardown_appcontext(exc) + finally: + ctx = _cv_app.get() + _cv_app.reset(self._cv_tokens.pop()) + + if ctx is not self: + raise AssertionError( + f"Popped wrong app context. ({ctx!r} instead of {self!r})" + ) + + appcontext_popped.send(self.app, _async_wrapper=self.app.ensure_sync) + + def __enter__(self) -> AppContext: + self.push() + return self + + def __exit__( + self, + exc_type: type | None, + exc_value: BaseException | None, + tb: TracebackType | None, + ) -> None: + self.pop(exc_value) + + +class RequestContext: + """The request context contains per-request information. The Flask + app creates and pushes it at the beginning of the request, then pops + it at the end of the request. It will create the URL adapter and + request object for the WSGI environment provided. + + Do not attempt to use this class directly, instead use + :meth:`~flask.Flask.test_request_context` and + :meth:`~flask.Flask.request_context` to create this object. 
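To make the context machinery above concrete, here is a small sketch of pushing an ``AppContext`` manually and caching a value on ``g``; the names ``cached_answer`` and ``answer`` are illustrative only:

```python
from flask import Flask, g, has_app_context

app = Flask(__name__)

def cached_answer():
    # g is unbound outside an app context, so guard with has_app_context().
    if not has_app_context():
        return 42
    if "answer" not in g:   # _AppCtxGlobals.__contains__
        g.answer = 42       # stored on this context's namespace only
    return g.answer

with app.app_context():     # AppContext.__enter__ pushes, __exit__ pops
    assert cached_answer() == 42
assert cached_answer() == 42  # no context active, falls back
```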
+ + When the request context is popped, it will evaluate all the + functions registered on the application for teardown execution + (:meth:`~flask.Flask.teardown_request`). + + The request context is automatically popped at the end of the + request. When using the interactive debugger, the context will be + restored so ``request`` is still accessible. Similarly, the test + client can preserve the context after the request ends. However, + teardown functions may already have closed some resources such as + database connections. + """ + + def __init__( + self, + app: Flask, + environ: WSGIEnvironment, + request: Request | None = None, + session: SessionMixin | None = None, + ) -> None: + self.app = app + if request is None: + request = app.request_class(environ) + request.json_module = app.json + self.request: Request = request + self.url_adapter = None + try: + self.url_adapter = app.create_url_adapter(self.request) + except HTTPException as e: + self.request.routing_exception = e + self.flashes: list[tuple[str, str]] | None = None + self.session: SessionMixin | None = session + # Functions that should be executed after the request on the response + # object. These will be called before the regular "after_request" + # functions. + self._after_request_functions: list[ft.AfterRequestCallable[t.Any]] = [] + + self._cv_tokens: list[ + tuple[contextvars.Token[RequestContext], AppContext | None] + ] = [] + + def copy(self) -> RequestContext: + """Creates a copy of this request context with the same request object. + This can be used to move a request context to a different greenlet. + Because the actual request object is the same this cannot be used to + move a request context to a different thread unless access to the + request object is locked. + + .. versionadded:: 0.10 + + .. versionchanged:: 1.1 + The current session object is used instead of reloading the original + data. This prevents `flask.session` pointing to an out-of-date object. + """ + return self.__class__( + self.app, + environ=self.request.environ, + request=self.request, + session=self.session, + ) + + def match_request(self) -> None: + """Can be overridden by a subclass to hook into the matching + of the request. + """ + try: + result = self.url_adapter.match(return_rule=True) # type: ignore + self.request.url_rule, self.request.view_args = result # type: ignore + except HTTPException as e: + self.request.routing_exception = e + + def push(self) -> None: + # Before we push the request context we have to ensure that there + # is an application context. + app_ctx = _cv_app.get(None) + + if app_ctx is None or app_ctx.app is not self.app: + app_ctx = self.app.app_context() + app_ctx.push() + else: + app_ctx = None + + self._cv_tokens.append((_cv_request.set(self), app_ctx)) + + # Open the session at the moment that the request context is available. + # This allows a custom open_session method to use the request context. + # Only open a new session if this is the first time the request was + # pushed, otherwise stream_with_context loses the session. + if self.session is None: + session_interface = self.app.session_interface + self.session = session_interface.open_session(self.app, self.request) + + if self.session is None: + self.session = session_interface.make_null_session(self.app) + + # Match the request URL after loading the session, so that the + # session is available in custom URL converters. 
+ if self.url_adapter is not None: + self.match_request() + + def pop(self, exc: BaseException | None = _sentinel) -> None: # type: ignore + """Pops the request context and unbinds it by doing that. This will + also trigger the execution of functions registered by the + :meth:`~flask.Flask.teardown_request` decorator. + + .. versionchanged:: 0.9 + Added the `exc` argument. + """ + clear_request = len(self._cv_tokens) == 1 + + try: + if clear_request: + if exc is _sentinel: + exc = sys.exc_info()[1] + self.app.do_teardown_request(exc) + + request_close = getattr(self.request, "close", None) + if request_close is not None: + request_close() + finally: + ctx = _cv_request.get() + token, app_ctx = self._cv_tokens.pop() + _cv_request.reset(token) + + # get rid of circular dependencies at the end of the request + # so that we don't require the GC to be active. + if clear_request: + ctx.request.environ["werkzeug.request"] = None + + if app_ctx is not None: + app_ctx.pop(exc) + + if ctx is not self: + raise AssertionError( + f"Popped wrong request context. ({ctx!r} instead of {self!r})" + ) + + def __enter__(self) -> RequestContext: + self.push() + return self + + def __exit__( + self, + exc_type: type | None, + exc_value: BaseException | None, + tb: TracebackType | None, + ) -> None: + self.pop(exc_value) + + def __repr__(self) -> str: + return ( + f"<{type(self).__name__} {self.request.url!r}" + f" [{self.request.method}] of {self.app.name}>" + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/debughelpers.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/debughelpers.py new file mode 100644 index 000000000..2c8c4c483 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/debughelpers.py @@ -0,0 +1,178 @@ +from __future__ import annotations + +import typing as t + +from jinja2.loaders import BaseLoader +from werkzeug.routing import RequestRedirect + +from .blueprints import Blueprint +from .globals import request_ctx +from .sansio.app import App + +if t.TYPE_CHECKING: + from .sansio.scaffold import Scaffold + from .wrappers import Request + + +class UnexpectedUnicodeError(AssertionError, UnicodeError): + """Raised in places where we want some better error reporting for + unexpected unicode or binary data. + """ + + +class DebugFilesKeyError(KeyError, AssertionError): + """Raised from request.files during debugging. The idea is that it can + provide a better error message than just a generic KeyError/BadRequest. + """ + + def __init__(self, request: Request, key: str) -> None: + form_matches = request.form.getlist(key) + buf = [ + f"You tried to access the file {key!r} in the request.files" + " dictionary but it does not exist. The mimetype for the" + f" request is {request.mimetype!r} instead of" + " 'multipart/form-data' which means that no file contents" + " were transmitted. To fix this error you should provide" + ' enctype="multipart/form-data" in your form.' + ] + if form_matches: + names = ", ".join(repr(x) for x in form_matches) + buf.append( + "\n\nThe browser instead transmitted some file names. " + f"This was submitted: {names}" + ) + self.msg = "".join(buf) + + def __str__(self) -> str: + return self.msg + + +class FormDataRoutingRedirect(AssertionError): + """This exception is raised in debug mode if a routing redirect + would cause the browser to drop the method or body. This happens + when method is not GET, HEAD or OPTIONS and the status code is not + 307 or 308. 
+ """ + + def __init__(self, request: Request) -> None: + exc = request.routing_exception + assert isinstance(exc, RequestRedirect) + buf = [ + f"A request was sent to '{request.url}', but routing issued" + f" a redirect to the canonical URL '{exc.new_url}'." + ] + + if f"{request.base_url}/" == exc.new_url.partition("?")[0]: + buf.append( + " The URL was defined with a trailing slash. Flask" + " will redirect to the URL with a trailing slash if it" + " was accessed without one." + ) + + buf.append( + " Send requests to the canonical URL, or use 307 or 308 for" + " routing redirects. Otherwise, browsers will drop form" + " data.\n\n" + "This exception is only raised in debug mode." + ) + super().__init__("".join(buf)) + + +def attach_enctype_error_multidict(request: Request) -> None: + """Patch ``request.files.__getitem__`` to raise a descriptive error + about ``enctype=multipart/form-data``. + + :param request: The request to patch. + :meta private: + """ + oldcls = request.files.__class__ + + class newcls(oldcls): # type: ignore[valid-type, misc] + def __getitem__(self, key: str) -> t.Any: + try: + return super().__getitem__(key) + except KeyError as e: + if key not in request.form: + raise + + raise DebugFilesKeyError(request, key).with_traceback( + e.__traceback__ + ) from None + + newcls.__name__ = oldcls.__name__ + newcls.__module__ = oldcls.__module__ + request.files.__class__ = newcls + + +def _dump_loader_info(loader: BaseLoader) -> t.Iterator[str]: + yield f"class: {type(loader).__module__}.{type(loader).__name__}" + for key, value in sorted(loader.__dict__.items()): + if key.startswith("_"): + continue + if isinstance(value, (tuple, list)): + if not all(isinstance(x, str) for x in value): + continue + yield f"{key}:" + for item in value: + yield f" - {item}" + continue + elif not isinstance(value, (str, int, float, bool)): + continue + yield f"{key}: {value!r}" + + +def explain_template_loading_attempts( + app: App, + template: str, + attempts: list[ + tuple[ + BaseLoader, + Scaffold, + tuple[str, str | None, t.Callable[[], bool] | None] | None, + ] + ], +) -> None: + """This should help developers understand what failed""" + info = [f"Locating template {template!r}:"] + total_found = 0 + blueprint = None + if request_ctx and request_ctx.request.blueprint is not None: + blueprint = request_ctx.request.blueprint + + for idx, (loader, srcobj, triple) in enumerate(attempts): + if isinstance(srcobj, App): + src_info = f"application {srcobj.import_name!r}" + elif isinstance(srcobj, Blueprint): + src_info = f"blueprint {srcobj.name!r} ({srcobj.import_name})" + else: + src_info = repr(srcobj) + + info.append(f"{idx + 1:5}: trying loader of {src_info}") + + for line in _dump_loader_info(loader): + info.append(f" {line}") + + if triple is None: + detail = "no match" + else: + detail = f"found ({triple[1] or ''!r})" + total_found += 1 + info.append(f" -> {detail}") + + seems_fishy = False + if total_found == 0: + info.append("Error: the template could not be found.") + seems_fishy = True + elif total_found > 1: + info.append("Warning: multiple loaders returned a match for the template.") + seems_fishy = True + + if blueprint is not None and seems_fishy: + info.append( + " The template was looked up from an endpoint that belongs" + f" to the blueprint {blueprint!r}." 
+ ) + info.append(" Maybe you did not place a template in the right folder?") + info.append(" See https://flask.palletsprojects.com/blueprints/#templates") + + app.logger.info("\n".join(info)) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/globals.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/globals.py new file mode 100644 index 000000000..e2c410cc5 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/globals.py @@ -0,0 +1,51 @@ +from __future__ import annotations + +import typing as t +from contextvars import ContextVar + +from werkzeug.local import LocalProxy + +if t.TYPE_CHECKING: # pragma: no cover + from .app import Flask + from .ctx import _AppCtxGlobals + from .ctx import AppContext + from .ctx import RequestContext + from .sessions import SessionMixin + from .wrappers import Request + + +_no_app_msg = """\ +Working outside of application context. + +This typically means that you attempted to use functionality that needed +the current application. To solve this, set up an application context +with app.app_context(). See the documentation for more information.\ +""" +_cv_app: ContextVar[AppContext] = ContextVar("flask.app_ctx") +app_ctx: AppContext = LocalProxy( # type: ignore[assignment] + _cv_app, unbound_message=_no_app_msg +) +current_app: Flask = LocalProxy( # type: ignore[assignment] + _cv_app, "app", unbound_message=_no_app_msg +) +g: _AppCtxGlobals = LocalProxy( # type: ignore[assignment] + _cv_app, "g", unbound_message=_no_app_msg +) + +_no_req_msg = """\ +Working outside of request context. + +This typically means that you attempted to use functionality that needed +an active HTTP request. Consult the documentation on testing for +information about how to avoid this problem.\ +""" +_cv_request: ContextVar[RequestContext] = ContextVar("flask.request_ctx") +request_ctx: RequestContext = LocalProxy( # type: ignore[assignment] + _cv_request, unbound_message=_no_req_msg +) +request: Request = LocalProxy( # type: ignore[assignment] + _cv_request, "request", unbound_message=_no_req_msg +) +session: SessionMixin = LocalProxy( # type: ignore[assignment] + _cv_request, "session", unbound_message=_no_req_msg +) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/helpers.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/helpers.py new file mode 100644 index 000000000..359a842af --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/helpers.py @@ -0,0 +1,621 @@ +from __future__ import annotations + +import importlib.util +import os +import sys +import typing as t +from datetime import datetime +from functools import lru_cache +from functools import update_wrapper + +import werkzeug.utils +from werkzeug.exceptions import abort as _wz_abort +from werkzeug.utils import redirect as _wz_redirect +from werkzeug.wrappers import Response as BaseResponse + +from .globals import _cv_request +from .globals import current_app +from .globals import request +from .globals import request_ctx +from .globals import session +from .signals import message_flashed + +if t.TYPE_CHECKING: # pragma: no cover + from .wrappers import Response + + +def get_debug_flag() -> bool: + """Get whether debug mode should be enabled for the app, indicated by the + :envvar:`FLASK_DEBUG` environment variable. The default is ``False``. 
+ """ + val = os.environ.get("FLASK_DEBUG") + return bool(val and val.lower() not in {"0", "false", "no"}) + + +def get_load_dotenv(default: bool = True) -> bool: + """Get whether the user has disabled loading default dotenv files by + setting :envvar:`FLASK_SKIP_DOTENV`. The default is ``True``, load + the files. + + :param default: What to return if the env var isn't set. + """ + val = os.environ.get("FLASK_SKIP_DOTENV") + + if not val: + return default + + return val.lower() in ("0", "false", "no") + + +def stream_with_context( + generator_or_function: t.Iterator[t.AnyStr] | t.Callable[..., t.Iterator[t.AnyStr]], +) -> t.Iterator[t.AnyStr]: + """Request contexts disappear when the response is started on the server. + This is done for efficiency reasons and to make it less likely to encounter + memory leaks with badly written WSGI middlewares. The downside is that if + you are using streamed responses, the generator cannot access request bound + information any more. + + This function however can help you keep the context around for longer:: + + from flask import stream_with_context, request, Response + + @app.route('/stream') + def streamed_response(): + @stream_with_context + def generate(): + yield 'Hello ' + yield request.args['name'] + yield '!' + return Response(generate()) + + Alternatively it can also be used around a specific generator:: + + from flask import stream_with_context, request, Response + + @app.route('/stream') + def streamed_response(): + def generate(): + yield 'Hello ' + yield request.args['name'] + yield '!' + return Response(stream_with_context(generate())) + + .. versionadded:: 0.9 + """ + try: + gen = iter(generator_or_function) # type: ignore[arg-type] + except TypeError: + + def decorator(*args: t.Any, **kwargs: t.Any) -> t.Any: + gen = generator_or_function(*args, **kwargs) # type: ignore[operator] + return stream_with_context(gen) + + return update_wrapper(decorator, generator_or_function) # type: ignore[arg-type] + + def generator() -> t.Iterator[t.AnyStr | None]: + ctx = _cv_request.get(None) + if ctx is None: + raise RuntimeError( + "'stream_with_context' can only be used when a request" + " context is active, such as in a view function." + ) + with ctx: + # Dummy sentinel. Has to be inside the context block or we're + # not actually keeping the context around. + yield None + + # The try/finally is here so that if someone passes a WSGI level + # iterator in we're still running the cleanup logic. Generators + # don't need that because they are closed on their destruction + # automatically. + try: + yield from gen + finally: + if hasattr(gen, "close"): + gen.close() + + # The trick is to start the generator. Then the code execution runs until + # the first dummy None is yielded at which point the context was already + # pushed. This item is discarded. Then when the iteration continues the + # real generator is executed. + wrapped_g = generator() + next(wrapped_g) + return wrapped_g # type: ignore[return-value] + + +def make_response(*args: t.Any) -> Response: + """Sometimes it is necessary to set additional headers in a view. Because + views do not have to return response objects but can return a value that + is converted into a response object by Flask itself, it becomes tricky to + add headers to it. This function can be called instead of using a return + and you will get a response object which you can use to attach headers. 
+ + If the view looked like this and you want to add a new header:: + + def index(): + return render_template('index.html', foo=42) + + You can now do something like this:: + + def index(): + response = make_response(render_template('index.html', foo=42)) + response.headers['X-Parachutes'] = 'parachutes are cool' + return response + + This function accepts the very same arguments you can return from a + view function. This for example creates a response with a 404 error + code:: + + response = make_response(render_template('not_found.html'), 404) + + The other use case of this function is to force the return value of a + view function into a response, which is helpful with view + decorators:: + + response = make_response(view_function()) + response.headers['X-Parachutes'] = 'parachutes are cool' + + Internally this function does the following things: + + - if no arguments are passed, it creates a new response object + - if one argument is passed, :meth:`flask.Flask.make_response` + is invoked with it. + - if more than one argument is passed, the arguments are passed + to the :meth:`flask.Flask.make_response` function as a tuple. + + .. versionadded:: 0.6 + """ + if not args: + return current_app.response_class() + if len(args) == 1: + args = args[0] + return current_app.make_response(args) + + +def url_for( + endpoint: str, + *, + _anchor: str | None = None, + _method: str | None = None, + _scheme: str | None = None, + _external: bool | None = None, + **values: t.Any, +) -> str: + """Generate a URL to the given endpoint with the given values. + + This requires an active request or application context, and calls + :meth:`current_app.url_for() <flask.Flask.url_for>`. See that method + for full documentation. + + :param endpoint: The endpoint name associated with the URL to + generate. If this starts with a ``.``, the current blueprint + name (if any) will be used. + :param _anchor: If given, append this as ``#anchor`` to the URL. + :param _method: If given, generate the URL associated with this + method for the endpoint. + :param _scheme: If given, the URL will have this scheme if it is + external. + :param _external: If given, prefer the URL to be internal (False) or + require it to be external (True). External URLs include the + scheme and domain. When not in an active request, URLs are + external by default. + :param values: Values to use for the variable parts of the URL rule. + Unknown keys are appended as query string arguments, like + ``?a=b&c=d``. + + .. versionchanged:: 2.2 + Calls ``current_app.url_for``, allowing an app to override the + behavior. + + .. versionchanged:: 0.10 + The ``_scheme`` parameter was added. + + .. versionchanged:: 0.9 + The ``_anchor`` and ``_method`` parameters were added. + + .. versionchanged:: 0.9 + Calls ``app.handle_url_build_error`` on build errors. + """ + return current_app.url_for( + endpoint, + _anchor=_anchor, + _method=_method, + _scheme=_scheme, + _external=_external, + **values, + ) + + +def redirect( + location: str, code: int = 302, Response: type[BaseResponse] | None = None +) -> BaseResponse: + """Create a redirect response object. + + If :data:`~flask.current_app` is available, it will use its + :meth:`~flask.Flask.redirect` method, otherwise it will use + :func:`werkzeug.utils.redirect`. + + :param location: The URL to redirect to. + :param code: The status code for the redirect. + :param Response: The response class to use. Not used when + ``current_app`` is active, which uses ``app.response_class``. + + .. 
versionadded:: 2.2 + Calls ``current_app.redirect`` if available instead of always + using Werkzeug's default ``redirect``. + """ + if current_app: + return current_app.redirect(location, code=code) + + return _wz_redirect(location, code=code, Response=Response) + + +def abort(code: int | BaseResponse, *args: t.Any, **kwargs: t.Any) -> t.NoReturn: + """Raise an :exc:`~werkzeug.exceptions.HTTPException` for the given + status code. + + If :data:`~flask.current_app` is available, it will call its + :attr:`~flask.Flask.aborter` object, otherwise it will use + :func:`werkzeug.exceptions.abort`. + + :param code: The status code for the exception, which must be + registered in ``app.aborter``. + :param args: Passed to the exception. + :param kwargs: Passed to the exception. + + .. versionadded:: 2.2 + Calls ``current_app.aborter`` if available instead of always + using Werkzeug's default ``abort``. + """ + if current_app: + current_app.aborter(code, *args, **kwargs) + + _wz_abort(code, *args, **kwargs) + + +def get_template_attribute(template_name: str, attribute: str) -> t.Any: + """Loads a macro (or variable) that a template exports. This can be used to + invoke a macro from within Python code. If you for example have a + template named :file:`_cider.html` with the following contents: + + .. sourcecode:: html+jinja + + {% macro hello(name) %}Hello {{ name }}!{% endmacro %} + + You can access this from Python code like this:: + + hello = get_template_attribute('_cider.html', 'hello') + return hello('World') + + .. versionadded:: 0.2 + + :param template_name: the name of the template + :param attribute: the name of the variable or macro to access + """ + return getattr(current_app.jinja_env.get_template(template_name).module, attribute) + + +def flash(message: str, category: str = "message") -> None: + """Flashes a message to the next request. In order to remove the + flashed message from the session and to display it to the user, + the template has to call :func:`get_flashed_messages`. + + .. versionchanged:: 0.3 + `category` parameter added. + + :param message: the message to be flashed. + :param category: the category for the message. The following values + are recommended: ``'message'`` for any kind of message, + ``'error'`` for errors, ``'info'`` for information + messages and ``'warning'`` for warnings. However any + kind of string can be used as category. + """ + # Original implementation: + # + # session.setdefault('_flashes', []).append((category, message)) + # + # This assumed that changes made to mutable structures in the session are + # always in sync with the session object, which is not true for session + # implementations that use external storage for keeping their keys/values. + flashes = session.get("_flashes", []) + flashes.append((category, message)) + session["_flashes"] = flashes + app = current_app._get_current_object() # type: ignore + message_flashed.send( + app, + _async_wrapper=app.ensure_sync, + message=message, + category=category, + ) + + +def get_flashed_messages( + with_categories: bool = False, category_filter: t.Iterable[str] = () +) -> list[str] | list[tuple[str, str]]: + """Pulls all flashed messages from the session and returns them. + Further calls in the same request to the function will return + the same messages. By default just the messages are returned, + but when `with_categories` is set to ``True``, the return value will + be a list of tuples in the form ``(category, message)`` instead. 
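A short usage sketch for the flashing helpers defined here; it assumes a configured ``SECRET_KEY``, since flashed messages are stored in the session:

```python
from flask import Flask, flash, get_flashed_messages

app = Flask(__name__)
app.config["SECRET_KEY"] = "dev"  # sessions (and thus flashing) need a key

@app.route("/save")
def save():
    flash("Saved successfully.", "info")
    return "ok"

@app.route("/messages")
def messages():
    # Pull only "info" messages, as (category, message) tuples.
    msgs = get_flashed_messages(with_categories=True, category_filter=["info"])
    return {"messages": msgs}
```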
+ + Filter the flashed messages to one or more categories by providing those + categories in `category_filter`. This allows rendering categories in + separate html blocks. The `with_categories` and `category_filter` + arguments are distinct: + + * `with_categories` controls whether categories are returned with message + text (``True`` gives a tuple, where ``False`` gives just the message text). + * `category_filter` filters the messages down to only those matching the + provided categories. + + See :doc:`/patterns/flashing` for examples. + + .. versionchanged:: 0.3 + `with_categories` parameter added. + + .. versionchanged:: 0.9 + `category_filter` parameter added. + + :param with_categories: set to ``True`` to also receive categories. + :param category_filter: filter of categories to limit return values. Only + categories in the list will be returned. + """ + flashes = request_ctx.flashes + if flashes is None: + flashes = session.pop("_flashes") if "_flashes" in session else [] + request_ctx.flashes = flashes + if category_filter: + flashes = list(filter(lambda f: f[0] in category_filter, flashes)) + if not with_categories: + return [x[1] for x in flashes] + return flashes + + +def _prepare_send_file_kwargs(**kwargs: t.Any) -> dict[str, t.Any]: + if kwargs.get("max_age") is None: + kwargs["max_age"] = current_app.get_send_file_max_age + + kwargs.update( + environ=request.environ, + use_x_sendfile=current_app.config["USE_X_SENDFILE"], + response_class=current_app.response_class, + _root_path=current_app.root_path, # type: ignore + ) + return kwargs + + +def send_file( + path_or_file: os.PathLike[t.AnyStr] | str | t.BinaryIO, + mimetype: str | None = None, + as_attachment: bool = False, + download_name: str | None = None, + conditional: bool = True, + etag: bool | str = True, + last_modified: datetime | int | float | None = None, + max_age: None | (int | t.Callable[[str | None], int | None]) = None, +) -> Response: + """Send the contents of a file to the client. + + The first argument can be a file path or a file-like object. Paths + are preferred in most cases because Werkzeug can manage the file and + get extra information from the path. Passing a file-like object + requires that the file is opened in binary mode, and is mostly + useful when building a file in memory with :class:`io.BytesIO`. + + Never pass file paths provided by a user. The path is assumed to be + trusted, so a user could craft a path to access a file you didn't + intend. Use :func:`send_from_directory` to safely serve + user-requested paths from within a directory. + + If the WSGI server sets a ``file_wrapper`` in ``environ``, it is + used, otherwise Werkzeug's built-in wrapper is used. Alternatively, + if the HTTP server supports ``X-Sendfile``, configuring Flask with + ``USE_X_SENDFILE = True`` will tell the server to send the given + path, which is much more efficient than reading it in Python. + + :param path_or_file: The path to the file to send, relative to the + current working directory if a relative path is given. + Alternatively, a file-like object opened in binary mode. Make + sure the file pointer is seeked to the start of the data. + :param mimetype: The MIME type to send for the file. If not + provided, it will try to detect it from the file name. + :param as_attachment: Indicate to a browser that it should offer to + save the file instead of displaying it. + :param download_name: The default name browsers will use when saving + the file. Defaults to the passed file name. 
+ :param conditional: Enable conditional and range responses based on + request headers. Requires passing a file path and ``environ``. + :param etag: Calculate an ETag for the file, which requires passing + a file path. Can also be a string to use instead. + :param last_modified: The last modified time to send for the file, + in seconds. If not provided, it will try to detect it from the + file path. + :param max_age: How long the client should cache the file, in + seconds. If set, ``Cache-Control`` will be ``public``, otherwise + it will be ``no-cache`` to prefer conditional caching. + + .. versionchanged:: 2.0 + ``download_name`` replaces the ``attachment_filename`` + parameter. If ``as_attachment=False``, it is passed with + ``Content-Disposition: inline`` instead. + + .. versionchanged:: 2.0 + ``max_age`` replaces the ``cache_timeout`` parameter. + ``conditional`` is enabled and ``max_age`` is not set by + default. + + .. versionchanged:: 2.0 + ``etag`` replaces the ``add_etags`` parameter. It can be a + string to use instead of generating one. + + .. versionchanged:: 2.0 + Passing a file-like object that inherits from + :class:`~io.TextIOBase` will raise a :exc:`ValueError` rather + than sending an empty file. + + .. versionadded:: 2.0 + Moved the implementation to Werkzeug. This is now a wrapper to + pass some Flask-specific arguments. + + .. versionchanged:: 1.1 + ``filename`` may be a :class:`~os.PathLike` object. + + .. versionchanged:: 1.1 + Passing a :class:`~io.BytesIO` object supports range requests. + + .. versionchanged:: 1.0.3 + Filenames are encoded with ASCII instead of Latin-1 for broader + compatibility with WSGI servers. + + .. versionchanged:: 1.0 + UTF-8 filenames as specified in :rfc:`2231` are supported. + + .. versionchanged:: 0.12 + The filename is no longer automatically inferred from file + objects. If you want to use automatic MIME and etag support, + pass a filename via ``filename_or_fp`` or + ``attachment_filename``. + + .. versionchanged:: 0.12 + ``attachment_filename`` is preferred over ``filename`` for MIME + detection. + + .. versionchanged:: 0.9 + ``cache_timeout`` defaults to + :meth:`Flask.get_send_file_max_age`. + + .. versionchanged:: 0.7 + MIME guessing and etag support for file-like objects was + removed because it was unreliable. Pass a filename if you are + able to, otherwise attach an etag yourself. + + .. versionchanged:: 0.5 + The ``add_etags``, ``cache_timeout`` and ``conditional`` + parameters were added. The default behavior is to add etags. + + .. versionadded:: 0.2 + """ + return werkzeug.utils.send_file( # type: ignore[return-value] + **_prepare_send_file_kwargs( + path_or_file=path_or_file, + environ=request.environ, + mimetype=mimetype, + as_attachment=as_attachment, + download_name=download_name, + conditional=conditional, + etag=etag, + last_modified=last_modified, + max_age=max_age, + ) + ) + + +def send_from_directory( + directory: os.PathLike[str] | str, + path: os.PathLike[str] | str, + **kwargs: t.Any, +) -> Response: + """Send a file from within a directory using :func:`send_file`. + + .. code-block:: python + + @app.route("/uploads/<path:name>") + def download_file(name): + return send_from_directory( + app.config['UPLOAD_FOLDER'], name, as_attachment=True + ) + + This is a secure way to serve files from a folder, such as static + files or uploads. Uses :func:`~werkzeug.security.safe_join` to + ensure the path coming from the client is not maliciously crafted to + point outside the specified directory. 
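To complement the directory-based example above, here is a sketch of ``send_file`` with an in-memory file; an explicit ``mimetype`` and ``download_name`` are needed because there is no path to inspect:

```python
import io

from flask import Flask, send_file

app = Flask(__name__)

@app.route("/report.csv")
def report():
    # File-like objects must be binary, e.g. io.BytesIO.
    buf = io.BytesIO(b"id,name\n1,alice\n")
    return send_file(
        buf,
        mimetype="text/csv",
        as_attachment=True,
        download_name="report.csv",
    )
```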
+ + If the final path does not point to an existing regular file, + raises a 404 :exc:`~werkzeug.exceptions.NotFound` error. + + :param directory: The directory that ``path`` must be located under, + relative to the current application's root path. + :param path: The path to the file to send, relative to + ``directory``. + :param kwargs: Arguments to pass to :func:`send_file`. + + .. versionchanged:: 2.0 + ``path`` replaces the ``filename`` parameter. + + .. versionadded:: 2.0 + Moved the implementation to Werkzeug. This is now a wrapper to + pass some Flask-specific arguments. + + .. versionadded:: 0.5 + """ + return werkzeug.utils.send_from_directory( # type: ignore[return-value] + directory, path, **_prepare_send_file_kwargs(**kwargs) + ) + + +def get_root_path(import_name: str) -> str: + """Find the root path of a package, or the path that contains a + module. If it cannot be found, returns the current working + directory. + + Not to be confused with the value returned by :func:`find_package`. + + :meta private: + """ + # Module already imported and has a file attribute. Use that first. + mod = sys.modules.get(import_name) + + if mod is not None and hasattr(mod, "__file__") and mod.__file__ is not None: + return os.path.dirname(os.path.abspath(mod.__file__)) + + # Next attempt: check the loader. + try: + spec = importlib.util.find_spec(import_name) + + if spec is None: + raise ValueError + except (ImportError, ValueError): + loader = None + else: + loader = spec.loader + + # Loader does not exist or we're referring to an unloaded main + # module or a main module without path (interactive sessions), go + # with the current working directory. + if loader is None: + return os.getcwd() + + if hasattr(loader, "get_filename"): + filepath = loader.get_filename(import_name) + else: + # Fall back to imports. + __import__(import_name) + mod = sys.modules[import_name] + filepath = getattr(mod, "__file__", None) + + # If we don't have a file path it might be because it is a + # namespace package. In this case pick the root path from the + # first module that is contained in the package. + if filepath is None: + raise RuntimeError( + "No root path can be found for the provided module" + f" {import_name!r}. This can happen because the module" + " came from an import hook that does not provide file" + " name information or because it's a namespace package." + " In this case the root path needs to be explicitly" + " provided." + ) + + # filepath is import_name.py for a module, or __init__.py for a package. + return os.path.dirname(os.path.abspath(filepath)) # type: ignore[no-any-return] + + +@lru_cache(maxsize=None) +def _split_blueprint_path(name: str) -> list[str]: + out: list[str] = [name] + + if "." in name: + out.extend(_split_blueprint_path(name.rpartition(".")[0])) + + return out diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/json/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/json/__init__.py new file mode 100644 index 000000000..c0941d049 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/json/__init__.py @@ -0,0 +1,170 @@ +from __future__ import annotations + +import json as _json +import typing as t + +from ..globals import current_app +from .provider import _default + +if t.TYPE_CHECKING: # pragma: no cover + from ..wrappers import Response + + +def dumps(obj: t.Any, **kwargs: t.Any) -> str: + """Serialize data as JSON. 
+ + If :data:`~flask.current_app` is available, it will use its + :meth:`app.json.dumps() <flask.json.provider.JSONProvider.dumps>` + method, otherwise it will use :func:`json.dumps`. + + :param obj: The data to serialize. + :param kwargs: Arguments passed to the ``dumps`` implementation. + + .. versionchanged:: 2.3 + The ``app`` parameter was removed. + + .. versionchanged:: 2.2 + Calls ``current_app.json.dumps``, allowing an app to override + the behavior. + + .. versionchanged:: 2.0.2 + :class:`decimal.Decimal` is supported by converting to a string. + + .. versionchanged:: 2.0 + ``encoding`` will be removed in Flask 2.1. + + .. versionchanged:: 1.0.3 + ``app`` can be passed directly, rather than requiring an app + context for configuration. + """ + if current_app: + return current_app.json.dumps(obj, **kwargs) + + kwargs.setdefault("default", _default) + return _json.dumps(obj, **kwargs) + + +def dump(obj: t.Any, fp: t.IO[str], **kwargs: t.Any) -> None: + """Serialize data as JSON and write to a file. + + If :data:`~flask.current_app` is available, it will use its + :meth:`app.json.dump() <flask.json.provider.JSONProvider.dump>` + method, otherwise it will use :func:`json.dump`. + + :param obj: The data to serialize. + :param fp: A file opened for writing text. Should use the UTF-8 + encoding to be valid JSON. + :param kwargs: Arguments passed to the ``dump`` implementation. + + .. versionchanged:: 2.3 + The ``app`` parameter was removed. + + .. versionchanged:: 2.2 + Calls ``current_app.json.dump``, allowing an app to override + the behavior. + + .. versionchanged:: 2.0 + Writing to a binary file, and the ``encoding`` argument, will be + removed in Flask 2.1. + """ + if current_app: + current_app.json.dump(obj, fp, **kwargs) + else: + kwargs.setdefault("default", _default) + _json.dump(obj, fp, **kwargs) + + +def loads(s: str | bytes, **kwargs: t.Any) -> t.Any: + """Deserialize data as JSON. + + If :data:`~flask.current_app` is available, it will use its + :meth:`app.json.loads() <flask.json.provider.JSONProvider.loads>` + method, otherwise it will use :func:`json.loads`. + + :param s: Text or UTF-8 bytes. + :param kwargs: Arguments passed to the ``loads`` implementation. + + .. versionchanged:: 2.3 + The ``app`` parameter was removed. + + .. versionchanged:: 2.2 + Calls ``current_app.json.loads``, allowing an app to override + the behavior. + + .. versionchanged:: 2.0 + ``encoding`` will be removed in Flask 2.1. The data must be a + string or UTF-8 bytes. + + .. versionchanged:: 1.0.3 + ``app`` can be passed directly, rather than requiring an app + context for configuration. + """ + if current_app: + return current_app.json.loads(s, **kwargs) + + return _json.loads(s, **kwargs) + + +def load(fp: t.IO[t.AnyStr], **kwargs: t.Any) -> t.Any: + """Deserialize data as JSON read from a file. + + If :data:`~flask.current_app` is available, it will use its + :meth:`app.json.load() <flask.json.provider.JSONProvider.load>` + method, otherwise it will use :func:`json.load`. + + :param fp: A file opened for reading text or UTF-8 bytes. + :param kwargs: Arguments passed to the ``load`` implementation. + + .. versionchanged:: 2.3 + The ``app`` parameter was removed. + + .. versionchanged:: 2.2 + Calls ``current_app.json.load``, allowing an app to override + the behavior. + + .. versionchanged:: 2.2 + The ``app`` parameter will be removed in Flask 2.3. + + .. versionchanged:: 2.0 + ``encoding`` will be removed in Flask 2.1. The file must be text + mode, or binary mode with UTF-8 bytes. 
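The app-aware fallback behavior described in these docstrings can be seen in a minimal sketch; with an app context active, the application's JSON provider handles extra types such as dates:

```python
from datetime import date

from flask import Flask, json  # flask.json, not the stdlib module

app = Flask(__name__)

with app.app_context():
    # Routed through app.json: dates serialize to RFC 822 strings.
    print(json.dumps({"today": date(2024, 1, 2)}))

# Outside any context, dumps() falls back to the stdlib json
# with the _default hook, which handles the same extra types.
print(json.dumps({"today": date(2024, 1, 2)}))
```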
+ """ + if current_app: + return current_app.json.load(fp, **kwargs) + + return _json.load(fp, **kwargs) + + +def jsonify(*args: t.Any, **kwargs: t.Any) -> Response: + """Serialize the given arguments as JSON, and return a + :class:`~flask.Response` object with the ``application/json`` + mimetype. A dict or list returned from a view will be converted to a + JSON response automatically without needing to call this. + + This requires an active request or application context, and calls + :meth:`app.json.response() `. + + In debug mode, the output is formatted with indentation to make it + easier to read. This may also be controlled by the provider. + + Either positional or keyword arguments can be given, not both. + If no arguments are given, ``None`` is serialized. + + :param args: A single value to serialize, or multiple values to + treat as a list to serialize. + :param kwargs: Treat as a dict to serialize. + + .. versionchanged:: 2.2 + Calls ``current_app.json.response``, allowing an app to override + the behavior. + + .. versionchanged:: 2.0.2 + :class:`decimal.Decimal` is supported by converting to a string. + + .. versionchanged:: 0.11 + Added support for serializing top-level arrays. This was a + security risk in ancient browsers. See :ref:`security-json`. + + .. versionadded:: 0.2 + """ + return current_app.json.response(*args, **kwargs) # type: ignore[return-value] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/json/provider.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/json/provider.py new file mode 100644 index 000000000..f9b2e8ff5 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/json/provider.py @@ -0,0 +1,215 @@ +from __future__ import annotations + +import dataclasses +import decimal +import json +import typing as t +import uuid +import weakref +from datetime import date + +from werkzeug.http import http_date + +if t.TYPE_CHECKING: # pragma: no cover + from werkzeug.sansio.response import Response + + from ..sansio.app import App + + +class JSONProvider: + """A standard set of JSON operations for an application. Subclasses + of this can be used to customize JSON behavior or use different + JSON libraries. + + To implement a provider for a specific library, subclass this base + class and implement at least :meth:`dumps` and :meth:`loads`. All + other methods have default implementations. + + To use a different provider, either subclass ``Flask`` and set + :attr:`~flask.Flask.json_provider_class` to a provider class, or set + :attr:`app.json ` to an instance of the class. + + :param app: An application instance. This will be stored as a + :class:`weakref.proxy` on the :attr:`_app` attribute. + + .. versionadded:: 2.2 + """ + + def __init__(self, app: App) -> None: + self._app: App = weakref.proxy(app) + + def dumps(self, obj: t.Any, **kwargs: t.Any) -> str: + """Serialize data as JSON. + + :param obj: The data to serialize. + :param kwargs: May be passed to the underlying JSON library. + """ + raise NotImplementedError + + def dump(self, obj: t.Any, fp: t.IO[str], **kwargs: t.Any) -> None: + """Serialize data as JSON and write to a file. + + :param obj: The data to serialize. + :param fp: A file opened for writing text. Should use the UTF-8 + encoding to be valid JSON. + :param kwargs: May be passed to the underlying JSON library. + """ + fp.write(self.dumps(obj, **kwargs)) + + def loads(self, s: str | bytes, **kwargs: t.Any) -> t.Any: + """Deserialize data as JSON. + + :param s: Text or UTF-8 bytes. 
+ :param kwargs: May be passed to the underlying JSON library. + """ + raise NotImplementedError + + def load(self, fp: t.IO[t.AnyStr], **kwargs: t.Any) -> t.Any: + """Deserialize data as JSON read from a file. + + :param fp: A file opened for reading text or UTF-8 bytes. + :param kwargs: May be passed to the underlying JSON library. + """ + return self.loads(fp.read(), **kwargs) + + def _prepare_response_obj( + self, args: tuple[t.Any, ...], kwargs: dict[str, t.Any] + ) -> t.Any: + if args and kwargs: + raise TypeError("app.json.response() takes either args or kwargs, not both") + + if not args and not kwargs: + return None + + if len(args) == 1: + return args[0] + + return args or kwargs + + def response(self, *args: t.Any, **kwargs: t.Any) -> Response: + """Serialize the given arguments as JSON, and return a + :class:`~flask.Response` object with the ``application/json`` + mimetype. + + The :func:`~flask.json.jsonify` function calls this method for + the current application. + + Either positional or keyword arguments can be given, not both. + If no arguments are given, ``None`` is serialized. + + :param args: A single value to serialize, or multiple values to + treat as a list to serialize. + :param kwargs: Treat as a dict to serialize. + """ + obj = self._prepare_response_obj(args, kwargs) + return self._app.response_class(self.dumps(obj), mimetype="application/json") + + +def _default(o: t.Any) -> t.Any: + if isinstance(o, date): + return http_date(o) + + if isinstance(o, (decimal.Decimal, uuid.UUID)): + return str(o) + + if dataclasses and dataclasses.is_dataclass(o): + return dataclasses.asdict(o) + + if hasattr(o, "__html__"): + return str(o.__html__()) + + raise TypeError(f"Object of type {type(o).__name__} is not JSON serializable") + + +class DefaultJSONProvider(JSONProvider): + """Provide JSON operations using Python's built-in :mod:`json` + library. Serializes the following additional data types: + + - :class:`datetime.datetime` and :class:`datetime.date` are + serialized to :rfc:`822` strings. This is the same as the HTTP + date format. + - :class:`uuid.UUID` is serialized to a string. + - :class:`dataclasses.dataclass` is passed to + :func:`dataclasses.asdict`. + - :class:`~markupsafe.Markup` (or any object with a ``__html__`` + method) will call the ``__html__`` method to get a string. + """ + + default: t.Callable[[t.Any], t.Any] = staticmethod(_default) # type: ignore[assignment] + """Apply this function to any object that :meth:`json.dumps` does + not know how to serialize. It should return a valid JSON type or + raise a ``TypeError``. + """ + + ensure_ascii = True + """Replace non-ASCII characters with escape sequences. This may be + more compatible with some clients, but can be disabled for better + performance and size. + """ + + sort_keys = True + """Sort the keys in any serialized dicts. This may be useful for + some caching situations, but can be disabled for better performance. + When enabled, keys must all be strings, they are not converted + before sorting. + """ + + compact: bool | None = None + """If ``True``, or ``None`` out of debug mode, the :meth:`response` + output will not add indentation, newlines, or spaces. If ``False``, + or ``None`` in debug mode, it will use a non-compact representation. + """ + + mimetype = "application/json" + """The mimetype set in :meth:`response`.""" + + def dumps(self, obj: t.Any, **kwargs: t.Any) -> str: + """Serialize data as JSON to a string. + + Keyword arguments are passed to :func:`json.dumps`. 
Sets some + parameter defaults from the :attr:`default`, + :attr:`ensure_ascii`, and :attr:`sort_keys` attributes. + + :param obj: The data to serialize. + :param kwargs: Passed to :func:`json.dumps`. + """ + kwargs.setdefault("default", self.default) + kwargs.setdefault("ensure_ascii", self.ensure_ascii) + kwargs.setdefault("sort_keys", self.sort_keys) + return json.dumps(obj, **kwargs) + + def loads(self, s: str | bytes, **kwargs: t.Any) -> t.Any: + """Deserialize data as JSON from a string or bytes. + + :param s: Text or UTF-8 bytes. + :param kwargs: Passed to :func:`json.loads`. + """ + return json.loads(s, **kwargs) + + def response(self, *args: t.Any, **kwargs: t.Any) -> Response: + """Serialize the given arguments as JSON, and return a + :class:`~flask.Response` object with it. The response mimetype + will be "application/json" and can be changed with + :attr:`mimetype`. + + If :attr:`compact` is ``False`` or debug mode is enabled, the + output will be formatted to be easier to read. + + Either positional or keyword arguments can be given, not both. + If no arguments are given, ``None`` is serialized. + + :param args: A single value to serialize, or multiple values to + treat as a list to serialize. + :param kwargs: Treat as a dict to serialize. + """ + obj = self._prepare_response_obj(args, kwargs) + dump_args: dict[str, t.Any] = {} + + if (self.compact is None and self._app.debug) or self.compact is False: + dump_args.setdefault("indent", 2) + else: + dump_args.setdefault("separators", (",", ":")) + + return self._app.response_class( + f"{self.dumps(obj, **dump_args)}\n", mimetype=self.mimetype + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/json/tag.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/json/tag.py new file mode 100644 index 000000000..8dc3629bf --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/json/tag.py @@ -0,0 +1,327 @@ +""" +Tagged JSON +~~~~~~~~~~~ + +A compact representation for lossless serialization of non-standard JSON +types. :class:`~flask.sessions.SecureCookieSessionInterface` uses this +to serialize the session data, but it may be useful in other places. It +can be extended to support other types. + +.. autoclass:: TaggedJSONSerializer + :members: + +.. autoclass:: JSONTag + :members: + +Let's see an example that adds support for +:class:`~collections.OrderedDict`. Dicts don't have an order in JSON, so +to handle this we will dump the items as a list of ``[key, value]`` +pairs. Subclass :class:`JSONTag` and give it the new key ``' od'`` to +identify the type. The session serializer processes dicts first, so +insert the new tag at the front of the order since ``OrderedDict`` must +be processed before ``dict``. + +.. 
code-block:: python + + from collections import OrderedDict + + from flask.json.tag import JSONTag + + class TagOrderedDict(JSONTag): + __slots__ = ('serializer',) + key = ' od' + + def check(self, value): + return isinstance(value, OrderedDict) + + def to_json(self, value): + return [[k, self.serializer.tag(v)] for k, v in value.items()] + + def to_python(self, value): + return OrderedDict(value) + + app.session_interface.serializer.register(TagOrderedDict, index=0) +""" + +from __future__ import annotations + +import typing as t +from base64 import b64decode +from base64 import b64encode +from datetime import datetime +from uuid import UUID + +from markupsafe import Markup +from werkzeug.http import http_date +from werkzeug.http import parse_date + +from ..json import dumps +from ..json import loads + + +class JSONTag: + """Base class for defining type tags for :class:`TaggedJSONSerializer`.""" + + __slots__ = ("serializer",) + + #: The tag to mark the serialized object with. If empty, this tag is + #: only used as an intermediate step during tagging. + key: str = "" + + def __init__(self, serializer: TaggedJSONSerializer) -> None: + """Create a tagger for the given serializer.""" + self.serializer = serializer + + def check(self, value: t.Any) -> bool: + """Check if the given value should be tagged by this tag.""" + raise NotImplementedError + + def to_json(self, value: t.Any) -> t.Any: + """Convert the Python object to an object that is a valid JSON type. + The tag will be added later.""" + raise NotImplementedError + + def to_python(self, value: t.Any) -> t.Any: + """Convert the JSON representation back to the correct type. The tag + will already be removed.""" + raise NotImplementedError + + def tag(self, value: t.Any) -> dict[str, t.Any]: + """Convert the value to a valid JSON type and add the tag structure + around it.""" + return {self.key: self.to_json(value)} + + +class TagDict(JSONTag): + """Tag for 1-item dicts whose only key matches a registered tag. + + Internally, the dict key is suffixed with `__`, and the suffix is removed + when deserializing. + """ + + __slots__ = () + key = " di" + + def check(self, value: t.Any) -> bool: + return ( + isinstance(value, dict) + and len(value) == 1 + and next(iter(value)) in self.serializer.tags + ) + + def to_json(self, value: t.Any) -> t.Any: + key = next(iter(value)) + return {f"{key}__": self.serializer.tag(value[key])} + + def to_python(self, value: t.Any) -> t.Any: + key = next(iter(value)) + return {key[:-2]: value[key]} + + +class PassDict(JSONTag): + __slots__ = () + + def check(self, value: t.Any) -> bool: + return isinstance(value, dict) + + def to_json(self, value: t.Any) -> t.Any: + # JSON objects may only have string keys, so don't bother tagging the + # key here. 
+ return {k: self.serializer.tag(v) for k, v in value.items()} + + tag = to_json + + +class TagTuple(JSONTag): + __slots__ = () + key = " t" + + def check(self, value: t.Any) -> bool: + return isinstance(value, tuple) + + def to_json(self, value: t.Any) -> t.Any: + return [self.serializer.tag(item) for item in value] + + def to_python(self, value: t.Any) -> t.Any: + return tuple(value) + + +class PassList(JSONTag): + __slots__ = () + + def check(self, value: t.Any) -> bool: + return isinstance(value, list) + + def to_json(self, value: t.Any) -> t.Any: + return [self.serializer.tag(item) for item in value] + + tag = to_json + + +class TagBytes(JSONTag): + __slots__ = () + key = " b" + + def check(self, value: t.Any) -> bool: + return isinstance(value, bytes) + + def to_json(self, value: t.Any) -> t.Any: + return b64encode(value).decode("ascii") + + def to_python(self, value: t.Any) -> t.Any: + return b64decode(value) + + +class TagMarkup(JSONTag): + """Serialize anything matching the :class:`~markupsafe.Markup` API by + having a ``__html__`` method to the result of that method. Always + deserializes to an instance of :class:`~markupsafe.Markup`.""" + + __slots__ = () + key = " m" + + def check(self, value: t.Any) -> bool: + return callable(getattr(value, "__html__", None)) + + def to_json(self, value: t.Any) -> t.Any: + return str(value.__html__()) + + def to_python(self, value: t.Any) -> t.Any: + return Markup(value) + + +class TagUUID(JSONTag): + __slots__ = () + key = " u" + + def check(self, value: t.Any) -> bool: + return isinstance(value, UUID) + + def to_json(self, value: t.Any) -> t.Any: + return value.hex + + def to_python(self, value: t.Any) -> t.Any: + return UUID(value) + + +class TagDateTime(JSONTag): + __slots__ = () + key = " d" + + def check(self, value: t.Any) -> bool: + return isinstance(value, datetime) + + def to_json(self, value: t.Any) -> t.Any: + return http_date(value) + + def to_python(self, value: t.Any) -> t.Any: + return parse_date(value) + + +class TaggedJSONSerializer: + """Serializer that uses a tag system to compactly represent objects that + are not JSON types. Passed as the intermediate serializer to + :class:`itsdangerous.Serializer`. + + The following extra types are supported: + + * :class:`dict` + * :class:`tuple` + * :class:`bytes` + * :class:`~markupsafe.Markup` + * :class:`~uuid.UUID` + * :class:`~datetime.datetime` + """ + + __slots__ = ("tags", "order") + + #: Tag classes to bind when creating the serializer. Other tags can be + #: added later using :meth:`~register`. + default_tags = [ + TagDict, + PassDict, + TagTuple, + PassList, + TagBytes, + TagMarkup, + TagUUID, + TagDateTime, + ] + + def __init__(self) -> None: + self.tags: dict[str, JSONTag] = {} + self.order: list[JSONTag] = [] + + for cls in self.default_tags: + self.register(cls) + + def register( + self, + tag_class: type[JSONTag], + force: bool = False, + index: int | None = None, + ) -> None: + """Register a new tag with this serializer. + + :param tag_class: tag class to register. Will be instantiated with this + serializer instance. + :param force: overwrite an existing tag. If false (default), a + :exc:`KeyError` is raised. + :param index: index to insert the new tag in the tag order. Useful when + the new tag is a special case of an existing tag. If ``None`` + (default), the tag is appended to the end of the order. + + :raise KeyError: if the tag key is already registered and ``force`` is + not true. 
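
As a usage sketch of the ``register`` API documented above, here is a tag that round-trips Python ``set`` objects through a plain ``TaggedJSONSerializer`` instance (rather than the one attached to the session interface). The class name ``TagSet`` and the ``' se'`` key are illustrative, not part of Flask:

```python
import typing as t

from flask.json.tag import JSONTag, TaggedJSONSerializer


class TagSet(JSONTag):
    """Hypothetical tag that stores a set as a JSON list."""

    __slots__ = ()
    key = " se"  # illustrative key; any unused key works

    def check(self, value: t.Any) -> bool:
        return isinstance(value, set)

    def to_json(self, value: t.Any) -> t.Any:
        # items are tagged too, so nested non-JSON types survive
        return [self.serializer.tag(item) for item in value]

    def to_python(self, value: t.Any) -> t.Any:
        return set(value)


serializer = TaggedJSONSerializer()
serializer.register(TagSet)  # appended to the end of the tag order

s = serializer.dumps({"ids": {1, 2, 3}})
assert serializer.loads(s)["ids"] == {1, 2, 3}
```
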
+ """ + tag = tag_class(self) + key = tag.key + + if key: + if not force and key in self.tags: + raise KeyError(f"Tag '{key}' is already registered.") + + self.tags[key] = tag + + if index is None: + self.order.append(tag) + else: + self.order.insert(index, tag) + + def tag(self, value: t.Any) -> t.Any: + """Convert a value to a tagged representation if necessary.""" + for tag in self.order: + if tag.check(value): + return tag.tag(value) + + return value + + def untag(self, value: dict[str, t.Any]) -> t.Any: + """Convert a tagged representation back to the original type.""" + if len(value) != 1: + return value + + key = next(iter(value)) + + if key not in self.tags: + return value + + return self.tags[key].to_python(value[key]) + + def _untag_scan(self, value: t.Any) -> t.Any: + if isinstance(value, dict): + # untag each item recursively + value = {k: self._untag_scan(v) for k, v in value.items()} + # untag the dict itself + value = self.untag(value) + elif isinstance(value, list): + # untag each item recursively + value = [self._untag_scan(item) for item in value] + + return value + + def dumps(self, value: t.Any) -> str: + """Tag the value and dump it to a compact JSON string.""" + return dumps(self.tag(value), separators=(",", ":")) + + def loads(self, value: str) -> t.Any: + """Load data from a JSON string and deserialized any tagged objects.""" + return self._untag_scan(loads(value)) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/logging.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/logging.py new file mode 100644 index 000000000..0cb8f4374 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/logging.py @@ -0,0 +1,79 @@ +from __future__ import annotations + +import logging +import sys +import typing as t + +from werkzeug.local import LocalProxy + +from .globals import request + +if t.TYPE_CHECKING: # pragma: no cover + from .sansio.app import App + + +@LocalProxy +def wsgi_errors_stream() -> t.TextIO: + """Find the most appropriate error stream for the application. If a request + is active, log to ``wsgi.errors``, otherwise use ``sys.stderr``. + + If you configure your own :class:`logging.StreamHandler`, you may want to + use this for the stream. If you are using file or dict configuration and + can't import this directly, you can refer to it as + ``ext://flask.logging.wsgi_errors_stream``. + """ + if request: + return request.environ["wsgi.errors"] # type: ignore[no-any-return] + + return sys.stderr + + +def has_level_handler(logger: logging.Logger) -> bool: + """Check if there is a handler in the logging chain that will handle the + given logger's :meth:`effective level <~logging.Logger.getEffectiveLevel>`. + """ + level = logger.getEffectiveLevel() + current = logger + + while current: + if any(handler.level <= level for handler in current.handlers): + return True + + if not current.propagate: + break + + current = current.parent # type: ignore + + return False + + +#: Log messages to :func:`~flask.logging.wsgi_errors_stream` with the format +#: ``[%(asctime)s] %(levelname)s in %(module)s: %(message)s``. +default_handler = logging.StreamHandler(wsgi_errors_stream) # type: ignore +default_handler.setFormatter( + logging.Formatter("[%(asctime)s] %(levelname)s in %(module)s: %(message)s") +) + + +def create_logger(app: App) -> logging.Logger: + """Get the Flask app's logger and configure it if needed. + + The logger name will be the same as + :attr:`app.import_name `. 
+ + When :attr:`~flask.Flask.debug` is enabled, set the logger level to + :data:`logging.DEBUG` if it is not set. + + If there is no handler for the logger's effective level, add a + :class:`~logging.StreamHandler` for + :func:`~flask.logging.wsgi_errors_stream` with a basic format. + """ + logger = logging.getLogger(app.name) + + if app.debug and not logger.level: + logger.setLevel(logging.DEBUG) + + if not has_level_handler(logger): + logger.addHandler(default_handler) + + return logger diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/py.typed b/staybuddy/server/venv/lib/python3.10/site-packages/flask/py.typed new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/README.md b/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/README.md new file mode 100644 index 000000000..623ac1982 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/README.md @@ -0,0 +1,6 @@ +# Sansio + +This folder contains code that can be used by alternative Flask +implementations, for example Quart. The code therefore cannot do any +IO, nor be part of a likely IO path. Finally this code cannot use the +Flask globals. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/app.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/app.py new file mode 100644 index 000000000..01fd5dbfa --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/app.py @@ -0,0 +1,964 @@ +from __future__ import annotations + +import logging +import os +import sys +import typing as t +from datetime import timedelta +from itertools import chain + +from werkzeug.exceptions import Aborter +from werkzeug.exceptions import BadRequest +from werkzeug.exceptions import BadRequestKeyError +from werkzeug.routing import BuildError +from werkzeug.routing import Map +from werkzeug.routing import Rule +from werkzeug.sansio.response import Response +from werkzeug.utils import cached_property +from werkzeug.utils import redirect as _wz_redirect + +from .. 
import typing as ft +from ..config import Config +from ..config import ConfigAttribute +from ..ctx import _AppCtxGlobals +from ..helpers import _split_blueprint_path +from ..helpers import get_debug_flag +from ..json.provider import DefaultJSONProvider +from ..json.provider import JSONProvider +from ..logging import create_logger +from ..templating import DispatchingJinjaLoader +from ..templating import Environment +from .scaffold import _endpoint_from_view_func +from .scaffold import find_package +from .scaffold import Scaffold +from .scaffold import setupmethod + +if t.TYPE_CHECKING: # pragma: no cover + from werkzeug.wrappers import Response as BaseResponse + + from ..testing import FlaskClient + from ..testing import FlaskCliRunner + from .blueprints import Blueprint + +T_shell_context_processor = t.TypeVar( + "T_shell_context_processor", bound=ft.ShellContextProcessorCallable +) +T_teardown = t.TypeVar("T_teardown", bound=ft.TeardownCallable) +T_template_filter = t.TypeVar("T_template_filter", bound=ft.TemplateFilterCallable) +T_template_global = t.TypeVar("T_template_global", bound=ft.TemplateGlobalCallable) +T_template_test = t.TypeVar("T_template_test", bound=ft.TemplateTestCallable) + + +def _make_timedelta(value: timedelta | int | None) -> timedelta | None: + if value is None or isinstance(value, timedelta): + return value + + return timedelta(seconds=value) + + +class App(Scaffold): + """The flask object implements a WSGI application and acts as the central + object. It is passed the name of the module or package of the + application. Once it is created it will act as a central registry for + the view functions, the URL rules, template configuration and much more. + + The name of the package is used to resolve resources from inside the + package or the folder the module is contained in depending on if the + package parameter resolves to an actual python package (a folder with + an :file:`__init__.py` file inside) or a standard module (just a ``.py`` file). + + For more information about resource loading, see :func:`open_resource`. + + Usually you create a :class:`Flask` instance in your main module or + in the :file:`__init__.py` file of your package like this:: + + from flask import Flask + app = Flask(__name__) + + .. admonition:: About the First Parameter + + The idea of the first parameter is to give Flask an idea of what + belongs to your application. This name is used to find resources + on the filesystem, can be used by extensions to improve debugging + information and a lot more. + + So it's important what you provide there. If you are using a single + module, `__name__` is always the correct value. If you however are + using a package, it's usually recommended to hardcode the name of + your package there. + + For example if your application is defined in :file:`yourapplication/app.py` + you should create it with one of the two versions below:: + + app = Flask('yourapplication') + app = Flask(__name__.split('.')[0]) + + Why is that? The application will work even with `__name__`, thanks + to how resources are looked up. However it will make debugging more + painful. Certain extensions can make assumptions based on the + import name of your application. For example the Flask-SQLAlchemy + extension will look for the code in your application that triggered + an SQL query in debug mode. If the import name is not properly set + up, that debugging information is lost. 
(For example it would only + pick up SQL queries in `yourapplication.app` and not + `yourapplication.views.frontend`) + + .. versionadded:: 0.7 + The `static_url_path`, `static_folder`, and `template_folder` + parameters were added. + + .. versionadded:: 0.8 + The `instance_path` and `instance_relative_config` parameters were + added. + + .. versionadded:: 0.11 + The `root_path` parameter was added. + + .. versionadded:: 1.0 + The ``host_matching`` and ``static_host`` parameters were added. + + .. versionadded:: 1.0 + The ``subdomain_matching`` parameter was added. Subdomain + matching needs to be enabled manually now. Setting + :data:`SERVER_NAME` does not implicitly enable it. + + :param import_name: the name of the application package + :param static_url_path: can be used to specify a different path for the + static files on the web. Defaults to the name + of the `static_folder` folder. + :param static_folder: The folder with static files that is served at + ``static_url_path``. Relative to the application ``root_path`` + or an absolute path. Defaults to ``'static'``. + :param static_host: the host to use when adding the static route. + Defaults to None. Required when using ``host_matching=True`` + with a ``static_folder`` configured. + :param host_matching: set ``url_map.host_matching`` attribute. + Defaults to False. + :param subdomain_matching: consider the subdomain relative to + :data:`SERVER_NAME` when matching routes. Defaults to False. + :param template_folder: the folder that contains the templates that should + be used by the application. Defaults to + ``'templates'`` folder in the root path of the + application. + :param instance_path: An alternative instance path for the application. + By default the folder ``'instance'`` next to the + package or module is assumed to be the instance + path. + :param instance_relative_config: if set to ``True`` relative filenames + for loading the config are assumed to + be relative to the instance path instead + of the application root. + :param root_path: The path to the root of the application files. + This should only be set manually when it can't be detected + automatically, such as for namespace packages. + """ + + #: The class of the object assigned to :attr:`aborter`, created by + #: :meth:`create_aborter`. That object is called by + #: :func:`flask.abort` to raise HTTP errors, and can be + #: called directly as well. + #: + #: Defaults to :class:`werkzeug.exceptions.Aborter`. + #: + #: .. versionadded:: 2.2 + aborter_class = Aborter + + #: The class that is used for the Jinja environment. + #: + #: .. versionadded:: 0.11 + jinja_environment = Environment + + #: The class that is used for the :data:`~flask.g` instance. + #: + #: Example use cases for a custom class: + #: + #: 1. Store arbitrary attributes on flask.g. + #: 2. Add a property for lazy per-request database connectors. + #: 3. Return None instead of AttributeError on unexpected attributes. + #: 4. Raise exception if an unexpected attr is set, a "controlled" flask.g. + #: + #: In Flask 0.9 this property was called `request_globals_class` but it + #: was changed in 0.10 to :attr:`app_ctx_globals_class` because the + #: flask.g object is now application context scoped. + #: + #: .. versionadded:: 0.10 + app_ctx_globals_class = _AppCtxGlobals + + #: The class that is used for the ``config`` attribute of this app. + #: Defaults to :class:`~flask.Config`. + #: + #: Example use cases for a custom class: + #: + #: 1. Default values for certain config options. + #: 2. 
Access to config values through attributes in addition to keys. + #: + #: .. versionadded:: 0.11 + config_class = Config + + #: The testing flag. Set this to ``True`` to enable the test mode of + #: Flask extensions (and in the future probably also Flask itself). + #: For example this might activate test helpers that have an + #: additional runtime cost which should not be enabled by default. + #: + #: If this is enabled and PROPAGATE_EXCEPTIONS is not changed from the + #: default it's implicitly enabled. + #: + #: This attribute can also be configured from the config with the + #: ``TESTING`` configuration key. Defaults to ``False``. + testing = ConfigAttribute[bool]("TESTING") + + #: If a secret key is set, cryptographic components can use this to + #: sign cookies and other things. Set this to a complex random value + #: when you want to use the secure cookie for instance. + #: + #: This attribute can also be configured from the config with the + #: :data:`SECRET_KEY` configuration key. Defaults to ``None``. + secret_key = ConfigAttribute[t.Union[str, bytes, None]]("SECRET_KEY") + + #: A :class:`~datetime.timedelta` which is used to set the expiration + #: date of a permanent session. The default is 31 days which makes a + #: permanent session survive for roughly one month. + #: + #: This attribute can also be configured from the config with the + #: ``PERMANENT_SESSION_LIFETIME`` configuration key. Defaults to + #: ``timedelta(days=31)`` + permanent_session_lifetime = ConfigAttribute[timedelta]( + "PERMANENT_SESSION_LIFETIME", + get_converter=_make_timedelta, # type: ignore[arg-type] + ) + + json_provider_class: type[JSONProvider] = DefaultJSONProvider + """A subclass of :class:`~flask.json.provider.JSONProvider`. An + instance is created and assigned to :attr:`app.json` when creating + the app. + + The default, :class:`~flask.json.provider.DefaultJSONProvider`, uses + Python's built-in :mod:`json` library. A different provider can use + a different JSON library. + + .. versionadded:: 2.2 + """ + + #: Options that are passed to the Jinja environment in + #: :meth:`create_jinja_environment`. Changing these options after + #: the environment is created (accessing :attr:`jinja_env`) will + #: have no effect. + #: + #: .. versionchanged:: 1.1.0 + #: This is a ``dict`` instead of an ``ImmutableDict`` to allow + #: easier configuration. + #: + jinja_options: dict[str, t.Any] = {} + + #: The rule object to use for URL rules created. This is used by + #: :meth:`add_url_rule`. Defaults to :class:`werkzeug.routing.Rule`. + #: + #: .. versionadded:: 0.7 + url_rule_class = Rule + + #: The map object to use for storing the URL rules and routing + #: configuration parameters. Defaults to :class:`werkzeug.routing.Map`. + #: + #: .. versionadded:: 1.1.0 + url_map_class = Map + + #: The :meth:`test_client` method creates an instance of this test + #: client class. Defaults to :class:`~flask.testing.FlaskClient`. + #: + #: .. versionadded:: 0.7 + test_client_class: type[FlaskClient] | None = None + + #: The :class:`~click.testing.CliRunner` subclass, by default + #: :class:`~flask.testing.FlaskCliRunner` that is used by + #: :meth:`test_cli_runner`. Its ``__init__`` method should take a + #: Flask app object as the first argument. + #: + #: .. 
versionadded:: 1.0 + test_cli_runner_class: type[FlaskCliRunner] | None = None + + default_config: dict[str, t.Any] + response_class: type[Response] + + def __init__( + self, + import_name: str, + static_url_path: str | None = None, + static_folder: str | os.PathLike[str] | None = "static", + static_host: str | None = None, + host_matching: bool = False, + subdomain_matching: bool = False, + template_folder: str | os.PathLike[str] | None = "templates", + instance_path: str | None = None, + instance_relative_config: bool = False, + root_path: str | None = None, + ): + super().__init__( + import_name=import_name, + static_folder=static_folder, + static_url_path=static_url_path, + template_folder=template_folder, + root_path=root_path, + ) + + if instance_path is None: + instance_path = self.auto_find_instance_path() + elif not os.path.isabs(instance_path): + raise ValueError( + "If an instance path is provided it must be absolute." + " A relative path was given instead." + ) + + #: Holds the path to the instance folder. + #: + #: .. versionadded:: 0.8 + self.instance_path = instance_path + + #: The configuration dictionary as :class:`Config`. This behaves + #: exactly like a regular dictionary but supports additional methods + #: to load a config from files. + self.config = self.make_config(instance_relative_config) + + #: An instance of :attr:`aborter_class` created by + #: :meth:`make_aborter`. This is called by :func:`flask.abort` + #: to raise HTTP errors, and can be called directly as well. + #: + #: .. versionadded:: 2.2 + #: Moved from ``flask.abort``, which calls this object. + self.aborter = self.make_aborter() + + self.json: JSONProvider = self.json_provider_class(self) + """Provides access to JSON methods. Functions in ``flask.json`` + will call methods on this provider when the application context + is active. Used for handling JSON requests and responses. + + An instance of :attr:`json_provider_class`. Can be customized by + changing that attribute on a subclass, or by assigning to this + attribute afterwards. + + The default, :class:`~flask.json.provider.DefaultJSONProvider`, + uses Python's built-in :mod:`json` library. A different provider + can use a different JSON library. + + .. versionadded:: 2.2 + """ + + #: A list of functions that are called by + #: :meth:`handle_url_build_error` when :meth:`.url_for` raises a + #: :exc:`~werkzeug.routing.BuildError`. Each function is called + #: with ``error``, ``endpoint`` and ``values``. If a function + #: returns ``None`` or raises a ``BuildError``, it is skipped. + #: Otherwise, its return value is returned by ``url_for``. + #: + #: .. versionadded:: 0.9 + self.url_build_error_handlers: list[ + t.Callable[[Exception, str, dict[str, t.Any]], str] + ] = [] + + #: A list of functions that are called when the application context + #: is destroyed. Since the application context is also torn down + #: if the request ends this is the place to store code that disconnects + #: from databases. + #: + #: .. versionadded:: 0.9 + self.teardown_appcontext_funcs: list[ft.TeardownCallable] = [] + + #: A list of shell context processor functions that should be run + #: when a shell context is created. + #: + #: .. versionadded:: 0.11 + self.shell_context_processors: list[ft.ShellContextProcessorCallable] = [] + + #: Maps registered blueprint names to blueprint objects. The + #: dict retains the order the blueprints were registered in. + #: Blueprints can be registered multiple times, this dict does + #: not track how often they were attached. 
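
As a sketch of customizing the ``json_provider_class`` attribute documented above (the subclass names are illustrative; ``ensure_ascii`` and ``compact`` are the provider attributes used by ``dumps`` and ``response`` earlier in this file):

```python
from flask import Flask
from flask.json.provider import DefaultJSONProvider


class CompactUTF8Provider(DefaultJSONProvider):
    """Illustrative subclass overriding the attributes read above."""

    ensure_ascii = False  # emit UTF-8 characters instead of \u escapes
    compact = True        # always use compact separators in responses


class MyFlask(Flask):
    json_provider_class = CompactUTF8Provider


app = MyFlask(__name__)  # app.json is now a CompactUTF8Provider instance
```
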
+ #: + #: .. versionadded:: 0.7 + self.blueprints: dict[str, Blueprint] = {} + + #: a place where extensions can store application specific state. For + #: example this is where an extension could store database engines and + #: similar things. + #: + #: The key must match the name of the extension module. For example in + #: case of a "Flask-Foo" extension in `flask_foo`, the key would be + #: ``'foo'``. + #: + #: .. versionadded:: 0.7 + self.extensions: dict[str, t.Any] = {} + + #: The :class:`~werkzeug.routing.Map` for this instance. You can use + #: this to change the routing converters after the class was created + #: but before any routes are connected. Example:: + #: + #: from werkzeug.routing import BaseConverter + #: + #: class ListConverter(BaseConverter): + #: def to_python(self, value): + #: return value.split(',') + #: def to_url(self, values): + #: return ','.join(super(ListConverter, self).to_url(value) + #: for value in values) + #: + #: app = Flask(__name__) + #: app.url_map.converters['list'] = ListConverter + self.url_map = self.url_map_class(host_matching=host_matching) + + self.subdomain_matching = subdomain_matching + + # tracks internally if the application already handled at least one + # request. + self._got_first_request = False + + def _check_setup_finished(self, f_name: str) -> None: + if self._got_first_request: + raise AssertionError( + f"The setup method '{f_name}' can no longer be called" + " on the application. It has already handled its first" + " request, any changes will not be applied" + " consistently.\n" + "Make sure all imports, decorators, functions, etc." + " needed to set up the application are done before" + " running it." + ) + + @cached_property + def name(self) -> str: # type: ignore + """The name of the application. This is usually the import name + with the difference that it's guessed from the run file if the + import name is main. This name is used as a display name when + Flask needs the name of the application. It can be set and overridden + to change the value. + + .. versionadded:: 0.8 + """ + if self.import_name == "__main__": + fn: str | None = getattr(sys.modules["__main__"], "__file__", None) + if fn is None: + return "__main__" + return os.path.splitext(os.path.basename(fn))[0] + return self.import_name + + @cached_property + def logger(self) -> logging.Logger: + """A standard Python :class:`~logging.Logger` for the app, with + the same name as :attr:`name`. + + In debug mode, the logger's :attr:`~logging.Logger.level` will + be set to :data:`~logging.DEBUG`. + + If there are no handlers configured, a default handler will be + added. See :doc:`/logging` for more information. + + .. versionchanged:: 1.1.0 + The logger takes the same name as :attr:`name` rather than + hard-coding ``"flask.app"``. + + .. versionchanged:: 1.0.0 + Behavior was simplified. The logger is always named + ``"flask.app"``. The level is only set during configuration, + it doesn't check ``app.debug`` each time. Only one format is + used, not different ones depending on ``app.debug``. No + handlers are removed, and a handler is only added if no + handlers are already configured. + + .. versionadded:: 0.3 + """ + return create_logger(self) + + @cached_property + def jinja_env(self) -> Environment: + """The Jinja environment used to load templates. + + The environment is created the first time this property is + accessed. Changing :attr:`jinja_options` after that will have no + effect. 
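
A short sketch of the lazy-creation behavior described above; ``trim_blocks`` and ``lstrip_blocks`` are just example Jinja options:

```python
from flask import Flask

app = Flask(__name__)

# Options only take effect if set before jinja_env is first accessed.
app.jinja_options = {**app.jinja_options, "trim_blocks": True}

env = app.jinja_env                        # environment created here, lazily
app.jinja_options["lstrip_blocks"] = True  # too late: env is unaffected
```
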
+ """ + return self.create_jinja_environment() + + def create_jinja_environment(self) -> Environment: + raise NotImplementedError() + + def make_config(self, instance_relative: bool = False) -> Config: + """Used to create the config attribute by the Flask constructor. + The `instance_relative` parameter is passed in from the constructor + of Flask (there named `instance_relative_config`) and indicates if + the config should be relative to the instance path or the root path + of the application. + + .. versionadded:: 0.8 + """ + root_path = self.root_path + if instance_relative: + root_path = self.instance_path + defaults = dict(self.default_config) + defaults["DEBUG"] = get_debug_flag() + return self.config_class(root_path, defaults) + + def make_aborter(self) -> Aborter: + """Create the object to assign to :attr:`aborter`. That object + is called by :func:`flask.abort` to raise HTTP errors, and can + be called directly as well. + + By default, this creates an instance of :attr:`aborter_class`, + which defaults to :class:`werkzeug.exceptions.Aborter`. + + .. versionadded:: 2.2 + """ + return self.aborter_class() + + def auto_find_instance_path(self) -> str: + """Tries to locate the instance path if it was not provided to the + constructor of the application class. It will basically calculate + the path to a folder named ``instance`` next to your main file or + the package. + + .. versionadded:: 0.8 + """ + prefix, package_path = find_package(self.import_name) + if prefix is None: + return os.path.join(package_path, "instance") + return os.path.join(prefix, "var", f"{self.name}-instance") + + def create_global_jinja_loader(self) -> DispatchingJinjaLoader: + """Creates the loader for the Jinja2 environment. Can be used to + override just the loader and keeping the rest unchanged. It's + discouraged to override this function. Instead one should override + the :meth:`jinja_loader` function instead. + + The global loader dispatches between the loaders of the application + and the individual blueprints. + + .. versionadded:: 0.7 + """ + return DispatchingJinjaLoader(self) + + def select_jinja_autoescape(self, filename: str) -> bool: + """Returns ``True`` if autoescaping should be active for the given + template name. If no template name is given, returns `True`. + + .. versionchanged:: 2.2 + Autoescaping is now enabled by default for ``.svg`` files. + + .. versionadded:: 0.5 + """ + if filename is None: + return True + return filename.endswith((".html", ".htm", ".xml", ".xhtml", ".svg")) + + @property + def debug(self) -> bool: + """Whether debug mode is enabled. When using ``flask run`` to start the + development server, an interactive debugger will be shown for unhandled + exceptions, and the server will be reloaded when code changes. This maps to the + :data:`DEBUG` config key. It may not behave as expected if set late. + + **Do not enable debug mode when deploying in production.** + + Default: ``False`` + """ + return self.config["DEBUG"] # type: ignore[no-any-return] + + @debug.setter + def debug(self, value: bool) -> None: + self.config["DEBUG"] = value + + if self.config["TEMPLATES_AUTO_RELOAD"] is None: + self.jinja_env.auto_reload = value + + @setupmethod + def register_blueprint(self, blueprint: Blueprint, **options: t.Any) -> None: + """Register a :class:`~flask.Blueprint` on the application. Keyword + arguments passed to this method will override the defaults set on the + blueprint. 
+ + Calls the blueprint's :meth:`~flask.Blueprint.register` method after + recording the blueprint in the application's :attr:`blueprints`. + + :param blueprint: The blueprint to register. + :param url_prefix: Blueprint routes will be prefixed with this. + :param subdomain: Blueprint routes will match on this subdomain. + :param url_defaults: Blueprint routes will use these default values for + view arguments. + :param options: Additional keyword arguments are passed to + :class:`~flask.blueprints.BlueprintSetupState`. They can be + accessed in :meth:`~flask.Blueprint.record` callbacks. + + .. versionchanged:: 2.0.1 + The ``name`` option can be used to change the (pre-dotted) + name the blueprint is registered with. This allows the same + blueprint to be registered multiple times with unique names + for ``url_for``. + + .. versionadded:: 0.7 + """ + blueprint.register(self, options) + + def iter_blueprints(self) -> t.ValuesView[Blueprint]: + """Iterates over all blueprints by the order they were registered. + + .. versionadded:: 0.11 + """ + return self.blueprints.values() + + @setupmethod + def add_url_rule( + self, + rule: str, + endpoint: str | None = None, + view_func: ft.RouteCallable | None = None, + provide_automatic_options: bool | None = None, + **options: t.Any, + ) -> None: + if endpoint is None: + endpoint = _endpoint_from_view_func(view_func) # type: ignore + options["endpoint"] = endpoint + methods = options.pop("methods", None) + + # if the methods are not given and the view_func object knows its + # methods we can use that instead. If neither exists, we go with + # a tuple of only ``GET`` as default. + if methods is None: + methods = getattr(view_func, "methods", None) or ("GET",) + if isinstance(methods, str): + raise TypeError( + "Allowed methods must be a list of strings, for" + ' example: @app.route(..., methods=["POST"])' + ) + methods = {item.upper() for item in methods} + + # Methods that should always be added + required_methods = set(getattr(view_func, "required_methods", ())) + + # starting with Flask 0.8 the view_func object can disable and + # force-enable the automatic options handling. + if provide_automatic_options is None: + provide_automatic_options = getattr( + view_func, "provide_automatic_options", None + ) + + if provide_automatic_options is None: + if "OPTIONS" not in methods: + provide_automatic_options = True + required_methods.add("OPTIONS") + else: + provide_automatic_options = False + + # Add the required methods now. + methods |= required_methods + + rule_obj = self.url_rule_class(rule, methods=methods, **options) + rule_obj.provide_automatic_options = provide_automatic_options # type: ignore[attr-defined] + + self.url_map.add(rule_obj) + if view_func is not None: + old_func = self.view_functions.get(endpoint) + if old_func is not None and old_func != view_func: + raise AssertionError( + "View function mapping is overwriting an existing" + f" endpoint function: {endpoint}" + ) + self.view_functions[endpoint] = view_func + + @setupmethod + def template_filter( + self, name: str | None = None + ) -> t.Callable[[T_template_filter], T_template_filter]: + """A decorator that is used to register custom template filter. + You can specify a name for the filter, otherwise the function + name will be used. Example:: + + @app.template_filter() + def reverse(s): + return s[::-1] + + :param name: the optional name of the filter, otherwise the + function name will be used. 
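
The ``add_url_rule`` logic above is what the ``route`` decorator ultimately calls; a sketch of an explicit registration (the ``/ping`` rule and ``ping`` names are illustrative). ``OPTIONS`` is added automatically because the view does not declare it:

```python
from flask import Flask

app = Flask(__name__)


def ping():
    return "pong"


# Equivalent to decorating ping with @app.route("/ping", methods=["GET", "POST"])
app.add_url_rule("/ping", endpoint="ping", view_func=ping, methods=["GET", "POST"])
```
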
+ """ + + def decorator(f: T_template_filter) -> T_template_filter: + self.add_template_filter(f, name=name) + return f + + return decorator + + @setupmethod + def add_template_filter( + self, f: ft.TemplateFilterCallable, name: str | None = None + ) -> None: + """Register a custom template filter. Works exactly like the + :meth:`template_filter` decorator. + + :param name: the optional name of the filter, otherwise the + function name will be used. + """ + self.jinja_env.filters[name or f.__name__] = f + + @setupmethod + def template_test( + self, name: str | None = None + ) -> t.Callable[[T_template_test], T_template_test]: + """A decorator that is used to register custom template test. + You can specify a name for the test, otherwise the function + name will be used. Example:: + + @app.template_test() + def is_prime(n): + if n == 2: + return True + for i in range(2, int(math.ceil(math.sqrt(n))) + 1): + if n % i == 0: + return False + return True + + .. versionadded:: 0.10 + + :param name: the optional name of the test, otherwise the + function name will be used. + """ + + def decorator(f: T_template_test) -> T_template_test: + self.add_template_test(f, name=name) + return f + + return decorator + + @setupmethod + def add_template_test( + self, f: ft.TemplateTestCallable, name: str | None = None + ) -> None: + """Register a custom template test. Works exactly like the + :meth:`template_test` decorator. + + .. versionadded:: 0.10 + + :param name: the optional name of the test, otherwise the + function name will be used. + """ + self.jinja_env.tests[name or f.__name__] = f + + @setupmethod + def template_global( + self, name: str | None = None + ) -> t.Callable[[T_template_global], T_template_global]: + """A decorator that is used to register a custom template global function. + You can specify a name for the global function, otherwise the function + name will be used. Example:: + + @app.template_global() + def double(n): + return 2 * n + + .. versionadded:: 0.10 + + :param name: the optional name of the global function, otherwise the + function name will be used. + """ + + def decorator(f: T_template_global) -> T_template_global: + self.add_template_global(f, name=name) + return f + + return decorator + + @setupmethod + def add_template_global( + self, f: ft.TemplateGlobalCallable, name: str | None = None + ) -> None: + """Register a custom template global function. Works exactly like the + :meth:`template_global` decorator. + + .. versionadded:: 0.10 + + :param name: the optional name of the global function, otherwise the + function name will be used. + """ + self.jinja_env.globals[name or f.__name__] = f + + @setupmethod + def teardown_appcontext(self, f: T_teardown) -> T_teardown: + """Registers a function to be called when the application + context is popped. The application context is typically popped + after the request context for each request, at the end of CLI + commands, or after a manually pushed context ends. + + .. code-block:: python + + with app.app_context(): + ... + + When the ``with`` block exits (or ``ctx.pop()`` is called), the + teardown functions are called just before the app context is + made inactive. Since a request context typically also manages an + application context it would also be called when you pop a + request context. + + When a teardown function was called because of an unhandled + exception it will be passed an error object. If an + :meth:`errorhandler` is registered, it will handle the exception + and the teardown will not receive it. 
+ + Teardown functions must avoid raising exceptions. If they + execute code that might fail they must surround that code with a + ``try``/``except`` block and log any errors. + + The return values of teardown functions are ignored. + + .. versionadded:: 0.9 + """ + self.teardown_appcontext_funcs.append(f) + return f + + @setupmethod + def shell_context_processor( + self, f: T_shell_context_processor + ) -> T_shell_context_processor: + """Registers a shell context processor function. + + .. versionadded:: 0.11 + """ + self.shell_context_processors.append(f) + return f + + def _find_error_handler( + self, e: Exception, blueprints: list[str] + ) -> ft.ErrorHandlerCallable | None: + """Return a registered error handler for an exception in this order: + blueprint handler for a specific code, app handler for a specific code, + blueprint handler for an exception class, app handler for an exception + class, or ``None`` if a suitable handler is not found. + """ + exc_class, code = self._get_exc_class_and_code(type(e)) + names = (*blueprints, None) + + for c in (code, None) if code is not None else (None,): + for name in names: + handler_map = self.error_handler_spec[name][c] + + if not handler_map: + continue + + for cls in exc_class.__mro__: + handler = handler_map.get(cls) + + if handler is not None: + return handler + return None + + def trap_http_exception(self, e: Exception) -> bool: + """Checks if an HTTP exception should be trapped or not. By default + this will return ``False`` for all exceptions except for a bad request + key error if ``TRAP_BAD_REQUEST_ERRORS`` is set to ``True``. It + also returns ``True`` if ``TRAP_HTTP_EXCEPTIONS`` is set to ``True``. + + This is called for all HTTP exceptions raised by a view function. + If it returns ``True`` for any exception the error handler for this + exception is not called and it shows up as regular exception in the + traceback. This is helpful for debugging implicitly raised HTTP + exceptions. + + .. versionchanged:: 1.0 + Bad request errors are not trapped by default in debug mode. + + .. versionadded:: 0.8 + """ + if self.config["TRAP_HTTP_EXCEPTIONS"]: + return True + + trap_bad_request = self.config["TRAP_BAD_REQUEST_ERRORS"] + + # if unset, trap key errors in debug mode + if ( + trap_bad_request is None + and self.debug + and isinstance(e, BadRequestKeyError) + ): + return True + + if trap_bad_request: + return isinstance(e, BadRequest) + + return False + + def should_ignore_error(self, error: BaseException | None) -> bool: + """This is called to figure out if an error should be ignored + or not as far as the teardown system is concerned. If this + function returns ``True`` then the teardown handlers will not be + passed the error. + + .. versionadded:: 0.10 + """ + return False + + def redirect(self, location: str, code: int = 302) -> BaseResponse: + """Create a redirect response object. + + This is called by :func:`flask.redirect`, and can be called + directly as well. + + :param location: The URL to redirect to. + :param code: The status code for the redirect. + + .. versionadded:: 2.2 + Moved from ``flask.redirect``, which calls this method. + """ + return _wz_redirect( + location, + code=code, + Response=self.response_class, # type: ignore[arg-type] + ) + + def inject_url_defaults(self, endpoint: str, values: dict[str, t.Any]) -> None: + """Injects the URL defaults for the given endpoint directly into + the values dictionary passed. This is used internally and + automatically called on URL building. + + .. 
versionadded:: 0.7 + """ + names: t.Iterable[str | None] = (None,) + + # url_for may be called outside a request context, parse the + # passed endpoint instead of using request.blueprints. + if "." in endpoint: + names = chain( + names, reversed(_split_blueprint_path(endpoint.rpartition(".")[0])) + ) + + for name in names: + if name in self.url_default_functions: + for func in self.url_default_functions[name]: + func(endpoint, values) + + def handle_url_build_error( + self, error: BuildError, endpoint: str, values: dict[str, t.Any] + ) -> str: + """Called by :meth:`.url_for` if a + :exc:`~werkzeug.routing.BuildError` was raised. If this returns + a value, it will be returned by ``url_for``, otherwise the error + will be re-raised. + + Each function in :attr:`url_build_error_handlers` is called with + ``error``, ``endpoint`` and ``values``. If a function returns + ``None`` or raises a ``BuildError``, it is skipped. Otherwise, + its return value is returned by ``url_for``. + + :param error: The active ``BuildError`` being handled. + :param endpoint: The endpoint being built. + :param values: The keyword arguments passed to ``url_for``. + """ + for handler in self.url_build_error_handlers: + try: + rv = handler(error, endpoint, values) + except BuildError as e: + # make error available outside except block + error = e + else: + if rv is not None: + return rv + + # Re-raise if called with an active exception, otherwise raise + # the passed in exception. + if error is sys.exc_info()[1]: + raise + + raise error diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/blueprints.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/blueprints.py new file mode 100644 index 000000000..4f912cca0 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/blueprints.py @@ -0,0 +1,632 @@ +from __future__ import annotations + +import os +import typing as t +from collections import defaultdict +from functools import update_wrapper + +from .. import typing as ft +from .scaffold import _endpoint_from_view_func +from .scaffold import _sentinel +from .scaffold import Scaffold +from .scaffold import setupmethod + +if t.TYPE_CHECKING: # pragma: no cover + from .app import App + +DeferredSetupFunction = t.Callable[["BlueprintSetupState"], None] +T_after_request = t.TypeVar("T_after_request", bound=ft.AfterRequestCallable[t.Any]) +T_before_request = t.TypeVar("T_before_request", bound=ft.BeforeRequestCallable) +T_error_handler = t.TypeVar("T_error_handler", bound=ft.ErrorHandlerCallable) +T_teardown = t.TypeVar("T_teardown", bound=ft.TeardownCallable) +T_template_context_processor = t.TypeVar( + "T_template_context_processor", bound=ft.TemplateContextProcessorCallable +) +T_template_filter = t.TypeVar("T_template_filter", bound=ft.TemplateFilterCallable) +T_template_global = t.TypeVar("T_template_global", bound=ft.TemplateGlobalCallable) +T_template_test = t.TypeVar("T_template_test", bound=ft.TemplateTestCallable) +T_url_defaults = t.TypeVar("T_url_defaults", bound=ft.URLDefaultCallable) +T_url_value_preprocessor = t.TypeVar( + "T_url_value_preprocessor", bound=ft.URLValuePreprocessorCallable +) + + +class BlueprintSetupState: + """Temporary holder object for registering a blueprint with the + application. An instance of this class is created by the + :meth:`~flask.Blueprint.make_setup_state` method and later passed + to all register callback functions. 
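
Stepping back to :attr:`url_build_error_handlers` above, a sketch of a fallback handler; the ``docs`` endpoint and URL are hypothetical. Raising the passed-in ``BuildError`` marks the handler as skipped, per the docstring:

```python
from flask import Flask

app = Flask(__name__)


def external_url_handler(error, endpoint, values):
    # Hypothetical fallback for endpoints served by another service.
    if endpoint == "docs":
        return "https://example.com/docs"

    raise error  # a BuildError, so this handler is skipped


app.url_build_error_handlers.append(external_url_handler)
```
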
+ """ + + def __init__( + self, + blueprint: Blueprint, + app: App, + options: t.Any, + first_registration: bool, + ) -> None: + #: a reference to the current application + self.app = app + + #: a reference to the blueprint that created this setup state. + self.blueprint = blueprint + + #: a dictionary with all options that were passed to the + #: :meth:`~flask.Flask.register_blueprint` method. + self.options = options + + #: as blueprints can be registered multiple times with the + #: application and not everything wants to be registered + #: multiple times on it, this attribute can be used to figure + #: out if the blueprint was registered in the past already. + self.first_registration = first_registration + + subdomain = self.options.get("subdomain") + if subdomain is None: + subdomain = self.blueprint.subdomain + + #: The subdomain that the blueprint should be active for, ``None`` + #: otherwise. + self.subdomain = subdomain + + url_prefix = self.options.get("url_prefix") + if url_prefix is None: + url_prefix = self.blueprint.url_prefix + #: The prefix that should be used for all URLs defined on the + #: blueprint. + self.url_prefix = url_prefix + + self.name = self.options.get("name", blueprint.name) + self.name_prefix = self.options.get("name_prefix", "") + + #: A dictionary with URL defaults that is added to each and every + #: URL that was defined with the blueprint. + self.url_defaults = dict(self.blueprint.url_values_defaults) + self.url_defaults.update(self.options.get("url_defaults", ())) + + def add_url_rule( + self, + rule: str, + endpoint: str | None = None, + view_func: ft.RouteCallable | None = None, + **options: t.Any, + ) -> None: + """A helper method to register a rule (and optionally a view function) + to the application. The endpoint is automatically prefixed with the + blueprint's name. + """ + if self.url_prefix is not None: + if rule: + rule = "/".join((self.url_prefix.rstrip("/"), rule.lstrip("/"))) + else: + rule = self.url_prefix + options.setdefault("subdomain", self.subdomain) + if endpoint is None: + endpoint = _endpoint_from_view_func(view_func) # type: ignore + defaults = self.url_defaults + if "defaults" in options: + defaults = dict(defaults, **options.pop("defaults")) + + self.app.add_url_rule( + rule, + f"{self.name_prefix}.{self.name}.{endpoint}".lstrip("."), + view_func, + defaults=defaults, + **options, + ) + + +class Blueprint(Scaffold): + """Represents a blueprint, a collection of routes and other + app-related functions that can be registered on a real application + later. + + A blueprint is an object that allows defining application functions + without requiring an application object ahead of time. It uses the + same decorators as :class:`~flask.Flask`, but defers the need for an + application by recording them for later registration. + + Decorating a function with a blueprint creates a deferred function + that is called with :class:`~flask.blueprints.BlueprintSetupState` + when the blueprint is registered on an application. + + See :doc:`/blueprints` for more information. + + :param name: The name of the blueprint. Will be prepended to each + endpoint name. + :param import_name: The name of the blueprint package, usually + ``__name__``. This helps locate the ``root_path`` for the + blueprint. + :param static_folder: A folder with static files that should be + served by the blueprint's static route. The path is relative to + the blueprint's root path. Blueprint static files are disabled + by default. 
+ :param static_url_path: The url to serve static files from. + Defaults to ``static_folder``. If the blueprint does not have + a ``url_prefix``, the app's static route will take precedence, + and the blueprint's static files won't be accessible. + :param template_folder: A folder with templates that should be added + to the app's template search path. The path is relative to the + blueprint's root path. Blueprint templates are disabled by + default. Blueprint templates have a lower precedence than those + in the app's templates folder. + :param url_prefix: A path to prepend to all of the blueprint's URLs, + to make them distinct from the rest of the app's routes. + :param subdomain: A subdomain that blueprint routes will match on by + default. + :param url_defaults: A dict of default values that blueprint routes + will receive by default. + :param root_path: By default, the blueprint will automatically set + this based on ``import_name``. In certain situations this + automatic detection can fail, so the path can be specified + manually instead. + + .. versionchanged:: 1.1.0 + Blueprints have a ``cli`` group to register nested CLI commands. + The ``cli_group`` parameter controls the name of the group under + the ``flask`` command. + + .. versionadded:: 0.7 + """ + + _got_registered_once = False + + def __init__( + self, + name: str, + import_name: str, + static_folder: str | os.PathLike[str] | None = None, + static_url_path: str | None = None, + template_folder: str | os.PathLike[str] | None = None, + url_prefix: str | None = None, + subdomain: str | None = None, + url_defaults: dict[str, t.Any] | None = None, + root_path: str | None = None, + cli_group: str | None = _sentinel, # type: ignore[assignment] + ): + super().__init__( + import_name=import_name, + static_folder=static_folder, + static_url_path=static_url_path, + template_folder=template_folder, + root_path=root_path, + ) + + if not name: + raise ValueError("'name' may not be empty.") + + if "." in name: + raise ValueError("'name' may not contain a dot '.' character.") + + self.name = name + self.url_prefix = url_prefix + self.subdomain = subdomain + self.deferred_functions: list[DeferredSetupFunction] = [] + + if url_defaults is None: + url_defaults = {} + + self.url_values_defaults = url_defaults + self.cli_group = cli_group + self._blueprints: list[tuple[Blueprint, dict[str, t.Any]]] = [] + + def _check_setup_finished(self, f_name: str) -> None: + if self._got_registered_once: + raise AssertionError( + f"The setup method '{f_name}' can no longer be called on the blueprint" + f" '{self.name}'. It has already been registered at least once, any" + " changes will not be applied consistently.\n" + "Make sure all imports, decorators, functions, etc. needed to set up" + " the blueprint are done before registering it." + ) + + @setupmethod + def record(self, func: DeferredSetupFunction) -> None: + """Registers a function that is called when the blueprint is + registered on the application. This function is called with the + state as argument as returned by the :meth:`make_setup_state` + method. + """ + self.deferred_functions.append(func) + + @setupmethod + def record_once(self, func: DeferredSetupFunction) -> None: + """Works like :meth:`record` but wraps the function in another + function that will ensure the function is only called once. If the + blueprint is registered a second time on the application, the + function passed is not called. 
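
For reference, a blueprint built with the constructor parameters documented above might look like this; the ``admin`` name and folder paths are illustrative:

```python
from flask import Blueprint

admin = Blueprint(
    "admin",
    __name__,
    url_prefix="/admin",
    static_folder="static",      # enables the blueprint's static route
    template_folder="templates",
)


@admin.route("/")
def index():
    return "admin index"
```
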
+ """ + + def wrapper(state: BlueprintSetupState) -> None: + if state.first_registration: + func(state) + + self.record(update_wrapper(wrapper, func)) + + def make_setup_state( + self, app: App, options: dict[str, t.Any], first_registration: bool = False + ) -> BlueprintSetupState: + """Creates an instance of :meth:`~flask.blueprints.BlueprintSetupState` + object that is later passed to the register callback functions. + Subclasses can override this to return a subclass of the setup state. + """ + return BlueprintSetupState(self, app, options, first_registration) + + @setupmethod + def register_blueprint(self, blueprint: Blueprint, **options: t.Any) -> None: + """Register a :class:`~flask.Blueprint` on this blueprint. Keyword + arguments passed to this method will override the defaults set + on the blueprint. + + .. versionchanged:: 2.0.1 + The ``name`` option can be used to change the (pre-dotted) + name the blueprint is registered with. This allows the same + blueprint to be registered multiple times with unique names + for ``url_for``. + + .. versionadded:: 2.0 + """ + if blueprint is self: + raise ValueError("Cannot register a blueprint on itself") + self._blueprints.append((blueprint, options)) + + def register(self, app: App, options: dict[str, t.Any]) -> None: + """Called by :meth:`Flask.register_blueprint` to register all + views and callbacks registered on the blueprint with the + application. Creates a :class:`.BlueprintSetupState` and calls + each :meth:`record` callback with it. + + :param app: The application this blueprint is being registered + with. + :param options: Keyword arguments forwarded from + :meth:`~Flask.register_blueprint`. + + .. versionchanged:: 2.3 + Nested blueprints now correctly apply subdomains. + + .. versionchanged:: 2.1 + Registering the same blueprint with the same name multiple + times is an error. + + .. versionchanged:: 2.0.1 + Nested blueprints are registered with their dotted name. + This allows different blueprints with the same name to be + nested at different locations. + + .. versionchanged:: 2.0.1 + The ``name`` option can be used to change the (pre-dotted) + name the blueprint is registered with. This allows the same + blueprint to be registered multiple times with unique names + for ``url_for``. + """ + name_prefix = options.get("name_prefix", "") + self_name = options.get("name", self.name) + name = f"{name_prefix}.{self_name}".lstrip(".") + + if name in app.blueprints: + bp_desc = "this" if app.blueprints[name] is self else "a different" + existing_at = f" '{name}'" if self_name != name else "" + + raise ValueError( + f"The name '{self_name}' is already registered for" + f" {bp_desc} blueprint{existing_at}. Use 'name=' to" + f" provide a unique name." + ) + + first_bp_registration = not any(bp is self for bp in app.blueprints.values()) + first_name_registration = name not in app.blueprints + + app.blueprints[name] = self + self._got_registered_once = True + state = self.make_setup_state(app, options, first_bp_registration) + + if self.has_static_folder: + state.add_url_rule( + f"{self.static_url_path}/", + view_func=self.send_static_file, # type: ignore[attr-defined] + endpoint="static", + ) + + # Merge blueprint data into parent. 
+ if first_bp_registration or first_name_registration: + self._merge_blueprint_funcs(app, name) + + for deferred in self.deferred_functions: + deferred(state) + + cli_resolved_group = options.get("cli_group", self.cli_group) + + if self.cli.commands: + if cli_resolved_group is None: + app.cli.commands.update(self.cli.commands) + elif cli_resolved_group is _sentinel: + self.cli.name = name + app.cli.add_command(self.cli) + else: + self.cli.name = cli_resolved_group + app.cli.add_command(self.cli) + + for blueprint, bp_options in self._blueprints: + bp_options = bp_options.copy() + bp_url_prefix = bp_options.get("url_prefix") + bp_subdomain = bp_options.get("subdomain") + + if bp_subdomain is None: + bp_subdomain = blueprint.subdomain + + if state.subdomain is not None and bp_subdomain is not None: + bp_options["subdomain"] = bp_subdomain + "." + state.subdomain + elif bp_subdomain is not None: + bp_options["subdomain"] = bp_subdomain + elif state.subdomain is not None: + bp_options["subdomain"] = state.subdomain + + if bp_url_prefix is None: + bp_url_prefix = blueprint.url_prefix + + if state.url_prefix is not None and bp_url_prefix is not None: + bp_options["url_prefix"] = ( + state.url_prefix.rstrip("/") + "/" + bp_url_prefix.lstrip("/") + ) + elif bp_url_prefix is not None: + bp_options["url_prefix"] = bp_url_prefix + elif state.url_prefix is not None: + bp_options["url_prefix"] = state.url_prefix + + bp_options["name_prefix"] = name + blueprint.register(app, bp_options) + + def _merge_blueprint_funcs(self, app: App, name: str) -> None: + def extend( + bp_dict: dict[ft.AppOrBlueprintKey, list[t.Any]], + parent_dict: dict[ft.AppOrBlueprintKey, list[t.Any]], + ) -> None: + for key, values in bp_dict.items(): + key = name if key is None else f"{name}.{key}" + parent_dict[key].extend(values) + + for key, value in self.error_handler_spec.items(): + key = name if key is None else f"{name}.{key}" + value = defaultdict( + dict, + { + code: {exc_class: func for exc_class, func in code_values.items()} + for code, code_values in value.items() + }, + ) + app.error_handler_spec[key] = value + + for endpoint, func in self.view_functions.items(): + app.view_functions[endpoint] = func + + extend(self.before_request_funcs, app.before_request_funcs) + extend(self.after_request_funcs, app.after_request_funcs) + extend( + self.teardown_request_funcs, + app.teardown_request_funcs, + ) + extend(self.url_default_functions, app.url_default_functions) + extend(self.url_value_preprocessors, app.url_value_preprocessors) + extend(self.template_context_processors, app.template_context_processors) + + @setupmethod + def add_url_rule( + self, + rule: str, + endpoint: str | None = None, + view_func: ft.RouteCallable | None = None, + provide_automatic_options: bool | None = None, + **options: t.Any, + ) -> None: + """Register a URL rule with the blueprint. See :meth:`.Flask.add_url_rule` for + full documentation. + + The URL rule is prefixed with the blueprint's URL prefix. The endpoint name, + used with :func:`url_for`, is prefixed with the blueprint's name. + """ + if endpoint and "." in endpoint: + raise ValueError("'endpoint' may not contain a dot '.' character.") + + if view_func and hasattr(view_func, "__name__") and "." in view_func.__name__: + raise ValueError("'view_func' name may not contain a dot '.' 
character.") + + self.record( + lambda s: s.add_url_rule( + rule, + endpoint, + view_func, + provide_automatic_options=provide_automatic_options, + **options, + ) + ) + + @setupmethod + def app_template_filter( + self, name: str | None = None + ) -> t.Callable[[T_template_filter], T_template_filter]: + """Register a template filter, available in any template rendered by the + application. Equivalent to :meth:`.Flask.template_filter`. + + :param name: the optional name of the filter, otherwise the + function name will be used. + """ + + def decorator(f: T_template_filter) -> T_template_filter: + self.add_app_template_filter(f, name=name) + return f + + return decorator + + @setupmethod + def add_app_template_filter( + self, f: ft.TemplateFilterCallable, name: str | None = None + ) -> None: + """Register a template filter, available in any template rendered by the + application. Works like the :meth:`app_template_filter` decorator. Equivalent to + :meth:`.Flask.add_template_filter`. + + :param name: the optional name of the filter, otherwise the + function name will be used. + """ + + def register_template(state: BlueprintSetupState) -> None: + state.app.jinja_env.filters[name or f.__name__] = f + + self.record_once(register_template) + + @setupmethod + def app_template_test( + self, name: str | None = None + ) -> t.Callable[[T_template_test], T_template_test]: + """Register a template test, available in any template rendered by the + application. Equivalent to :meth:`.Flask.template_test`. + + .. versionadded:: 0.10 + + :param name: the optional name of the test, otherwise the + function name will be used. + """ + + def decorator(f: T_template_test) -> T_template_test: + self.add_app_template_test(f, name=name) + return f + + return decorator + + @setupmethod + def add_app_template_test( + self, f: ft.TemplateTestCallable, name: str | None = None + ) -> None: + """Register a template test, available in any template rendered by the + application. Works like the :meth:`app_template_test` decorator. Equivalent to + :meth:`.Flask.add_template_test`. + + .. versionadded:: 0.10 + + :param name: the optional name of the test, otherwise the + function name will be used. + """ + + def register_template(state: BlueprintSetupState) -> None: + state.app.jinja_env.tests[name or f.__name__] = f + + self.record_once(register_template) + + @setupmethod + def app_template_global( + self, name: str | None = None + ) -> t.Callable[[T_template_global], T_template_global]: + """Register a template global, available in any template rendered by the + application. Equivalent to :meth:`.Flask.template_global`. + + .. versionadded:: 0.10 + + :param name: the optional name of the global, otherwise the + function name will be used. + """ + + def decorator(f: T_template_global) -> T_template_global: + self.add_app_template_global(f, name=name) + return f + + return decorator + + @setupmethod + def add_app_template_global( + self, f: ft.TemplateGlobalCallable, name: str | None = None + ) -> None: + """Register a template global, available in any template rendered by the + application. Works like the :meth:`app_template_global` decorator. Equivalent to + :meth:`.Flask.add_template_global`. + + .. versionadded:: 0.10 + + :param name: the optional name of the global, otherwise the + function name will be used. 
+ """ + + def register_template(state: BlueprintSetupState) -> None: + state.app.jinja_env.globals[name or f.__name__] = f + + self.record_once(register_template) + + @setupmethod + def before_app_request(self, f: T_before_request) -> T_before_request: + """Like :meth:`before_request`, but before every request, not only those handled + by the blueprint. Equivalent to :meth:`.Flask.before_request`. + """ + self.record_once( + lambda s: s.app.before_request_funcs.setdefault(None, []).append(f) + ) + return f + + @setupmethod + def after_app_request(self, f: T_after_request) -> T_after_request: + """Like :meth:`after_request`, but after every request, not only those handled + by the blueprint. Equivalent to :meth:`.Flask.after_request`. + """ + self.record_once( + lambda s: s.app.after_request_funcs.setdefault(None, []).append(f) + ) + return f + + @setupmethod + def teardown_app_request(self, f: T_teardown) -> T_teardown: + """Like :meth:`teardown_request`, but after every request, not only those + handled by the blueprint. Equivalent to :meth:`.Flask.teardown_request`. + """ + self.record_once( + lambda s: s.app.teardown_request_funcs.setdefault(None, []).append(f) + ) + return f + + @setupmethod + def app_context_processor( + self, f: T_template_context_processor + ) -> T_template_context_processor: + """Like :meth:`context_processor`, but for templates rendered by every view, not + only by the blueprint. Equivalent to :meth:`.Flask.context_processor`. + """ + self.record_once( + lambda s: s.app.template_context_processors.setdefault(None, []).append(f) + ) + return f + + @setupmethod + def app_errorhandler( + self, code: type[Exception] | int + ) -> t.Callable[[T_error_handler], T_error_handler]: + """Like :meth:`errorhandler`, but for every request, not only those handled by + the blueprint. Equivalent to :meth:`.Flask.errorhandler`. + """ + + def decorator(f: T_error_handler) -> T_error_handler: + def from_blueprint(state: BlueprintSetupState) -> None: + state.app.errorhandler(code)(f) + + self.record_once(from_blueprint) + return f + + return decorator + + @setupmethod + def app_url_value_preprocessor( + self, f: T_url_value_preprocessor + ) -> T_url_value_preprocessor: + """Like :meth:`url_value_preprocessor`, but for every request, not only those + handled by the blueprint. Equivalent to :meth:`.Flask.url_value_preprocessor`. + """ + self.record_once( + lambda s: s.app.url_value_preprocessors.setdefault(None, []).append(f) + ) + return f + + @setupmethod + def app_url_defaults(self, f: T_url_defaults) -> T_url_defaults: + """Like :meth:`url_defaults`, but for every request, not only those handled by + the blueprint. Equivalent to :meth:`.Flask.url_defaults`. 
+ """ + self.record_once( + lambda s: s.app.url_default_functions.setdefault(None, []).append(f) + ) + return f diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/scaffold.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/scaffold.py new file mode 100644 index 000000000..69e33a095 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/sansio/scaffold.py @@ -0,0 +1,801 @@ +from __future__ import annotations + +import importlib.util +import os +import pathlib +import sys +import typing as t +from collections import defaultdict +from functools import update_wrapper + +from jinja2 import BaseLoader +from jinja2 import FileSystemLoader +from werkzeug.exceptions import default_exceptions +from werkzeug.exceptions import HTTPException +from werkzeug.utils import cached_property + +from .. import typing as ft +from ..helpers import get_root_path +from ..templating import _default_template_ctx_processor + +if t.TYPE_CHECKING: # pragma: no cover + from click import Group + +# a singleton sentinel value for parameter defaults +_sentinel = object() + +F = t.TypeVar("F", bound=t.Callable[..., t.Any]) +T_after_request = t.TypeVar("T_after_request", bound=ft.AfterRequestCallable[t.Any]) +T_before_request = t.TypeVar("T_before_request", bound=ft.BeforeRequestCallable) +T_error_handler = t.TypeVar("T_error_handler", bound=ft.ErrorHandlerCallable) +T_teardown = t.TypeVar("T_teardown", bound=ft.TeardownCallable) +T_template_context_processor = t.TypeVar( + "T_template_context_processor", bound=ft.TemplateContextProcessorCallable +) +T_url_defaults = t.TypeVar("T_url_defaults", bound=ft.URLDefaultCallable) +T_url_value_preprocessor = t.TypeVar( + "T_url_value_preprocessor", bound=ft.URLValuePreprocessorCallable +) +T_route = t.TypeVar("T_route", bound=ft.RouteCallable) + + +def setupmethod(f: F) -> F: + f_name = f.__name__ + + def wrapper_func(self: Scaffold, *args: t.Any, **kwargs: t.Any) -> t.Any: + self._check_setup_finished(f_name) + return f(self, *args, **kwargs) + + return t.cast(F, update_wrapper(wrapper_func, f)) + + +class Scaffold: + """Common behavior shared between :class:`~flask.Flask` and + :class:`~flask.blueprints.Blueprint`. + + :param import_name: The import name of the module where this object + is defined. Usually :attr:`__name__` should be used. + :param static_folder: Path to a folder of static files to serve. + If this is set, a static route will be added. + :param static_url_path: URL prefix for the static route. + :param template_folder: Path to a folder containing template files. + for rendering. If this is set, a Jinja loader will be added. + :param root_path: The path that static, template, and resource files + are relative to. Typically not set, it is discovered based on + the ``import_name``. + + .. versionadded:: 2.0 + """ + + cli: Group + name: str + _static_folder: str | None = None + _static_url_path: str | None = None + + def __init__( + self, + import_name: str, + static_folder: str | os.PathLike[str] | None = None, + static_url_path: str | None = None, + template_folder: str | os.PathLike[str] | None = None, + root_path: str | None = None, + ): + #: The name of the package or module that this object belongs + #: to. Do not change this once it is set by the constructor. + self.import_name = import_name + + self.static_folder = static_folder # type: ignore + self.static_url_path = static_url_path + + #: The path to the templates folder, relative to + #: :attr:`root_path`, to add to the template loader. 
``None`` if + #: templates should not be added. + self.template_folder = template_folder + + if root_path is None: + root_path = get_root_path(self.import_name) + + #: Absolute path to the package on the filesystem. Used to look + #: up resources contained in the package. + self.root_path = root_path + + #: A dictionary mapping endpoint names to view functions. + #: + #: To register a view function, use the :meth:`route` decorator. + #: + #: This data structure is internal. It should not be modified + #: directly and its format may change at any time. + self.view_functions: dict[str, ft.RouteCallable] = {} + + #: A data structure of registered error handlers, in the format + #: ``{scope: {code: {class: handler}}}``. The ``scope`` key is + #: the name of a blueprint the handlers are active for, or + #: ``None`` for all requests. The ``code`` key is the HTTP + #: status code for ``HTTPException``, or ``None`` for + #: other exceptions. The innermost dictionary maps exception + #: classes to handler functions. + #: + #: To register an error handler, use the :meth:`errorhandler` + #: decorator. + #: + #: This data structure is internal. It should not be modified + #: directly and its format may change at any time. + self.error_handler_spec: dict[ + ft.AppOrBlueprintKey, + dict[int | None, dict[type[Exception], ft.ErrorHandlerCallable]], + ] = defaultdict(lambda: defaultdict(dict)) + + #: A data structure of functions to call at the beginning of + #: each request, in the format ``{scope: [functions]}``. The + #: ``scope`` key is the name of a blueprint the functions are + #: active for, or ``None`` for all requests. + #: + #: To register a function, use the :meth:`before_request` + #: decorator. + #: + #: This data structure is internal. It should not be modified + #: directly and its format may change at any time. + self.before_request_funcs: dict[ + ft.AppOrBlueprintKey, list[ft.BeforeRequestCallable] + ] = defaultdict(list) + + #: A data structure of functions to call at the end of each + #: request, in the format ``{scope: [functions]}``. The + #: ``scope`` key is the name of a blueprint the functions are + #: active for, or ``None`` for all requests. + #: + #: To register a function, use the :meth:`after_request` + #: decorator. + #: + #: This data structure is internal. It should not be modified + #: directly and its format may change at any time. + self.after_request_funcs: dict[ + ft.AppOrBlueprintKey, list[ft.AfterRequestCallable[t.Any]] + ] = defaultdict(list) + + #: A data structure of functions to call at the end of each + #: request even if an exception is raised, in the format + #: ``{scope: [functions]}``. The ``scope`` key is the name of a + #: blueprint the functions are active for, or ``None`` for all + #: requests. + #: + #: To register a function, use the :meth:`teardown_request` + #: decorator. + #: + #: This data structure is internal. It should not be modified + #: directly and its format may change at any time. + self.teardown_request_funcs: dict[ + ft.AppOrBlueprintKey, list[ft.TeardownCallable] + ] = defaultdict(list) + + #: A data structure of functions to call to pass extra context + #: values when rendering templates, in the format + #: ``{scope: [functions]}``. The ``scope`` key is the name of a + #: blueprint the functions are active for, or ``None`` for all + #: requests. + #: + #: To register a function, use the :meth:`context_processor` + #: decorator. + #: + #: This data structure is internal. 
It should not be modified + #: directly and its format may change at any time. + self.template_context_processors: dict[ + ft.AppOrBlueprintKey, list[ft.TemplateContextProcessorCallable] + ] = defaultdict(list, {None: [_default_template_ctx_processor]}) + + #: A data structure of functions to call to modify the keyword + #: arguments passed to the view function, in the format + #: ``{scope: [functions]}``. The ``scope`` key is the name of a + #: blueprint the functions are active for, or ``None`` for all + #: requests. + #: + #: To register a function, use the + #: :meth:`url_value_preprocessor` decorator. + #: + #: This data structure is internal. It should not be modified + #: directly and its format may change at any time. + self.url_value_preprocessors: dict[ + ft.AppOrBlueprintKey, + list[ft.URLValuePreprocessorCallable], + ] = defaultdict(list) + + #: A data structure of functions to call to modify the keyword + #: arguments when generating URLs, in the format + #: ``{scope: [functions]}``. The ``scope`` key is the name of a + #: blueprint the functions are active for, or ``None`` for all + #: requests. + #: + #: To register a function, use the :meth:`url_defaults` + #: decorator. + #: + #: This data structure is internal. It should not be modified + #: directly and its format may change at any time. + self.url_default_functions: dict[ + ft.AppOrBlueprintKey, list[ft.URLDefaultCallable] + ] = defaultdict(list) + + def __repr__(self) -> str: + return f"<{type(self).__name__} {self.name!r}>" + + def _check_setup_finished(self, f_name: str) -> None: + raise NotImplementedError + + @property + def static_folder(self) -> str | None: + """The absolute path to the configured static folder. ``None`` + if no static folder is set. + """ + if self._static_folder is not None: + return os.path.join(self.root_path, self._static_folder) + else: + return None + + @static_folder.setter + def static_folder(self, value: str | os.PathLike[str] | None) -> None: + if value is not None: + value = os.fspath(value).rstrip(r"\/") + + self._static_folder = value + + @property + def has_static_folder(self) -> bool: + """``True`` if :attr:`static_folder` is set. + + .. versionadded:: 0.5 + """ + return self.static_folder is not None + + @property + def static_url_path(self) -> str | None: + """The URL prefix that the static route will be accessible from. + + If it was not configured during init, it is derived from + :attr:`static_folder`. + """ + if self._static_url_path is not None: + return self._static_url_path + + if self.static_folder is not None: + basename = os.path.basename(self.static_folder) + return f"/{basename}".rstrip("/") + + return None + + @static_url_path.setter + def static_url_path(self, value: str | None) -> None: + if value is not None: + value = value.rstrip("/") + + self._static_url_path = value + + @cached_property + def jinja_loader(self) -> BaseLoader | None: + """The Jinja loader for this object's templates. By default this + is a class :class:`jinja2.loaders.FileSystemLoader` to + :attr:`template_folder` if it is set. + + .. 
versionadded:: 0.5 + """ + if self.template_folder is not None: + return FileSystemLoader(os.path.join(self.root_path, self.template_folder)) + else: + return None + + def _method_route( + self, + method: str, + rule: str, + options: dict[str, t.Any], + ) -> t.Callable[[T_route], T_route]: + if "methods" in options: + raise TypeError("Use the 'route' decorator to use the 'methods' argument.") + + return self.route(rule, methods=[method], **options) + + @setupmethod + def get(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]: + """Shortcut for :meth:`route` with ``methods=["GET"]``. + + .. versionadded:: 2.0 + """ + return self._method_route("GET", rule, options) + + @setupmethod + def post(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]: + """Shortcut for :meth:`route` with ``methods=["POST"]``. + + .. versionadded:: 2.0 + """ + return self._method_route("POST", rule, options) + + @setupmethod + def put(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]: + """Shortcut for :meth:`route` with ``methods=["PUT"]``. + + .. versionadded:: 2.0 + """ + return self._method_route("PUT", rule, options) + + @setupmethod + def delete(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]: + """Shortcut for :meth:`route` with ``methods=["DELETE"]``. + + .. versionadded:: 2.0 + """ + return self._method_route("DELETE", rule, options) + + @setupmethod + def patch(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]: + """Shortcut for :meth:`route` with ``methods=["PATCH"]``. + + .. versionadded:: 2.0 + """ + return self._method_route("PATCH", rule, options) + + @setupmethod + def route(self, rule: str, **options: t.Any) -> t.Callable[[T_route], T_route]: + """Decorate a view function to register it with the given URL + rule and options. Calls :meth:`add_url_rule`, which has more + details about the implementation. + + .. code-block:: python + + @app.route("/") + def index(): + return "Hello, World!" + + See :ref:`url-route-registrations`. + + The endpoint name for the route defaults to the name of the view + function if the ``endpoint`` parameter isn't passed. + + The ``methods`` parameter defaults to ``["GET"]``. ``HEAD`` and + ``OPTIONS`` are added automatically. + + :param rule: The URL rule string. + :param options: Extra options passed to the + :class:`~werkzeug.routing.Rule` object. + """ + + def decorator(f: T_route) -> T_route: + endpoint = options.pop("endpoint", None) + self.add_url_rule(rule, endpoint, f, **options) + return f + + return decorator + + @setupmethod + def add_url_rule( + self, + rule: str, + endpoint: str | None = None, + view_func: ft.RouteCallable | None = None, + provide_automatic_options: bool | None = None, + **options: t.Any, + ) -> None: + """Register a rule for routing incoming requests and building + URLs. The :meth:`route` decorator is a shortcut to call this + with the ``view_func`` argument. These are equivalent: + + .. code-block:: python + + @app.route("/") + def index(): + ... + + .. code-block:: python + + def index(): + ... + + app.add_url_rule("/", view_func=index) + + See :ref:`url-route-registrations`. + + The endpoint name for the route defaults to the name of the view + function if the ``endpoint`` parameter isn't passed. An error + will be raised if a function has already been registered for the + endpoint. + + The ``methods`` parameter defaults to ``["GET"]``. ``HEAD`` is + always added automatically, and ``OPTIONS`` is added + automatically by default. 
+ + ``view_func`` does not necessarily need to be passed, but if the + rule should participate in routing an endpoint name must be + associated with a view function at some point with the + :meth:`endpoint` decorator. + + .. code-block:: python + + app.add_url_rule("/", endpoint="index") + + @app.endpoint("index") + def index(): + ... + + If ``view_func`` has a ``required_methods`` attribute, those + methods are added to the passed and automatic methods. If it + has a ``provide_automatic_methods`` attribute, it is used as the + default if the parameter is not passed. + + :param rule: The URL rule string. + :param endpoint: The endpoint name to associate with the rule + and view function. Used when routing and building URLs. + Defaults to ``view_func.__name__``. + :param view_func: The view function to associate with the + endpoint name. + :param provide_automatic_options: Add the ``OPTIONS`` method and + respond to ``OPTIONS`` requests automatically. + :param options: Extra options passed to the + :class:`~werkzeug.routing.Rule` object. + """ + raise NotImplementedError + + @setupmethod + def endpoint(self, endpoint: str) -> t.Callable[[F], F]: + """Decorate a view function to register it for the given + endpoint. Used if a rule is added without a ``view_func`` with + :meth:`add_url_rule`. + + .. code-block:: python + + app.add_url_rule("/ex", endpoint="example") + + @app.endpoint("example") + def example(): + ... + + :param endpoint: The endpoint name to associate with the view + function. + """ + + def decorator(f: F) -> F: + self.view_functions[endpoint] = f + return f + + return decorator + + @setupmethod + def before_request(self, f: T_before_request) -> T_before_request: + """Register a function to run before each request. + + For example, this can be used to open a database connection, or + to load the logged in user from the session. + + .. code-block:: python + + @app.before_request + def load_user(): + if "user_id" in session: + g.user = db.session.get(session["user_id"]) + + The function will be called without any arguments. If it returns + a non-``None`` value, the value is handled as if it was the + return value from the view, and further request handling is + stopped. + + This is available on both app and blueprint objects. When used on an app, this + executes before every request. When used on a blueprint, this executes before + every request that the blueprint handles. To register with a blueprint and + execute before every request, use :meth:`.Blueprint.before_app_request`. + """ + self.before_request_funcs.setdefault(None, []).append(f) + return f + + @setupmethod + def after_request(self, f: T_after_request) -> T_after_request: + """Register a function to run after each request to this object. + + The function is called with the response object, and must return + a response object. This allows the functions to modify or + replace the response before it is sent. + + If a function raises an exception, any remaining + ``after_request`` functions will not be called. Therefore, this + should not be used for actions that must execute, such as to + close resources. Use :meth:`teardown_request` for that. + + This is available on both app and blueprint objects. When used on an app, this + executes after every request. When used on a blueprint, this executes after + every request that the blueprint handles. To register with a blueprint and + execute after every request, use :meth:`.Blueprint.after_app_request`. 
+ """ + self.after_request_funcs.setdefault(None, []).append(f) + return f + + @setupmethod + def teardown_request(self, f: T_teardown) -> T_teardown: + """Register a function to be called when the request context is + popped. Typically this happens at the end of each request, but + contexts may be pushed manually as well during testing. + + .. code-block:: python + + with app.test_request_context(): + ... + + When the ``with`` block exits (or ``ctx.pop()`` is called), the + teardown functions are called just before the request context is + made inactive. + + When a teardown function was called because of an unhandled + exception it will be passed an error object. If an + :meth:`errorhandler` is registered, it will handle the exception + and the teardown will not receive it. + + Teardown functions must avoid raising exceptions. If they + execute code that might fail they must surround that code with a + ``try``/``except`` block and log any errors. + + The return values of teardown functions are ignored. + + This is available on both app and blueprint objects. When used on an app, this + executes after every request. When used on a blueprint, this executes after + every request that the blueprint handles. To register with a blueprint and + execute after every request, use :meth:`.Blueprint.teardown_app_request`. + """ + self.teardown_request_funcs.setdefault(None, []).append(f) + return f + + @setupmethod + def context_processor( + self, + f: T_template_context_processor, + ) -> T_template_context_processor: + """Registers a template context processor function. These functions run before + rendering a template. The keys of the returned dict are added as variables + available in the template. + + This is available on both app and blueprint objects. When used on an app, this + is called for every rendered template. When used on a blueprint, this is called + for templates rendered from the blueprint's views. To register with a blueprint + and affect every template, use :meth:`.Blueprint.app_context_processor`. + """ + self.template_context_processors[None].append(f) + return f + + @setupmethod + def url_value_preprocessor( + self, + f: T_url_value_preprocessor, + ) -> T_url_value_preprocessor: + """Register a URL value preprocessor function for all view + functions in the application. These functions will be called before the + :meth:`before_request` functions. + + The function can modify the values captured from the matched url before + they are passed to the view. For example, this can be used to pop a + common language code value and place it in ``g`` rather than pass it to + every view. + + The function is passed the endpoint name and values dict. The return + value is ignored. + + This is available on both app and blueprint objects. When used on an app, this + is called for every request. When used on a blueprint, this is called for + requests that the blueprint handles. To register with a blueprint and affect + every request, use :meth:`.Blueprint.app_url_value_preprocessor`. + """ + self.url_value_preprocessors[None].append(f) + return f + + @setupmethod + def url_defaults(self, f: T_url_defaults) -> T_url_defaults: + """Callback function for URL defaults for all view functions of the + application. It's called with the endpoint and values and should + update the values passed in place. + + This is available on both app and blueprint objects. When used on an app, this + is called for every request. When used on a blueprint, this is called for + requests that the blueprint handles. 
To register with a blueprint and affect + every request, use :meth:`.Blueprint.app_url_defaults`. + """ + self.url_default_functions[None].append(f) + return f + + @setupmethod + def errorhandler( + self, code_or_exception: type[Exception] | int + ) -> t.Callable[[T_error_handler], T_error_handler]: + """Register a function to handle errors by code or exception class. + + A decorator that is used to register a function given an + error code. Example:: + + @app.errorhandler(404) + def page_not_found(error): + return 'This page does not exist', 404 + + You can also register handlers for arbitrary exceptions:: + + @app.errorhandler(DatabaseError) + def special_exception_handler(error): + return 'Database connection failed', 500 + + This is available on both app and blueprint objects. When used on an app, this + can handle errors from every request. When used on a blueprint, this can handle + errors from requests that the blueprint handles. To register with a blueprint + and affect every request, use :meth:`.Blueprint.app_errorhandler`. + + .. versionadded:: 0.7 + Use :meth:`register_error_handler` instead of modifying + :attr:`error_handler_spec` directly, for application wide error + handlers. + + .. versionadded:: 0.7 + One can now additionally also register custom exception types + that do not necessarily have to be a subclass of the + :class:`~werkzeug.exceptions.HTTPException` class. + + :param code_or_exception: the code as integer for the handler, or + an arbitrary exception + """ + + def decorator(f: T_error_handler) -> T_error_handler: + self.register_error_handler(code_or_exception, f) + return f + + return decorator + + @setupmethod + def register_error_handler( + self, + code_or_exception: type[Exception] | int, + f: ft.ErrorHandlerCallable, + ) -> None: + """Alternative error attach function to the :meth:`errorhandler` + decorator that is more straightforward to use for non decorator + usage. + + .. versionadded:: 0.7 + """ + exc_class, code = self._get_exc_class_and_code(code_or_exception) + self.error_handler_spec[None][code][exc_class] = f + + @staticmethod + def _get_exc_class_and_code( + exc_class_or_code: type[Exception] | int, + ) -> tuple[type[Exception], int | None]: + """Get the exception class being handled. For HTTP status codes + or ``HTTPException`` subclasses, return both the exception and + status code. + + :param exc_class_or_code: Any exception class, or an HTTP status + code as an integer. + """ + exc_class: type[Exception] + + if isinstance(exc_class_or_code, int): + try: + exc_class = default_exceptions[exc_class_or_code] + except KeyError: + raise ValueError( + f"'{exc_class_or_code}' is not a recognized HTTP" + " error code. Use a subclass of HTTPException with" + " that code instead." + ) from None + else: + exc_class = exc_class_or_code + + if isinstance(exc_class, Exception): + raise TypeError( + f"{exc_class!r} is an instance, not a class. Handlers" + " can only be registered for Exception classes or HTTP" + " error codes." + ) + + if not issubclass(exc_class, Exception): + raise ValueError( + f"'{exc_class.__name__}' is not a subclass of Exception." + " Handlers can only be registered for Exception classes" + " or HTTP error codes." + ) + + if issubclass(exc_class, HTTPException): + return exc_class, exc_class.code + else: + return exc_class, None + + +def _endpoint_from_view_func(view_func: ft.RouteCallable) -> str: + """Internal helper that returns the default endpoint for a given + function. This always is the function name. 
+ """ + assert view_func is not None, "expected view func if endpoint is not provided." + return view_func.__name__ + + +def _path_is_relative_to(path: pathlib.PurePath, base: str) -> bool: + # Path.is_relative_to doesn't exist until Python 3.9 + try: + path.relative_to(base) + return True + except ValueError: + return False + + +def _find_package_path(import_name: str) -> str: + """Find the path that contains the package or module.""" + root_mod_name, _, _ = import_name.partition(".") + + try: + root_spec = importlib.util.find_spec(root_mod_name) + + if root_spec is None: + raise ValueError("not found") + except (ImportError, ValueError): + # ImportError: the machinery told us it does not exist + # ValueError: + # - the module name was invalid + # - the module name is __main__ + # - we raised `ValueError` due to `root_spec` being `None` + return os.getcwd() + + if root_spec.submodule_search_locations: + if root_spec.origin is None or root_spec.origin == "namespace": + # namespace package + package_spec = importlib.util.find_spec(import_name) + + if package_spec is not None and package_spec.submodule_search_locations: + # Pick the path in the namespace that contains the submodule. + package_path = pathlib.Path( + os.path.commonpath(package_spec.submodule_search_locations) + ) + search_location = next( + location + for location in root_spec.submodule_search_locations + if _path_is_relative_to(package_path, location) + ) + else: + # Pick the first path. + search_location = root_spec.submodule_search_locations[0] + + return os.path.dirname(search_location) + else: + # package with __init__.py + return os.path.dirname(os.path.dirname(root_spec.origin)) + else: + # module + return os.path.dirname(root_spec.origin) # type: ignore[type-var, return-value] + + +def find_package(import_name: str) -> tuple[str | None, str]: + """Find the prefix that a package is installed under, and the path + that it would be imported from. + + The prefix is the directory containing the standard directory + hierarchy (lib, bin, etc.). If the package is not installed to the + system (:attr:`sys.prefix`) or a virtualenv (``site-packages``), + ``None`` is returned. + + The path is the entry in :attr:`sys.path` that contains the package + for import. If the package is not installed, it's assumed that the + package was imported from the current working directory. 
+ """ + package_path = _find_package_path(import_name) + py_prefix = os.path.abspath(sys.prefix) + + # installed to the system + if _path_is_relative_to(pathlib.PurePath(package_path), py_prefix): + return py_prefix, package_path + + site_parent, site_folder = os.path.split(package_path) + + # installed to a virtualenv + if site_folder.lower() == "site-packages": + parent, folder = os.path.split(site_parent) + + # Windows (prefix/lib/site-packages) + if folder.lower() == "lib": + return parent, package_path + + # Unix (prefix/lib/pythonX.Y/site-packages) + if os.path.basename(parent).lower() == "lib": + return os.path.dirname(parent), package_path + + # something else (prefix/site-packages) + return site_parent, package_path + + # not installed + return None, package_path diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/sessions.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/sessions.py new file mode 100644 index 000000000..ee19ad638 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/sessions.py @@ -0,0 +1,379 @@ +from __future__ import annotations + +import hashlib +import typing as t +from collections.abc import MutableMapping +from datetime import datetime +from datetime import timezone + +from itsdangerous import BadSignature +from itsdangerous import URLSafeTimedSerializer +from werkzeug.datastructures import CallbackDict + +from .json.tag import TaggedJSONSerializer + +if t.TYPE_CHECKING: # pragma: no cover + import typing_extensions as te + + from .app import Flask + from .wrappers import Request + from .wrappers import Response + + +# TODO generic when Python > 3.8 +class SessionMixin(MutableMapping): # type: ignore[type-arg] + """Expands a basic dictionary with session attributes.""" + + @property + def permanent(self) -> bool: + """This reflects the ``'_permanent'`` key in the dict.""" + return self.get("_permanent", False) + + @permanent.setter + def permanent(self, value: bool) -> None: + self["_permanent"] = bool(value) + + #: Some implementations can detect whether a session is newly + #: created, but that is not guaranteed. Use with caution. The mixin + # default is hard-coded ``False``. + new = False + + #: Some implementations can detect changes to the session and set + #: this when that happens. The mixin default is hard coded to + #: ``True``. + modified = True + + #: Some implementations can detect when session data is read or + #: written and set this when that happens. The mixin default is hard + #: coded to ``True``. + accessed = True + + +# TODO generic when Python > 3.8 +class SecureCookieSession(CallbackDict, SessionMixin): # type: ignore[type-arg] + """Base class for sessions based on signed cookies. + + This session backend will set the :attr:`modified` and + :attr:`accessed` attributes. It cannot reliably track whether a + session is new (vs. empty), so :attr:`new` remains hard coded to + ``False``. + """ + + #: When data is changed, this is set to ``True``. Only the session + #: dictionary itself is tracked; if the session contains mutable + #: data (for example a nested dict) then this must be set to + #: ``True`` manually when modifying that data. The session cookie + #: will only be written to the response if this is ``True``. + modified = False + + #: When data is read or written, this is set to ``True``. Used by + # :class:`.SecureCookieSessionInterface` to add a ``Vary: Cookie`` + #: header, which allows caching proxies to cache different pages for + #: different users. 
+ accessed = False + + def __init__(self, initial: t.Any = None) -> None: + def on_update(self: te.Self) -> None: + self.modified = True + self.accessed = True + + super().__init__(initial, on_update) + + def __getitem__(self, key: str) -> t.Any: + self.accessed = True + return super().__getitem__(key) + + def get(self, key: str, default: t.Any = None) -> t.Any: + self.accessed = True + return super().get(key, default) + + def setdefault(self, key: str, default: t.Any = None) -> t.Any: + self.accessed = True + return super().setdefault(key, default) + + +class NullSession(SecureCookieSession): + """Class used to generate nicer error messages if sessions are not + available. Will still allow read-only access to the empty session + but fail on setting. + """ + + def _fail(self, *args: t.Any, **kwargs: t.Any) -> t.NoReturn: + raise RuntimeError( + "The session is unavailable because no secret " + "key was set. Set the secret_key on the " + "application to something unique and secret." + ) + + __setitem__ = __delitem__ = clear = pop = popitem = update = setdefault = _fail # type: ignore # noqa: B950 + del _fail + + +class SessionInterface: + """The basic interface you have to implement in order to replace the + default session interface which uses werkzeug's securecookie + implementation. The only methods you have to implement are + :meth:`open_session` and :meth:`save_session`, the others have + useful defaults which you don't need to change. + + The session object returned by the :meth:`open_session` method has to + provide a dictionary like interface plus the properties and methods + from the :class:`SessionMixin`. We recommend just subclassing a dict + and adding that mixin:: + + class Session(dict, SessionMixin): + pass + + If :meth:`open_session` returns ``None`` Flask will call into + :meth:`make_null_session` to create a session that acts as replacement + if the session support cannot work because some requirement is not + fulfilled. The default :class:`NullSession` class that is created + will complain that the secret key was not set. + + To replace the session interface on an application all you have to do + is to assign :attr:`flask.Flask.session_interface`:: + + app = Flask(__name__) + app.session_interface = MySessionInterface() + + Multiple requests with the same session may be sent and handled + concurrently. When implementing a new session interface, consider + whether reads or writes to the backing store must be synchronized. + There is no guarantee on the order in which the session for each + request is opened or saved, it will occur in the order that requests + begin and end processing. + + .. versionadded:: 0.8 + """ + + #: :meth:`make_null_session` will look here for the class that should + #: be created when a null session is requested. Likewise the + #: :meth:`is_null_session` method will perform a typecheck against + #: this type. + null_session_class = NullSession + + #: A flag that indicates if the session interface is pickle based. + #: This can be used by Flask extensions to make a decision in regards + #: to how to deal with the session object. + #: + #: .. versionadded:: 0.10 + pickle_based = False + + def make_null_session(self, app: Flask) -> NullSession: + """Creates a null session which acts as a replacement object if the + real session support could not be loaded due to a configuration + error. 
This mainly aids the user experience because the job of the + null session is to still support lookup without complaining but + modifications are answered with a helpful error message of what + failed. + + This creates an instance of :attr:`null_session_class` by default. + """ + return self.null_session_class() + + def is_null_session(self, obj: object) -> bool: + """Checks if a given object is a null session. Null sessions are + not asked to be saved. + + This checks if the object is an instance of :attr:`null_session_class` + by default. + """ + return isinstance(obj, self.null_session_class) + + def get_cookie_name(self, app: Flask) -> str: + """The name of the session cookie. Uses``app.config["SESSION_COOKIE_NAME"]``.""" + return app.config["SESSION_COOKIE_NAME"] # type: ignore[no-any-return] + + def get_cookie_domain(self, app: Flask) -> str | None: + """The value of the ``Domain`` parameter on the session cookie. If not set, + browsers will only send the cookie to the exact domain it was set from. + Otherwise, they will send it to any subdomain of the given value as well. + + Uses the :data:`SESSION_COOKIE_DOMAIN` config. + + .. versionchanged:: 2.3 + Not set by default, does not fall back to ``SERVER_NAME``. + """ + return app.config["SESSION_COOKIE_DOMAIN"] # type: ignore[no-any-return] + + def get_cookie_path(self, app: Flask) -> str: + """Returns the path for which the cookie should be valid. The + default implementation uses the value from the ``SESSION_COOKIE_PATH`` + config var if it's set, and falls back to ``APPLICATION_ROOT`` or + uses ``/`` if it's ``None``. + """ + return app.config["SESSION_COOKIE_PATH"] or app.config["APPLICATION_ROOT"] # type: ignore[no-any-return] + + def get_cookie_httponly(self, app: Flask) -> bool: + """Returns True if the session cookie should be httponly. This + currently just returns the value of the ``SESSION_COOKIE_HTTPONLY`` + config var. + """ + return app.config["SESSION_COOKIE_HTTPONLY"] # type: ignore[no-any-return] + + def get_cookie_secure(self, app: Flask) -> bool: + """Returns True if the cookie should be secure. This currently + just returns the value of the ``SESSION_COOKIE_SECURE`` setting. + """ + return app.config["SESSION_COOKIE_SECURE"] # type: ignore[no-any-return] + + def get_cookie_samesite(self, app: Flask) -> str | None: + """Return ``'Strict'`` or ``'Lax'`` if the cookie should use the + ``SameSite`` attribute. This currently just returns the value of + the :data:`SESSION_COOKIE_SAMESITE` setting. + """ + return app.config["SESSION_COOKIE_SAMESITE"] # type: ignore[no-any-return] + + def get_expiration_time(self, app: Flask, session: SessionMixin) -> datetime | None: + """A helper method that returns an expiration date for the session + or ``None`` if the session is linked to the browser session. The + default implementation returns now + the permanent session + lifetime configured on the application. + """ + if session.permanent: + return datetime.now(timezone.utc) + app.permanent_session_lifetime + return None + + def should_set_cookie(self, app: Flask, session: SessionMixin) -> bool: + """Used by session backends to determine if a ``Set-Cookie`` header + should be set for this session cookie for this response. If the session + has been modified, the cookie is set. If the session is permanent and + the ``SESSION_REFRESH_EACH_REQUEST`` config is true, the cookie is + always set. + + This check is usually skipped if the session was deleted. + + .. 
versionadded:: 0.11 + """ + + return session.modified or ( + session.permanent and app.config["SESSION_REFRESH_EACH_REQUEST"] + ) + + def open_session(self, app: Flask, request: Request) -> SessionMixin | None: + """This is called at the beginning of each request, after + pushing the request context, before matching the URL. + + This must return an object which implements a dictionary-like + interface as well as the :class:`SessionMixin` interface. + + This will return ``None`` to indicate that loading failed in + some way that is not immediately an error. The request + context will fall back to using :meth:`make_null_session` + in this case. + """ + raise NotImplementedError() + + def save_session( + self, app: Flask, session: SessionMixin, response: Response + ) -> None: + """This is called at the end of each request, after generating + a response, before removing the request context. It is skipped + if :meth:`is_null_session` returns ``True``. + """ + raise NotImplementedError() + + +session_json_serializer = TaggedJSONSerializer() + + +def _lazy_sha1(string: bytes = b"") -> t.Any: + """Don't access ``hashlib.sha1`` until runtime. FIPS builds may not include + SHA-1, in which case the import and use as a default would fail before the + developer can configure something else. + """ + return hashlib.sha1(string) + + +class SecureCookieSessionInterface(SessionInterface): + """The default session interface that stores sessions in signed cookies + through the :mod:`itsdangerous` module. + """ + + #: the salt that should be applied on top of the secret key for the + #: signing of cookie based sessions. + salt = "cookie-session" + #: the hash function to use for the signature. The default is sha1 + digest_method = staticmethod(_lazy_sha1) + #: the name of the itsdangerous supported key derivation. The default + #: is hmac. + key_derivation = "hmac" + #: A python serializer for the payload. The default is a compact + #: JSON derived serializer with support for some extra Python types + #: such as datetime objects or tuples. + serializer = session_json_serializer + session_class = SecureCookieSession + + def get_signing_serializer(self, app: Flask) -> URLSafeTimedSerializer | None: + if not app.secret_key: + return None + signer_kwargs = dict( + key_derivation=self.key_derivation, digest_method=self.digest_method + ) + return URLSafeTimedSerializer( + app.secret_key, + salt=self.salt, + serializer=self.serializer, + signer_kwargs=signer_kwargs, + ) + + def open_session(self, app: Flask, request: Request) -> SecureCookieSession | None: + s = self.get_signing_serializer(app) + if s is None: + return None + val = request.cookies.get(self.get_cookie_name(app)) + if not val: + return self.session_class() + max_age = int(app.permanent_session_lifetime.total_seconds()) + try: + data = s.loads(val, max_age=max_age) + return self.session_class(data) + except BadSignature: + return self.session_class() + + def save_session( + self, app: Flask, session: SessionMixin, response: Response + ) -> None: + name = self.get_cookie_name(app) + domain = self.get_cookie_domain(app) + path = self.get_cookie_path(app) + secure = self.get_cookie_secure(app) + samesite = self.get_cookie_samesite(app) + httponly = self.get_cookie_httponly(app) + + # Add a "Vary: Cookie" header if the session was accessed at all. + if session.accessed: + response.vary.add("Cookie") + + # If the session is modified to be empty, remove the cookie. + # If the session is empty, return without setting the cookie. 
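+        # Editor's note: descriptive comment, not upstream Flask code. An
+        # empty-but-modified session (e.g. after session.clear()) must
+        # actively delete the cookie, while an empty, never-written session
+        # sends no Set-Cookie header at all. A hedged usage sketch (names
+        # are hypothetical):
+        #
+        #     @app.post("/logout")
+        #     def logout():
+        #         session.clear()   # modified + empty -> cookie deleted
+        #         return "", 204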
+ if not session: + if session.modified: + response.delete_cookie( + name, + domain=domain, + path=path, + secure=secure, + samesite=samesite, + httponly=httponly, + ) + response.vary.add("Cookie") + + return + + if not self.should_set_cookie(app, session): + return + + expires = self.get_expiration_time(app, session) + val = self.get_signing_serializer(app).dumps(dict(session)) # type: ignore + response.set_cookie( + name, + val, # type: ignore + expires=expires, + httponly=httponly, + domain=domain, + path=path, + secure=secure, + samesite=samesite, + ) + response.vary.add("Cookie") diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/signals.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/signals.py new file mode 100644 index 000000000..444fda998 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/signals.py @@ -0,0 +1,17 @@ +from __future__ import annotations + +from blinker import Namespace + +# This namespace is only for signals provided by Flask itself. +_signals = Namespace() + +template_rendered = _signals.signal("template-rendered") +before_render_template = _signals.signal("before-render-template") +request_started = _signals.signal("request-started") +request_finished = _signals.signal("request-finished") +request_tearing_down = _signals.signal("request-tearing-down") +got_request_exception = _signals.signal("got-request-exception") +appcontext_tearing_down = _signals.signal("appcontext-tearing-down") +appcontext_pushed = _signals.signal("appcontext-pushed") +appcontext_popped = _signals.signal("appcontext-popped") +message_flashed = _signals.signal("message-flashed") diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/templating.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/templating.py new file mode 100644 index 000000000..618a3b35d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/templating.py @@ -0,0 +1,219 @@ +from __future__ import annotations + +import typing as t + +from jinja2 import BaseLoader +from jinja2 import Environment as BaseEnvironment +from jinja2 import Template +from jinja2 import TemplateNotFound + +from .globals import _cv_app +from .globals import _cv_request +from .globals import current_app +from .globals import request +from .helpers import stream_with_context +from .signals import before_render_template +from .signals import template_rendered + +if t.TYPE_CHECKING: # pragma: no cover + from .app import Flask + from .sansio.app import App + from .sansio.scaffold import Scaffold + + +def _default_template_ctx_processor() -> dict[str, t.Any]: + """Default template context processor. Injects `request`, + `session` and `g`. + """ + appctx = _cv_app.get(None) + reqctx = _cv_request.get(None) + rv: dict[str, t.Any] = {} + if appctx is not None: + rv["g"] = appctx.g + if reqctx is not None: + rv["request"] = reqctx.request + rv["session"] = reqctx.session + return rv + + +class Environment(BaseEnvironment): + """Works like a regular Jinja2 environment but has some additional + knowledge of how Flask's blueprint works so that it can prepend the + name of the blueprint to referenced templates if necessary. 
+ """ + + def __init__(self, app: App, **options: t.Any) -> None: + if "loader" not in options: + options["loader"] = app.create_global_jinja_loader() + BaseEnvironment.__init__(self, **options) + self.app = app + + +class DispatchingJinjaLoader(BaseLoader): + """A loader that looks for templates in the application and all + the blueprint folders. + """ + + def __init__(self, app: App) -> None: + self.app = app + + def get_source( + self, environment: BaseEnvironment, template: str + ) -> tuple[str, str | None, t.Callable[[], bool] | None]: + if self.app.config["EXPLAIN_TEMPLATE_LOADING"]: + return self._get_source_explained(environment, template) + return self._get_source_fast(environment, template) + + def _get_source_explained( + self, environment: BaseEnvironment, template: str + ) -> tuple[str, str | None, t.Callable[[], bool] | None]: + attempts = [] + rv: tuple[str, str | None, t.Callable[[], bool] | None] | None + trv: None | (tuple[str, str | None, t.Callable[[], bool] | None]) = None + + for srcobj, loader in self._iter_loaders(template): + try: + rv = loader.get_source(environment, template) + if trv is None: + trv = rv + except TemplateNotFound: + rv = None + attempts.append((loader, srcobj, rv)) + + from .debughelpers import explain_template_loading_attempts + + explain_template_loading_attempts(self.app, template, attempts) + + if trv is not None: + return trv + raise TemplateNotFound(template) + + def _get_source_fast( + self, environment: BaseEnvironment, template: str + ) -> tuple[str, str | None, t.Callable[[], bool] | None]: + for _srcobj, loader in self._iter_loaders(template): + try: + return loader.get_source(environment, template) + except TemplateNotFound: + continue + raise TemplateNotFound(template) + + def _iter_loaders(self, template: str) -> t.Iterator[tuple[Scaffold, BaseLoader]]: + loader = self.app.jinja_loader + if loader is not None: + yield self.app, loader + + for blueprint in self.app.iter_blueprints(): + loader = blueprint.jinja_loader + if loader is not None: + yield blueprint, loader + + def list_templates(self) -> list[str]: + result = set() + loader = self.app.jinja_loader + if loader is not None: + result.update(loader.list_templates()) + + for blueprint in self.app.iter_blueprints(): + loader = blueprint.jinja_loader + if loader is not None: + for template in loader.list_templates(): + result.add(template) + + return list(result) + + +def _render(app: Flask, template: Template, context: dict[str, t.Any]) -> str: + app.update_template_context(context) + before_render_template.send( + app, _async_wrapper=app.ensure_sync, template=template, context=context + ) + rv = template.render(context) + template_rendered.send( + app, _async_wrapper=app.ensure_sync, template=template, context=context + ) + return rv + + +def render_template( + template_name_or_list: str | Template | list[str | Template], + **context: t.Any, +) -> str: + """Render a template by name with the given context. + + :param template_name_or_list: The name of the template to render. If + a list is given, the first name to exist will be rendered. + :param context: The variables to make available in the template. + """ + app = current_app._get_current_object() # type: ignore[attr-defined] + template = app.jinja_env.get_or_select_template(template_name_or_list) + return _render(app, template, context) + + +def render_template_string(source: str, **context: t.Any) -> str: + """Render a template from the given source string with the given + context. 
+ + :param source: The source code of the template to render. + :param context: The variables to make available in the template. + """ + app = current_app._get_current_object() # type: ignore[attr-defined] + template = app.jinja_env.from_string(source) + return _render(app, template, context) + + +def _stream( + app: Flask, template: Template, context: dict[str, t.Any] +) -> t.Iterator[str]: + app.update_template_context(context) + before_render_template.send( + app, _async_wrapper=app.ensure_sync, template=template, context=context + ) + + def generate() -> t.Iterator[str]: + yield from template.generate(context) + template_rendered.send( + app, _async_wrapper=app.ensure_sync, template=template, context=context + ) + + rv = generate() + + # If a request context is active, keep it while generating. + if request: + rv = stream_with_context(rv) + + return rv + + +def stream_template( + template_name_or_list: str | Template | list[str | Template], + **context: t.Any, +) -> t.Iterator[str]: + """Render a template by name with the given context as a stream. + This returns an iterator of strings, which can be used as a + streaming response from a view. + + :param template_name_or_list: The name of the template to render. If + a list is given, the first name to exist will be rendered. + :param context: The variables to make available in the template. + + .. versionadded:: 2.2 + """ + app = current_app._get_current_object() # type: ignore[attr-defined] + template = app.jinja_env.get_or_select_template(template_name_or_list) + return _stream(app, template, context) + + +def stream_template_string(source: str, **context: t.Any) -> t.Iterator[str]: + """Render a template from the given source string with the given + context as a stream. This returns an iterator of strings, which can + be used as a streaming response from a view. + + :param source: The source code of the template to render. + :param context: The variables to make available in the template. + + .. versionadded:: 2.2 + """ + app = current_app._get_current_object() # type: ignore[attr-defined] + template = app.jinja_env.from_string(source) + return _stream(app, template, context) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/testing.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/testing.py new file mode 100644 index 000000000..a27b7c8fe --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/testing.py @@ -0,0 +1,298 @@ +from __future__ import annotations + +import importlib.metadata +import typing as t +from contextlib import contextmanager +from contextlib import ExitStack +from copy import copy +from types import TracebackType +from urllib.parse import urlsplit + +import werkzeug.test +from click.testing import CliRunner +from werkzeug.test import Client +from werkzeug.wrappers import Request as BaseRequest + +from .cli import ScriptInfo +from .sessions import SessionMixin + +if t.TYPE_CHECKING: # pragma: no cover + from _typeshed.wsgi import WSGIEnvironment + from werkzeug.test import TestResponse + + from .app import Flask + + +class EnvironBuilder(werkzeug.test.EnvironBuilder): + """An :class:`~werkzeug.test.EnvironBuilder`, that takes defaults from the + application. + + :param app: The Flask application to configure the environment from. + :param path: URL path being requested. + :param base_url: Base URL where the app is being served, which + ``path`` is relative to. 
If not given, built from + :data:`PREFERRED_URL_SCHEME`, ``subdomain``, + :data:`SERVER_NAME`, and :data:`APPLICATION_ROOT`. + :param subdomain: Subdomain name to append to :data:`SERVER_NAME`. + :param url_scheme: Scheme to use instead of + :data:`PREFERRED_URL_SCHEME`. + :param json: If given, this is serialized as JSON and passed as + ``data``. Also defaults ``content_type`` to + ``application/json``. + :param args: other positional arguments passed to + :class:`~werkzeug.test.EnvironBuilder`. + :param kwargs: other keyword arguments passed to + :class:`~werkzeug.test.EnvironBuilder`. + """ + + def __init__( + self, + app: Flask, + path: str = "/", + base_url: str | None = None, + subdomain: str | None = None, + url_scheme: str | None = None, + *args: t.Any, + **kwargs: t.Any, + ) -> None: + assert not (base_url or subdomain or url_scheme) or ( + base_url is not None + ) != bool( + subdomain or url_scheme + ), 'Cannot pass "subdomain" or "url_scheme" with "base_url".' + + if base_url is None: + http_host = app.config.get("SERVER_NAME") or "localhost" + app_root = app.config["APPLICATION_ROOT"] + + if subdomain: + http_host = f"{subdomain}.{http_host}" + + if url_scheme is None: + url_scheme = app.config["PREFERRED_URL_SCHEME"] + + url = urlsplit(path) + base_url = ( + f"{url.scheme or url_scheme}://{url.netloc or http_host}" + f"/{app_root.lstrip('/')}" + ) + path = url.path + + if url.query: + sep = b"?" if isinstance(url.query, bytes) else "?" + path += sep + url.query + + self.app = app + super().__init__(path, base_url, *args, **kwargs) + + def json_dumps(self, obj: t.Any, **kwargs: t.Any) -> str: # type: ignore + """Serialize ``obj`` to a JSON-formatted string. + + The serialization will be configured according to the config associated + with this EnvironBuilder's ``app``. + """ + return self.app.json.dumps(obj, **kwargs) + + +_werkzeug_version = "" + + +def _get_werkzeug_version() -> str: + global _werkzeug_version + + if not _werkzeug_version: + _werkzeug_version = importlib.metadata.version("werkzeug") + + return _werkzeug_version + + +class FlaskClient(Client): + """Works like a regular Werkzeug test client but has knowledge about + Flask's contexts to defer the cleanup of the request context until + the end of a ``with`` block. For general information about how to + use this class refer to :class:`werkzeug.test.Client`. + + .. versionchanged:: 0.12 + `app.test_client()` includes preset default environment, which can be + set after instantiation of the `app.test_client()` object in + `client.environ_base`. + + Basic usage is outlined in the :doc:`/testing` chapter. + """ + + application: Flask + + def __init__(self, *args: t.Any, **kwargs: t.Any) -> None: + super().__init__(*args, **kwargs) + self.preserve_context = False + self._new_contexts: list[t.ContextManager[t.Any]] = [] + self._context_stack = ExitStack() + self.environ_base = { + "REMOTE_ADDR": "127.0.0.1", + "HTTP_USER_AGENT": f"Werkzeug/{_get_werkzeug_version()}", + } + + @contextmanager + def session_transaction( + self, *args: t.Any, **kwargs: t.Any + ) -> t.Iterator[SessionMixin]: + """When used in combination with a ``with`` statement this opens a + session transaction. This can be used to modify the session that + the test client uses. Once the ``with`` block is left the session is + stored back. 
+ + :: + + with client.session_transaction() as session: + session['value'] = 42 + + Internally this is implemented by going through a temporary test + request context and since session handling could depend on + request variables this function accepts the same arguments as + :meth:`~flask.Flask.test_request_context` which are directly + passed through. + """ + if self._cookies is None: + raise TypeError( + "Cookies are disabled. Create a client with 'use_cookies=True'." + ) + + app = self.application + ctx = app.test_request_context(*args, **kwargs) + self._add_cookies_to_wsgi(ctx.request.environ) + + with ctx: + sess = app.session_interface.open_session(app, ctx.request) + + if sess is None: + raise RuntimeError("Session backend did not open a session.") + + yield sess + resp = app.response_class() + + if app.session_interface.is_null_session(sess): + return + + with ctx: + app.session_interface.save_session(app, sess, resp) + + self._update_cookies_from_response( + ctx.request.host.partition(":")[0], + ctx.request.path, + resp.headers.getlist("Set-Cookie"), + ) + + def _copy_environ(self, other: WSGIEnvironment) -> WSGIEnvironment: + out = {**self.environ_base, **other} + + if self.preserve_context: + out["werkzeug.debug.preserve_context"] = self._new_contexts.append + + return out + + def _request_from_builder_args( + self, args: tuple[t.Any, ...], kwargs: dict[str, t.Any] + ) -> BaseRequest: + kwargs["environ_base"] = self._copy_environ(kwargs.get("environ_base", {})) + builder = EnvironBuilder(self.application, *args, **kwargs) + + try: + return builder.get_request() + finally: + builder.close() + + def open( + self, + *args: t.Any, + buffered: bool = False, + follow_redirects: bool = False, + **kwargs: t.Any, + ) -> TestResponse: + if args and isinstance( + args[0], (werkzeug.test.EnvironBuilder, dict, BaseRequest) + ): + if isinstance(args[0], werkzeug.test.EnvironBuilder): + builder = copy(args[0]) + builder.environ_base = self._copy_environ(builder.environ_base or {}) # type: ignore[arg-type] + request = builder.get_request() + elif isinstance(args[0], dict): + request = EnvironBuilder.from_environ( + args[0], app=self.application, environ_base=self._copy_environ({}) + ).get_request() + else: + # isinstance(args[0], BaseRequest) + request = copy(args[0]) + request.environ = self._copy_environ(request.environ) + else: + # request is None + request = self._request_from_builder_args(args, kwargs) + + # Pop any previously preserved contexts. This prevents contexts + # from being preserved across redirects or multiple requests + # within a single block. + self._context_stack.close() + + response = super().open( + request, + buffered=buffered, + follow_redirects=follow_redirects, + ) + response.json_module = self.application.json # type: ignore[assignment] + + # Re-push contexts that were preserved during the request. + while self._new_contexts: + cm = self._new_contexts.pop() + self._context_stack.enter_context(cm) + + return response + + def __enter__(self) -> FlaskClient: + if self.preserve_context: + raise RuntimeError("Cannot nest client invocations") + self.preserve_context = True + return self + + def __exit__( + self, + exc_type: type | None, + exc_value: BaseException | None, + tb: TracebackType | None, + ) -> None: + self.preserve_context = False + self._context_stack.close() + + +class FlaskCliRunner(CliRunner): + """A :class:`~click.testing.CliRunner` for testing a Flask app's + CLI commands. Typically created using + :meth:`~flask.Flask.test_cli_runner`. 
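+
+    A minimal sketch (the ``create-user`` command name is assumed, not
+    part of Flask)::
+
+        runner = app.test_cli_runner()
+        result = runner.invoke(args=["create-user", "admin"])
+        assert result.exit_code == 0
+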
See :ref:`testing-cli`. + """ + + def __init__(self, app: Flask, **kwargs: t.Any) -> None: + self.app = app + super().__init__(**kwargs) + + def invoke( # type: ignore + self, cli: t.Any = None, args: t.Any = None, **kwargs: t.Any + ) -> t.Any: + """Invokes a CLI command in an isolated environment. See + :meth:`CliRunner.invoke ` for + full method documentation. See :ref:`testing-cli` for examples. + + If the ``obj`` argument is not given, passes an instance of + :class:`~flask.cli.ScriptInfo` that knows how to load the Flask + app being tested. + + :param cli: Command object to invoke. Default is the app's + :attr:`~flask.app.Flask.cli` group. + :param args: List of strings to invoke the command with. + + :return: a :class:`~click.testing.Result` object. + """ + if cli is None: + cli = self.app.cli + + if "obj" not in kwargs: + kwargs["obj"] = ScriptInfo(create_app=lambda: self.app) + + return super().invoke(cli, args, **kwargs) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/typing.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/typing.py new file mode 100644 index 000000000..cf6d4ae6d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/typing.py @@ -0,0 +1,90 @@ +from __future__ import annotations + +import typing as t + +if t.TYPE_CHECKING: # pragma: no cover + from _typeshed.wsgi import WSGIApplication # noqa: F401 + from werkzeug.datastructures import Headers # noqa: F401 + from werkzeug.sansio.response import Response # noqa: F401 + +# The possible types that are directly convertible or are a Response object. +ResponseValue = t.Union[ + "Response", + str, + bytes, + t.List[t.Any], + # Only dict is actually accepted, but Mapping allows for TypedDict. + t.Mapping[str, t.Any], + t.Iterator[str], + t.Iterator[bytes], +] + +# the possible types for an individual HTTP header +# This should be a Union, but mypy doesn't pass unless it's a TypeVar. +HeaderValue = t.Union[str, t.List[str], t.Tuple[str, ...]] + +# the possible types for HTTP headers +HeadersValue = t.Union[ + "Headers", + t.Mapping[str, HeaderValue], + t.Sequence[t.Tuple[str, HeaderValue]], +] + +# The possible types returned by a route function. +ResponseReturnValue = t.Union[ + ResponseValue, + t.Tuple[ResponseValue, HeadersValue], + t.Tuple[ResponseValue, int], + t.Tuple[ResponseValue, int, HeadersValue], + "WSGIApplication", +] + +# Allow any subclass of werkzeug.Response, such as the one from Flask, +# as a callback argument. Using werkzeug.Response directly makes a +# callback annotated with flask.Response fail type checking. 
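+# For instance, an after-request hook annotated with the app's concrete
+# response class still type-checks against the callables below (a sketch;
+# the ``app`` object and the hook are assumed):
+#
+#     @app.after_request
+#     def add_header(resp: flask.Response) -> flask.Response:
+#         resp.headers["X-Example"] = "1"
+#         return resp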
+ResponseClass = t.TypeVar("ResponseClass", bound="Response")
+
+AppOrBlueprintKey = t.Optional[str]  # The App key is None, whereas blueprints are named
+AfterRequestCallable = t.Union[
+    t.Callable[[ResponseClass], ResponseClass],
+    t.Callable[[ResponseClass], t.Awaitable[ResponseClass]],
+]
+BeforeFirstRequestCallable = t.Union[
+    t.Callable[[], None], t.Callable[[], t.Awaitable[None]]
+]
+BeforeRequestCallable = t.Union[
+    t.Callable[[], t.Optional[ResponseReturnValue]],
+    t.Callable[[], t.Awaitable[t.Optional[ResponseReturnValue]]],
+]
+ShellContextProcessorCallable = t.Callable[[], t.Dict[str, t.Any]]
+TeardownCallable = t.Union[
+    t.Callable[[t.Optional[BaseException]], None],
+    t.Callable[[t.Optional[BaseException]], t.Awaitable[None]],
+]
+TemplateContextProcessorCallable = t.Union[
+    t.Callable[[], t.Dict[str, t.Any]],
+    t.Callable[[], t.Awaitable[t.Dict[str, t.Any]]],
+]
+TemplateFilterCallable = t.Callable[..., t.Any]
+TemplateGlobalCallable = t.Callable[..., t.Any]
+TemplateTestCallable = t.Callable[..., bool]
+URLDefaultCallable = t.Callable[[str, t.Dict[str, t.Any]], None]
+URLValuePreprocessorCallable = t.Callable[
+    [t.Optional[str], t.Optional[t.Dict[str, t.Any]]], None
+]
+
+# This should take Exception, but that either breaks typing the argument
+# with a specific exception, or decorating multiple times with different
+# exceptions (and using a union type on the argument).
+# https://github.com/pallets/flask/issues/4095
+# https://github.com/pallets/flask/issues/4295
+# https://github.com/pallets/flask/issues/4297
+ErrorHandlerCallable = t.Union[
+    t.Callable[[t.Any], ResponseReturnValue],
+    t.Callable[[t.Any], t.Awaitable[ResponseReturnValue]],
+]
+
+RouteCallable = t.Union[
+    t.Callable[..., ResponseReturnValue],
+    t.Callable[..., t.Awaitable[ResponseReturnValue]],
+]
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/views.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/views.py
new file mode 100644
index 000000000..794fdc06c
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/views.py
@@ -0,0 +1,191 @@
+from __future__ import annotations
+
+import typing as t
+
+from . import typing as ft
+from .globals import current_app
+from .globals import request
+
+F = t.TypeVar("F", bound=t.Callable[..., t.Any])
+
+http_method_funcs = frozenset(
+    ["get", "post", "head", "options", "delete", "put", "trace", "patch"]
+)
+
+
+class View:
+    """Subclass this class and override :meth:`dispatch_request` to
+    create a generic class-based view. Call :meth:`as_view` to create a
+    view function that creates an instance of the class with the given
+    arguments and calls its ``dispatch_request`` method with any URL
+    variables.
+
+    See :doc:`views` for a detailed guide.
+
+    .. code-block:: python
+
+        class Hello(View):
+            init_every_request = False
+
+            def dispatch_request(self, name):
+                return f"Hello, {name}!"
+
+        app.add_url_rule(
+            "/hello/<name>", view_func=Hello.as_view("hello")
+        )
+
+    Set :attr:`methods` on the class to change what methods the view
+    accepts.
+
+    Set :attr:`decorators` on the class to apply a list of decorators to
+    the generated view function. Decorators applied to the class itself
+    will not be applied to the generated view function!
+
+    Set :attr:`init_every_request` to ``False`` for efficiency, unless
+    you need to store request-global data on ``self``.
+    """
+
+    #: The methods this view is registered for.
Uses the same default + #: (``["GET", "HEAD", "OPTIONS"]``) as ``route`` and + #: ``add_url_rule`` by default. + methods: t.ClassVar[t.Collection[str] | None] = None + + #: Control whether the ``OPTIONS`` method is handled automatically. + #: Uses the same default (``True``) as ``route`` and + #: ``add_url_rule`` by default. + provide_automatic_options: t.ClassVar[bool | None] = None + + #: A list of decorators to apply, in order, to the generated view + #: function. Remember that ``@decorator`` syntax is applied bottom + #: to top, so the first decorator in the list would be the bottom + #: decorator. + #: + #: .. versionadded:: 0.8 + decorators: t.ClassVar[list[t.Callable[[F], F]]] = [] + + #: Create a new instance of this view class for every request by + #: default. If a view subclass sets this to ``False``, the same + #: instance is used for every request. + #: + #: A single instance is more efficient, especially if complex setup + #: is done during init. However, storing data on ``self`` is no + #: longer safe across requests, and :data:`~flask.g` should be used + #: instead. + #: + #: .. versionadded:: 2.2 + init_every_request: t.ClassVar[bool] = True + + def dispatch_request(self) -> ft.ResponseReturnValue: + """The actual view function behavior. Subclasses must override + this and return a valid response. Any variables from the URL + rule are passed as keyword arguments. + """ + raise NotImplementedError() + + @classmethod + def as_view( + cls, name: str, *class_args: t.Any, **class_kwargs: t.Any + ) -> ft.RouteCallable: + """Convert the class into a view function that can be registered + for a route. + + By default, the generated view will create a new instance of the + view class for every request and call its + :meth:`dispatch_request` method. If the view class sets + :attr:`init_every_request` to ``False``, the same instance will + be used for every request. + + Except for ``name``, all other arguments passed to this method + are forwarded to the view class ``__init__`` method. + + .. versionchanged:: 2.2 + Added the ``init_every_request`` class attribute. + """ + if cls.init_every_request: + + def view(**kwargs: t.Any) -> ft.ResponseReturnValue: + self = view.view_class( # type: ignore[attr-defined] + *class_args, **class_kwargs + ) + return current_app.ensure_sync(self.dispatch_request)(**kwargs) # type: ignore[no-any-return] + + else: + self = cls(*class_args, **class_kwargs) + + def view(**kwargs: t.Any) -> ft.ResponseReturnValue: + return current_app.ensure_sync(self.dispatch_request)(**kwargs) # type: ignore[no-any-return] + + if cls.decorators: + view.__name__ = name + view.__module__ = cls.__module__ + for decorator in cls.decorators: + view = decorator(view) + + # We attach the view class to the view function for two reasons: + # first of all it allows us to easily figure out what class-based + # view this thing came from, secondly it's also used for instantiating + # the view class so you can actually replace it with something else + # for testing purposes and debugging. + view.view_class = cls # type: ignore + view.__name__ = name + view.__doc__ = cls.__doc__ + view.__module__ = cls.__module__ + view.methods = cls.methods # type: ignore + view.provide_automatic_options = cls.provide_automatic_options # type: ignore + return view + + +class MethodView(View): + """Dispatches request methods to the corresponding instance methods. + For example, if you implement a ``get`` method, it will be used to + handle ``GET`` requests. 
+ + This can be useful for defining a REST API. + + :attr:`methods` is automatically set based on the methods defined on + the class. + + See :doc:`views` for a detailed guide. + + .. code-block:: python + + class CounterAPI(MethodView): + def get(self): + return str(session.get("counter", 0)) + + def post(self): + session["counter"] = session.get("counter", 0) + 1 + return redirect(url_for("counter")) + + app.add_url_rule( + "/counter", view_func=CounterAPI.as_view("counter") + ) + """ + + def __init_subclass__(cls, **kwargs: t.Any) -> None: + super().__init_subclass__(**kwargs) + + if "methods" not in cls.__dict__: + methods = set() + + for base in cls.__bases__: + if getattr(base, "methods", None): + methods.update(base.methods) # type: ignore[attr-defined] + + for key in http_method_funcs: + if hasattr(cls, key): + methods.add(key.upper()) + + if methods: + cls.methods = methods + + def dispatch_request(self, **kwargs: t.Any) -> ft.ResponseReturnValue: + meth = getattr(self, request.method.lower(), None) + + # If the request method is HEAD and we don't have a handler for it + # retry with GET. + if meth is None and request.method == "HEAD": + meth = getattr(self, "get", None) + + assert meth is not None, f"Unimplemented method {request.method!r}" + return current_app.ensure_sync(meth)(**kwargs) # type: ignore[no-any-return] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask/wrappers.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask/wrappers.py new file mode 100644 index 000000000..c1eca8078 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask/wrappers.py @@ -0,0 +1,174 @@ +from __future__ import annotations + +import typing as t + +from werkzeug.exceptions import BadRequest +from werkzeug.exceptions import HTTPException +from werkzeug.wrappers import Request as RequestBase +from werkzeug.wrappers import Response as ResponseBase + +from . import json +from .globals import current_app +from .helpers import _split_blueprint_path + +if t.TYPE_CHECKING: # pragma: no cover + from werkzeug.routing import Rule + + +class Request(RequestBase): + """The request object used by default in Flask. Remembers the + matched endpoint and view arguments. + + It is what ends up as :class:`~flask.request`. If you want to replace + the request object used you can subclass this and set + :attr:`~flask.Flask.request_class` to your subclass. + + The request object is a :class:`~werkzeug.wrappers.Request` subclass and + provides all of the attributes Werkzeug defines plus a few Flask + specific ones. + """ + + json_module: t.Any = json + + #: The internal URL rule that matched the request. This can be + #: useful to inspect which methods are allowed for the URL from + #: a before/after handler (``request.url_rule.methods``) etc. + #: Though if the request's method was invalid for the URL rule, + #: the valid list is available in ``routing_exception.valid_methods`` + #: instead (an attribute of the Werkzeug exception + #: :exc:`~werkzeug.exceptions.MethodNotAllowed`) + #: because the request was never internally bound. + #: + #: .. versionadded:: 0.6 + url_rule: Rule | None = None + + #: A dict of view arguments that matched the request. If an exception + #: happened when matching, this will be ``None``. + view_args: dict[str, t.Any] | None = None + + #: If matching the URL failed, this is the exception that will be + #: raised / was raised as part of the request handling. 
This is + #: usually a :exc:`~werkzeug.exceptions.NotFound` exception or + #: something similar. + routing_exception: HTTPException | None = None + + @property + def max_content_length(self) -> int | None: # type: ignore[override] + """Read-only view of the ``MAX_CONTENT_LENGTH`` config key.""" + if current_app: + return current_app.config["MAX_CONTENT_LENGTH"] # type: ignore[no-any-return] + else: + return None + + @property + def endpoint(self) -> str | None: + """The endpoint that matched the request URL. + + This will be ``None`` if matching failed or has not been + performed yet. + + This in combination with :attr:`view_args` can be used to + reconstruct the same URL or a modified URL. + """ + if self.url_rule is not None: + return self.url_rule.endpoint + + return None + + @property + def blueprint(self) -> str | None: + """The registered name of the current blueprint. + + This will be ``None`` if the endpoint is not part of a + blueprint, or if URL matching failed or has not been performed + yet. + + This does not necessarily match the name the blueprint was + created with. It may have been nested, or registered with a + different name. + """ + endpoint = self.endpoint + + if endpoint is not None and "." in endpoint: + return endpoint.rpartition(".")[0] + + return None + + @property + def blueprints(self) -> list[str]: + """The registered names of the current blueprint upwards through + parent blueprints. + + This will be an empty list if there is no current blueprint, or + if URL matching failed. + + .. versionadded:: 2.0.1 + """ + name = self.blueprint + + if name is None: + return [] + + return _split_blueprint_path(name) + + def _load_form_data(self) -> None: + super()._load_form_data() + + # In debug mode we're replacing the files multidict with an ad-hoc + # subclass that raises a different error for key errors. + if ( + current_app + and current_app.debug + and self.mimetype != "multipart/form-data" + and not self.files + ): + from .debughelpers import attach_enctype_error_multidict + + attach_enctype_error_multidict(self) + + def on_json_loading_failed(self, e: ValueError | None) -> t.Any: + try: + return super().on_json_loading_failed(e) + except BadRequest as e: + if current_app and current_app.debug: + raise + + raise BadRequest() from e + + +class Response(ResponseBase): + """The response object that is used by default in Flask. Works like the + response object from Werkzeug but is set to have an HTML mimetype by + default. Quite often you don't have to create this object yourself because + :meth:`~flask.Flask.make_response` will take care of that for you. + + If you want to replace the response object used you can subclass this and + set :attr:`~flask.Flask.response_class` to your subclass. + + .. versionchanged:: 1.0 + JSON support is added to the response, like the request. This is useful + when testing to get the test client response data as JSON. + + .. versionchanged:: 1.0 + + Added :attr:`max_cookie_size`. + """ + + default_mimetype: str | None = "text/html" + + json_module = json + + autocorrect_location_header = False + + @property + def max_cookie_size(self) -> int: # type: ignore + """Read-only view of the :data:`MAX_COOKIE_SIZE` config key. + + See :attr:`~werkzeug.wrappers.Response.max_cookie_size` in + Werkzeug's docs. 
+ """ + if current_app: + return current_app.config["MAX_COOKIE_SIZE"] # type: ignore[no-any-return] + + # return Werkzeug's default when not in an app context + return super().max_cookie_size diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_bcrypt.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_bcrypt.py new file mode 100644 index 000000000..994985a18 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_bcrypt.py @@ -0,0 +1,225 @@ +''' + flaskext.bcrypt + --------------- + + A Flask extension providing bcrypt hashing and comparison facilities. + + :copyright: (c) 2011 by Max Countryman. + :license: BSD, see LICENSE for more details. +''' + +from __future__ import absolute_import +from __future__ import print_function + +__version_info__ = ('1', '0', '1') +__version__ = '.'.join(__version_info__) +__author__ = 'Max Countryman' +__license__ = 'BSD' +__copyright__ = '(c) 2011 by Max Countryman' +__all__ = ['Bcrypt', 'check_password_hash', 'generate_password_hash'] + +import hmac + +try: + import bcrypt +except ImportError as e: + print('bcrypt is required to use Flask-Bcrypt') + raise e + +import hashlib + + +def generate_password_hash(password, rounds=None): + '''This helper function wraps the eponymous method of :class:`Bcrypt`. It + is intended to be used as a helper function at the expense of the + configuration variable provided when passing back the app object. In other + words this shortcut does not make use of the app object at all. + + To use this function, simply import it from the module and use it in a + similar fashion as the original method would be used. Here is a quick + example:: + + from flask_bcrypt import generate_password_hash + pw_hash = generate_password_hash('hunter2', 10) + + :param password: The password to be hashed. + :param rounds: The optional number of rounds. + ''' + return Bcrypt().generate_password_hash(password, rounds) + + +def check_password_hash(pw_hash, password): + '''This helper function wraps the eponymous method of :class:`Bcrypt.` It + is intended to be used as a helper function at the expense of the + configuration variable provided when passing back the app object. In other + words this shortcut does not make use of the app object at all. + + To use this function, simply import it from the module and use it in a + similar fashion as the original method would be used. Here is a quick + example:: + + from flask_bcrypt import check_password_hash + check_password_hash(pw_hash, 'hunter2') # returns True + + :param pw_hash: The hash to be compared against. + :param password: The password to compare. + ''' + return Bcrypt().check_password_hash(pw_hash, password) + + +class Bcrypt(object): + '''Bcrypt class container for password hashing and checking logic using + bcrypt, of course. This class may be used to intialize your Flask app + object. The purpose is to provide a simple interface for overriding + Werkzeug's built-in password hashing utilities. + + Although such methods are not actually overriden, the API is intentionally + made similar so that existing applications which make use of the previous + hashing functions might be easily adapted to the stronger facility of + bcrypt. + + To get started you will wrap your application's app object something like + this:: + + app = Flask(__name__) + bcrypt = Bcrypt(app) + + Now the two primary utility methods are exposed via this object, `bcrypt`. 
+
+    So in the context of the application, important data, such as
+    passwords, could be hashed using this syntax::
+
+        password = 'hunter2'
+        pw_hash = bcrypt.generate_password_hash(password)
+
+    Once hashed, the value is irreversible. However, in the case of
+    validating logins, the candidate password is simply hashed and then
+    compared to the existing hash. Importantly, the comparison should be
+    done in constant time, which helps prevent timing attacks. A simple
+    utility method is provided for this::
+
+        candidate = 'secret'
+        bcrypt.check_password_hash(pw_hash, candidate)
+
+    If the candidate and the existing password hash are a match,
+    `check_password_hash` returns True. Otherwise, it returns False.
+
+    .. admonition:: Namespacing Issues
+
+        It's worth noting that if you use the format `bcrypt = Bcrypt(app)`,
+        you are effectively overriding the bcrypt module. Though it's
+        unlikely you would need to access the module outside of the scope
+        of the extension, be aware that it's overridden.
+
+        Alternatively, consider using a different name, such as
+        `flask_bcrypt = Bcrypt(app)`, to prevent naming collisions.
+
+    Additionally, a configuration value for `BCRYPT_LOG_ROUNDS` may be set
+    in the configuration of the Flask app. If none is provided, this will
+    internally be assigned to 12. (This value is used in determining the
+    complexity of the encryption; see bcrypt for more details.)
+
+    You may also set the hash version using the `BCRYPT_HASH_PREFIX` field
+    in the configuration of the Flask app. If not set, this will default to
+    `2b`. (See bcrypt for more details.)
+
+    By default, the bcrypt algorithm has a maximum password length of 72
+    bytes and ignores any bytes beyond that. A common workaround is to hash
+    the given password using a cryptographic hash (such as `sha256`), take
+    its hexdigest to prevent NULL byte problems, and hash the result with
+    bcrypt. If the `BCRYPT_HANDLE_LONG_PASSWORDS` configuration value is
+    set to `True`, the workaround described above will be enabled.
+    **Warning: do not enable this option on a project that is already using
+    Flask-Bcrypt, or you will break password checking.**
+    **Warning: if this option is enabled on an existing project, disabling
+    it will break password checking.**
+
+    :param app: The Flask application object. Defaults to None.
+    '''
+
+    _log_rounds = 12
+    _prefix = '2b'
+    _handle_long_passwords = False
+
+    def __init__(self, app=None):
+        if app is not None:
+            self.init_app(app)
+
+    def init_app(self, app):
+        '''Initializes the application with the extension.
+
+        :param app: The Flask application object.
+        '''
+        self._log_rounds = app.config.get('BCRYPT_LOG_ROUNDS', 12)
+        self._prefix = app.config.get('BCRYPT_HASH_PREFIX', '2b')
+        self._handle_long_passwords = app.config.get(
+            'BCRYPT_HANDLE_LONG_PASSWORDS', False)
+
+    def _unicode_to_bytes(self, unicode_string):
+        '''Converts a unicode string to a bytes object.
+
+        :param unicode_string: The unicode string to convert.'''
+        if isinstance(unicode_string, str):
+            bytes_object = bytes(unicode_string, 'utf-8')
+        else:
+            bytes_object = unicode_string
+        return bytes_object
+
+    def generate_password_hash(self, password, rounds=None, prefix=None):
+        '''Generates a password hash using bcrypt. Specifying `rounds` sets
+        the log_rounds parameter of `bcrypt.gensalt()`, which determines the
+        complexity of the salt; 12 is the default value. Specifying `prefix`
+        sets the `prefix` parameter of `bcrypt.gensalt()`, which determines
+        the version of the algorithm used to create the hash.
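+
+        These per-call arguments take precedence over the app-level
+        configuration; a configuration sketch (values are illustrative)::
+
+            app.config['BCRYPT_LOG_ROUNDS'] = 13
+            app.config['BCRYPT_HASH_PREFIX'] = '2b'
+            bcrypt = Bcrypt(app)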
+ + Example usage of :class:`generate_password_hash` might look something + like this:: + + pw_hash = bcrypt.generate_password_hash('secret', 10) + + :param password: The password to be hashed. + :param rounds: The optional number of rounds. + :param prefix: The algorithm version to use. + ''' + + if not password: + raise ValueError('Password must be non-empty.') + + if rounds is None: + rounds = self._log_rounds + if prefix is None: + prefix = self._prefix + + # Python 3 unicode strings must be encoded as bytes before hashing. + password = self._unicode_to_bytes(password) + prefix = self._unicode_to_bytes(prefix) + + if self._handle_long_passwords: + password = hashlib.sha256(password).hexdigest() + password = self._unicode_to_bytes(password) + + salt = bcrypt.gensalt(rounds=rounds, prefix=prefix) + return bcrypt.hashpw(password, salt) + + def check_password_hash(self, pw_hash, password): + '''Tests a password hash against a candidate password. The candidate + password is first hashed and then subsequently compared in constant + time to the existing hash. This will either return `True` or `False`. + + Example usage of :class:`check_password_hash` would look something + like this:: + + pw_hash = bcrypt.generate_password_hash('secret', 10) + bcrypt.check_password_hash(pw_hash, 'secret') # returns True + + :param pw_hash: The hash to be compared against. + :param password: The password to compare. + ''' + + # Python 3 unicode strings must be encoded as bytes before hashing. + pw_hash = self._unicode_to_bytes(pw_hash) + password = self._unicode_to_bytes(password) + + if self._handle_long_passwords: + password = hashlib.sha256(password).hexdigest() + password = self._unicode_to_bytes(password) + + return hmac.compare_digest(bcrypt.hashpw(password, pw_hash), pw_hash) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/METADATA new file mode 100644 index 000000000..786a01c15 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/METADATA @@ -0,0 +1,143 @@ +Metadata-Version: 2.4 +Name: flask-cors +Version: 6.0.1 +Summary: A Flask extension simplifying CORS support +Author-email: Cory Dolphin +Project-URL: Homepage, https://corydolphin.github.io/flask-cors/ +Project-URL: Repository, https://github.com/corydolphin/flask-cors +Project-URL: Documentation, https://corydolphin.github.io/flask-cors/ +Keywords: python +Classifier: Intended Audience :: Developers +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Classifier: Programming Language :: Python :: 3.13 +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Requires-Python: <4.0,>=3.9 +Description-Content-Type: text/x-rst +Requires-Dist: flask>=0.9 +Requires-Dist: Werkzeug>=0.7 + +Flask-CORS +========== + +|Build Status| |Latest Version| |Supported 
Python versions| +|License| + +A Flask extension for handling Cross Origin Resource Sharing (CORS), making cross-origin AJAX possible. + +This package has a simple philosophy: when you want to enable CORS, you wish to enable it for all use cases on a domain. +This means no mucking around with different allowed headers, methods, etc. + +By default, submission of cookies across domains is disabled due to the security implications. +Please see the documentation for how to enable credential'ed requests, and please make sure you add some sort of `CSRF `__ protection before doing so! + +Installation +------------ + +Install the extension with using pip, or easy\_install. + +.. code:: bash + + $ pip install -U flask-cors + +Usage +----- + +This package exposes a Flask extension which by default enables CORS support on all routes, for all origins and methods. +It allows parameterization of all CORS headers on a per-resource level. +The package also contains a decorator, for those who prefer this approach. + +Simple Usage +~~~~~~~~~~~~ + +In the simplest case, initialize the Flask-Cors extension with default arguments in order to allow CORS for all domains on all routes. +See the full list of options in the `documentation `__. + +.. code:: python + + + from flask import Flask + from flask_cors import CORS + + app = Flask(__name__) + CORS(app) + + @app.route("/") + def helloWorld(): + return "Hello, cross-origin-world!" + +Resource specific CORS +^^^^^^^^^^^^^^^^^^^^^^ + +Alternatively, you can specify CORS options on a resource and origin level of granularity by passing a dictionary as the `resources` option, mapping paths to a set of options. +See the full list of options in the `documentation `__. + +.. code:: python + + app = Flask(__name__) + cors = CORS(app, resources={r"/api/*": {"origins": "*"}}) + + @app.route("/api/v1/users") + def list_users(): + return "user example" + +Route specific CORS via decorator +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +This extension also exposes a simple decorator to decorate flask routes with. +Simply add ``@cross_origin()`` below a call to Flask's ``@app.route(..)`` to allow CORS on a given route. +See the full list of options in the `decorator documentation `__. + +.. code:: python + + @app.route("/") + @cross_origin() + def helloWorld(): + return "Hello, cross-origin-world!" + +Documentation +------------- + +For a full list of options, please see the full `documentation `__ + +Troubleshooting +--------------- + +If things aren't working as you expect, enable logging to help understand what is going on under the hood, and why. + +.. code:: python + + logging.getLogger('flask_cors').level = logging.DEBUG + + + +Set Up Your Development Environment +--- +The development environment uses `uv` for Python version management as well as dependency management. +There are helpful Makefile targets to do everything you need. Use `make test` to get started! + + +Contributing +------------ + +Questions, comments or improvements? +Please create an issue on `Github `__, tweet at `@corydolphin `__ or send me an email. +I do my best to include every contribution proposed in any way that I can. + +Credits +------- + +This Flask extension is based upon the `Decorator for the HTTP Access Control `__ written by Armin Ronacher. + +.. |Build Status| image:: https://github.com/corydolphin/flask-cors/actions/workflows/unittests.yaml/badge.svg + :target: https://travis-ci.org/corydolphin/flask-cors +.. 
|Latest Version| image:: https://img.shields.io/pypi/v/Flask-Cors.svg + :target: https://pypi.python.org/pypi/Flask-Cors/ +.. |Supported Python versions| image:: https://img.shields.io/pypi/pyversions/Flask-Cors.svg + :target: https://img.shields.io/pypi/pyversions/Flask-Cors.svg +.. |License| image:: http://img.shields.io/:license-mit-blue.svg + :target: https://pypi.python.org/pypi/Flask-Cors/ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/RECORD new file mode 100644 index 000000000..4af3db3fa --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/RECORD @@ -0,0 +1,16 @@ +flask_cors-6.0.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +flask_cors-6.0.1.dist-info/METADATA,sha256=vdM_HWOTtWgELoF6BIMIUN4R-VH0ACm8zKIxqyFfuUk,5303 +flask_cors-6.0.1.dist-info/RECORD,, +flask_cors-6.0.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +flask_cors-6.0.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91 +flask_cors-6.0.1.dist-info/top_level.txt,sha256=aWye_0QNZPp_QtPF4ZluLHqnyVLT9CPJsfiGhwqkWuo,11 +flask_cors/__init__.py,sha256=Hnwpy4nxEz4tlcB3enWNXUBctcjgxRnUqa5B5zDabQ4,522 +flask_cors/__pycache__/__init__.cpython-310.pyc,, +flask_cors/__pycache__/core.cpython-310.pyc,, +flask_cors/__pycache__/decorator.cpython-310.pyc,, +flask_cors/__pycache__/extension.cpython-310.pyc,, +flask_cors/__pycache__/version.cpython-310.pyc,, +flask_cors/core.py,sha256=S8Wv7dz9e0FcR386TkBT-T8_VRsjFQGhNiR0jIpUYiY,14416 +flask_cors/decorator.py,sha256=YnPIDyWAfz60t2yxLgoOZLzgWWjZEvxQAPTTV4qfsss,4706 +flask_cors/extension.py,sha256=ONk6eNJiYT7HPEtmmySG8KgjzvAs9xtcJHNYvWLAS9s,8268 +flask_cors/version.py,sha256=unmO3xtN2S1e9meUbIjG3AZVyKxUe4HSZNXDOcpZRJg,22 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/REQUESTED b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/REQUESTED new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/WHEEL new file mode 100644 index 000000000..e7fa31b6f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: setuptools (80.9.0) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/top_level.txt b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/top_level.txt new file mode 100644 index 000000000..27af98818 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors-6.0.1.dist-info/top_level.txt @@ -0,0 +1 @@ +flask_cors diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/__init__.py new file mode 100644 index 000000000..732d401fe --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/__init__.py @@ -0,0 +1,17 @@ +from .decorator import cross_origin +from .extension import CORS +from .version import __version__ + +__all__ = ["CORS", "__version__", "cross_origin"] + +# Set default logging handler to avoid "No handler found" warnings. 
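+# To surface flask_cors debug output in an application, opt in explicitly,
+# as the README's troubleshooting section suggests (a sketch):
+#
+#     import logging
+#     logging.getLogger("flask_cors").setLevel(logging.DEBUG)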
+import logging +from logging import NullHandler + +# Set initial level to WARN. Users must manually enable logging for +# flask_cors to see our logging. +rootlogger = logging.getLogger(__name__) +rootlogger.addHandler(NullHandler()) + +if rootlogger.level == logging.NOTSET: + rootlogger.setLevel(logging.WARN) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/core.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/core.py new file mode 100644 index 000000000..8df343c93 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/core.py @@ -0,0 +1,399 @@ +import logging +import re +from collections.abc import Iterable +from datetime import timedelta + +from flask import current_app, request +from werkzeug.datastructures import Headers, MultiDict + +LOG = logging.getLogger(__name__) + +# Response Headers +ACL_ORIGIN = "Access-Control-Allow-Origin" +ACL_METHODS = "Access-Control-Allow-Methods" +ACL_ALLOW_HEADERS = "Access-Control-Allow-Headers" +ACL_EXPOSE_HEADERS = "Access-Control-Expose-Headers" +ACL_CREDENTIALS = "Access-Control-Allow-Credentials" +ACL_MAX_AGE = "Access-Control-Max-Age" +ACL_RESPONSE_PRIVATE_NETWORK = "Access-Control-Allow-Private-Network" + +# Request Header +ACL_REQUEST_METHOD = "Access-Control-Request-Method" +ACL_REQUEST_HEADERS = "Access-Control-Request-Headers" +ACL_REQUEST_HEADER_PRIVATE_NETWORK = "Access-Control-Request-Private-Network" + +ALL_METHODS = ["GET", "HEAD", "POST", "OPTIONS", "PUT", "PATCH", "DELETE"] +CONFIG_OPTIONS = [ + "CORS_ORIGINS", + "CORS_METHODS", + "CORS_ALLOW_HEADERS", + "CORS_EXPOSE_HEADERS", + "CORS_SUPPORTS_CREDENTIALS", + "CORS_MAX_AGE", + "CORS_SEND_WILDCARD", + "CORS_AUTOMATIC_OPTIONS", + "CORS_VARY_HEADER", + "CORS_RESOURCES", + "CORS_INTERCEPT_EXCEPTIONS", + "CORS_ALWAYS_SEND", + "CORS_ALLOW_PRIVATE_NETWORK", +] +# Attribute added to request object by decorator to indicate that CORS +# was evaluated, in case the decorator and extension are both applied +# to a view. +FLASK_CORS_EVALUATED = "_FLASK_CORS_EVALUATED" + +# Strange, but this gets the type of a compiled regex, which is otherwise not +# exposed in a public API. +RegexObject = type(re.compile("")) +DEFAULT_OPTIONS = dict( + origins="*", + methods=ALL_METHODS, + allow_headers="*", + expose_headers=None, + supports_credentials=False, + max_age=None, + send_wildcard=False, + automatic_options=True, + vary_header=True, + resources=r"/*", + intercept_exceptions=True, + always_send=True, + allow_private_network=False, +) + + +def parse_resources(resources): + if isinstance(resources, dict): + # To make the API more consistent with the decorator, allow a + # resource of '*', which is not actually a valid regexp. + resources = [(re_fix(k), v) for k, v in resources.items()] + + # Sort patterns with static (literal) paths first, then by regex specificity + def sort_key(pair): + pattern, _ = pair + if isinstance(pattern, RegexObject): + return (1, 0, -pattern.pattern.count("/"), -len(pattern.pattern)) + elif probably_regex(pattern): + return (1, 1, -pattern.count("/"), -len(pattern)) + else: + return (0, 0, -pattern.count("/"), -len(pattern)) + + return sorted(resources, key=sort_key) + + elif isinstance(resources, str): + return [(re_fix(resources), {})] + + elif isinstance(resources, Iterable): + return [(re_fix(r), {}) for r in resources] + + # Type of compiled regex is not part of the public API. Test for this + # at runtime. 
+    elif isinstance(resources, RegexObject):
+        return [(re_fix(resources), {})]
+
+    else:
+        raise ValueError("Unexpected value for resources argument.")
+
+
+def get_regexp_pattern(regexp):
+    """
+    Helper that returns the regexp pattern from a given value.
+
+    :param regexp: regular expression to stringify
+    :type regexp: _sre.SRE_Pattern or str
+    :returns: string representation of given regexp pattern
+    :rtype: str
+    """
+    try:
+        return regexp.pattern
+    except AttributeError:
+        return str(regexp)
+
+
+def get_cors_origins(options, request_origin):
+    origins = options.get("origins")
+    wildcard = r".*" in origins
+
+    # If the Origin header is not present, terminate this set of steps.
+    # The request is outside the scope of this specification. -- W3Spec
+    if request_origin:
+        LOG.debug("CORS request received with 'Origin' %s", request_origin)
+
+        # If the allowed origins is an asterisk or 'wildcard', always match
+        if wildcard and options.get("send_wildcard"):
+            LOG.debug("Allowed origins are set to '*'. Sending wildcard CORS header.")
+            return ["*"]
+        # If the value of the Origin header is a case-insensitive match
+        # for any of the values in the list of origins.
+        # NOTE: Per RFC 1035 and RFC 4343, schemes and hostnames are case insensitive.
+        elif try_match_any_pattern(request_origin, origins, caseSensitive=False):
+            LOG.debug(
+                "The request's Origin header matches. Sending CORS headers.",
+            )
+            # Add a single Access-Control-Allow-Origin header, with either
+            # the value of the Origin header or the string "*" as value.
+            # -- W3Spec
+            return [request_origin]
+        else:
+            LOG.debug("The request's Origin header does not match any of the allowed origins.")
+            return None
+
+    elif options.get("always_send"):
+        if wildcard:
+            # If the wildcard is in the origins, even if 'send_wildcard' is
+            # False, simply send the wildcard, unless supports_credentials
+            # is True, since that is forbidden by the spec. Returning '*'
+            # is the most likely to be correct thing to do (the only other
+            # option is to return nothing, which is almost certainly not
+            # what the developer wants if the '*' origin was specified).
+            if options.get("supports_credentials"):
+                return None
+            else:
+                return ["*"]
+        else:
+            # Return all origins that are not regexes.
+            return sorted([o for o in origins if not probably_regex(o)])
+
+    # Terminate these steps, return the original request untouched.
+    else:
+        LOG.debug(
+            "The request did not contain an 'Origin' header. This means the browser or client did not request CORS, ensure the Origin header is set."
+ ) + return None + + +def get_allow_headers(options, acl_request_headers): + if acl_request_headers: + request_headers = [h.strip() for h in acl_request_headers.split(",")] + + # any header that matches in the allow_headers + matching_headers = filter(lambda h: try_match_any_pattern(h, options.get("allow_headers"), caseSensitive=False), request_headers) + + return ", ".join(sorted(matching_headers)) + + return None + + +def get_cors_headers(options, request_headers, request_method): + origins_to_set = get_cors_origins(options, request_headers.get("Origin")) + headers = MultiDict() + + if not origins_to_set: # CORS is not enabled for this route + return headers + + for origin in origins_to_set: + headers.add(ACL_ORIGIN, origin) + + headers[ACL_EXPOSE_HEADERS] = options.get("expose_headers") + + if options.get("supports_credentials"): + headers[ACL_CREDENTIALS] = "true" # case sensitive + + if ( + ACL_REQUEST_HEADER_PRIVATE_NETWORK in request_headers + and request_headers.get(ACL_REQUEST_HEADER_PRIVATE_NETWORK) == "true" + ): + allow_private_network = "true" if options.get("allow_private_network") else "false" + headers[ACL_RESPONSE_PRIVATE_NETWORK] = allow_private_network + + # This is a preflight request + # http://www.w3.org/TR/cors/#resource-preflight-requests + if request_method == "OPTIONS": + acl_request_method = request_headers.get(ACL_REQUEST_METHOD, "").upper() + + # If there is no Access-Control-Request-Method header or if parsing + # failed, do not set any additional headers + if acl_request_method and acl_request_method in options.get("methods"): + # If method is not a case-sensitive match for any of the values in + # list of methods do not set any additional headers and terminate + # this set of steps. + headers[ACL_ALLOW_HEADERS] = get_allow_headers(options, request_headers.get(ACL_REQUEST_HEADERS)) + headers[ACL_MAX_AGE] = options.get("max_age") + headers[ACL_METHODS] = options.get("methods") + else: + LOG.info( + "The request's Access-Control-Request-Method header does not match allowed methods. CORS headers will not be applied." + ) + + # http://www.w3.org/TR/cors/#resource-implementation + if options.get("vary_header"): + # Only set header if the origin returned will vary dynamically, + # i.e. if we are not returning an asterisk, and there are multiple + # origins that can be matched. + if headers[ACL_ORIGIN] == "*": + pass + elif ( + len(options.get("origins")) > 1 + or len(origins_to_set) > 1 + or any(map(probably_regex, options.get("origins"))) + ): + headers.add("Vary", "Origin") + + return MultiDict((k, v) for k, v in headers.items() if v) + + +def set_cors_headers(resp, options): + """ + Performs the actual evaluation of Flask-CORS options and actually + modifies the response object. + + This function is used both in the decorator and the after_request + callback + """ + + # If CORS has already been evaluated via the decorator, skip + if hasattr(resp, FLASK_CORS_EVALUATED): + LOG.debug("CORS have been already evaluated, skipping") + return resp + + # Some libraries, like OAuthlib, set resp.headers to non Multidict + # objects (Werkzeug Headers work as well). This is a problem because + # headers allow repeated values. 
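+    # A plain dict would collapse repeated keys (e.g. two "Vary" values),
+    # so coerce the headers to a MultiDict before adding CORS headers below.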
+    if not isinstance(resp.headers, Headers) and not isinstance(resp.headers, MultiDict):
+        resp.headers = MultiDict(resp.headers)
+
+    headers_to_set = get_cors_headers(options, request.headers, request.method)
+
+    LOG.debug("Setting CORS headers: %s", str(headers_to_set))
+
+    for k, v in headers_to_set.items():
+        resp.headers.add(k, v)
+
+    return resp
+
+
+def probably_regex(maybe_regex):
+    if isinstance(maybe_regex, RegexObject):
+        return True
+    else:
+        common_regex_chars = ["*", "\\", "]", "?", "$", "^", "[", "]", "(", ")"]
+        # Use characters common in regular expressions as a proxy for
+        # whether this string is in fact a regex.
+        return any(c in maybe_regex for c in common_regex_chars)
+
+
+def re_fix(reg):
+    """
+    Replace the invalid regex r'*' with the valid, wildcard regex r'.*' to
+    enable the CORS app extension to have a more user-friendly API.
+    """
+    return r".*" if reg == r"*" else reg
+
+
+def try_match_any_pattern(inst, patterns, caseSensitive=True):
+    return any(try_match_pattern(inst, pattern, caseSensitive) for pattern in patterns)
+
+
+def try_match_pattern(value, pattern, caseSensitive=True):
+    """
+    Safely attempts to match a pattern or string to a value. This
+    function can be used to match request origins, headers, or paths.
+    The value of caseSensitive should be set in accordance with the
+    data being compared, e.g. origins and headers are case-insensitive
+    whereas paths are case-sensitive.
+    """
+    if isinstance(pattern, RegexObject):
+        return re.match(pattern, value)
+    if probably_regex(pattern):
+        flags = 0 if caseSensitive else re.IGNORECASE
+        try:
+            return re.match(pattern, value, flags=flags)
+        except re.error:
+            return False
+    try:
+        v = str(value)
+        p = str(pattern)
+        return v == p if caseSensitive else v.casefold() == p.casefold()
+    except Exception:
+        return value == pattern
+
+
+def get_cors_options(appInstance, *dicts):
+    """
+    Compute CORS options for an application by combining the DEFAULT_OPTIONS,
+    the app's configuration-specified options and any dictionaries passed. The
+    last specified option wins.
+    """
+    options = DEFAULT_OPTIONS.copy()
+    options.update(get_app_kwarg_dict(appInstance))
+    if dicts:
+        for d in dicts:
+            options.update(d)
+
+    return serialize_options(options)
+
+
+def get_app_kwarg_dict(appInstance=None):
+    """Returns the dictionary of CORS-specific app configurations."""
+    app = appInstance or current_app
+
+    # In order to support blueprints which do not have a config attribute
+    app_config = getattr(app, "config", {})
+
+    return {k.lower().replace("cors_", ""): app_config.get(k) for k in CONFIG_OPTIONS if app_config.get(k) is not None}
+
+
+def flexible_str(obj):
+    """
+    A more flexible str function which intelligently handles stringifying
+    strings, lists and other iterables. The results are lexicographically
+    sorted to ensure generated responses are consistent when iterables such
+    as Set are used.
+    """
+    if obj is None:
+        return None
+    elif not isinstance(obj, str) and isinstance(obj, Iterable):
+        return ", ".join(str(item) for item in sorted(obj))
+    else:
+        return str(obj)
+
+
+def serialize_option(options_dict, key, upper=False):
+    if key in options_dict:
+        value = flexible_str(options_dict[key])
+        options_dict[key] = value.upper() if upper else value
+
+
+def ensure_iterable(inst):
+    """
+    Wraps scalars or string types as a list, or returns the iterable instance.
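+
+    For example, ``ensure_iterable("GET")`` returns ``["GET"]``, while a
+    list such as ``["GET", "POST"]`` is returned unchanged.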
+ """ + if isinstance(inst, str) or not isinstance(inst, Iterable): + return [inst] + else: + return inst + + +def sanitize_regex_param(param): + return [re_fix(x) for x in ensure_iterable(param)] + + +def serialize_options(opts): + """ + A helper method to serialize and processes the options dictionary. + """ + options = (opts or {}).copy() + + for key in opts.keys(): + if key not in DEFAULT_OPTIONS: + LOG.warning("Unknown option passed to Flask-CORS: %s", key) + + # Ensure origins is a list of allowed origins with at least one entry. + options["origins"] = sanitize_regex_param(options.get("origins")) + options["allow_headers"] = sanitize_regex_param(options.get("allow_headers")) + + # This is expressly forbidden by the spec. Raise a value error so people + # don't get burned in production. + if r".*" in options["origins"] and options["supports_credentials"] and options["send_wildcard"]: + raise ValueError( + "Cannot use supports_credentials in conjunction with" + "an origin string of '*'. See: " + "http://www.w3.org/TR/cors/#resource-requests" + ) + + serialize_option(options, "expose_headers") + serialize_option(options, "methods", upper=True) + + if isinstance(options.get("max_age"), timedelta): + options["max_age"] = str(int(options["max_age"].total_seconds())) + + return options diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/decorator.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/decorator.py new file mode 100644 index 000000000..f8915b62d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/decorator.py @@ -0,0 +1,129 @@ +import logging +from functools import update_wrapper + +from flask import current_app, make_response, request + +from .core import FLASK_CORS_EVALUATED, get_cors_options, set_cors_headers + +LOG = logging.getLogger(__name__) + + +def cross_origin(*args, **kwargs): + """ + This function is the decorator which is used to wrap a Flask route with. + In the simplest case, simply use the default parameters to allow all + origins in what is the most permissive configuration. If this method + modifies state or performs authentication which may be brute-forced, you + should add some degree of protection, such as Cross Site Request Forgery + protection. + + :param origins: + The origin, or list of origins to allow requests from. + The origin(s) may be regular expressions, case-sensitive strings, + or else an asterisk + + Default : '*' + :type origins: list, string or regex + + :param methods: + The method or list of methods which the allowed origins are allowed to + access for non-simple requests. + + Default : [GET, HEAD, POST, OPTIONS, PUT, PATCH, DELETE] + :type methods: list or string + + :param expose_headers: + The header or list which are safe to expose to the API of a CORS API + specification. + + Default : None + :type expose_headers: list or string + + :param allow_headers: + The header or list of header field names which can be used when this + resource is accessed by allowed origins. The header(s) may be regular + expressions, case-sensitive strings, or else an asterisk. + + Default : '*', allow all headers + :type allow_headers: list, string or regex + + :param supports_credentials: + Allows users to make authenticated requests. If true, injects the + `Access-Control-Allow-Credentials` header in responses. This allows + cookies and credentials to be submitted across domains. 
+ + :note: This option cannot be used in conjunction with a '*' origin + + Default : False + :type supports_credentials: bool + + :param max_age: + The maximum time for which this CORS request maybe cached. This value + is set as the `Access-Control-Max-Age` header. + + Default : None + :type max_age: timedelta, integer, string or None + + :param send_wildcard: If True, and the origins parameter is `*`, a wildcard + `Access-Control-Allow-Origin` header is sent, rather than the + request's `Origin` header. + + Default : False + :type send_wildcard: bool + + :param vary_header: + If True, the header Vary: Origin will be returned as per the W3 + implementation guidelines. + + Setting this header when the `Access-Control-Allow-Origin` is + dynamically generated (e.g. when there is more than one allowed + origin, and an Origin than '*' is returned) informs CDNs and other + caches that the CORS headers are dynamic, and cannot be cached. + + If False, the Vary header will never be injected or altered. + + Default : True + :type vary_header: bool + + :param automatic_options: + Only applies to the `cross_origin` decorator. If True, Flask-CORS will + override Flask's default OPTIONS handling to return CORS headers for + OPTIONS requests. + + Default : True + :type automatic_options: bool + + """ + _options = kwargs + + def decorator(f): + LOG.debug("Enabling %s for cross_origin using options:%s", f, _options) + + # If True, intercept OPTIONS requests by modifying the view function, + # replicating Flask's default behavior, and wrapping the response with + # CORS headers. + # + # If f.provide_automatic_options is unset or True, Flask's route + # decorator (which is actually wraps the function object we return) + # intercepts OPTIONS handling, and requests will not have CORS headers + if _options.get("automatic_options", True): + f.required_methods = getattr(f, "required_methods", set()) + f.required_methods.add("OPTIONS") + f.provide_automatic_options = False + + def wrapped_function(*args, **kwargs): + # Handle setting of Flask-Cors parameters + options = get_cors_options(current_app, _options) + + if options.get("automatic_options") and request.method == "OPTIONS": + resp = current_app.make_default_options_response() + else: + resp = make_response(f(*args, **kwargs)) + + set_cors_headers(resp, options) + setattr(resp, FLASK_CORS_EVALUATED, True) + return resp + + return update_wrapper(wrapped_function, f) + + return decorator diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/extension.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/extension.py new file mode 100644 index 000000000..434f65eaa --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/extension.py @@ -0,0 +1,206 @@ +import logging +from urllib.parse import unquote + +from flask import request + +from .core import ACL_ORIGIN, get_cors_options, get_regexp_pattern, parse_resources, set_cors_headers, try_match_pattern + +LOG = logging.getLogger(__name__) + + +class CORS: + """ + Initializes Cross Origin Resource sharing for the application. The + arguments are identical to `cross_origin`, with the addition of a + `resources` parameter. The resources parameter defines a series of regular + expressions for resource paths to match and optionally, the associated + options to be applied to the particular resource. These options are + identical to the arguments to `cross_origin`. + + The settings for CORS are determined in the following order + + 1. 
Resource level settings (e.g when passed as a dictionary) + 2. Keyword argument settings + 3. App level configuration settings (e.g. CORS_*) + 4. Default settings + + Note: as it is possible for multiple regular expressions to match a + resource path, the regular expressions are first sorted by length, + from longest to shortest, in order to attempt to match the most + specific regular expression. This allows the definition of a + number of specific resource options, with a wildcard fallback + for all other resources. + + :param resources: + The series of regular expression and (optionally) associated CORS + options to be applied to the given resource path. + + If the argument is a dictionary, it's keys must be regular expressions, + and the values must be a dictionary of kwargs, identical to the kwargs + of this function. + + If the argument is a list, it is expected to be a list of regular + expressions, for which the app-wide configured options are applied. + + If the argument is a string, it is expected to be a regular expression + for which the app-wide configured options are applied. + + Default : Match all and apply app-level configuration + + :type resources: dict, iterable or string + + :param origins: + The origin, or list of origins to allow requests from. + The origin(s) may be regular expressions, case-sensitive strings, + or else an asterisk. + + .. note:: + + origins must include the schema and the port (if not port 80), + e.g., + `CORS(app, origins=["http://localhost:8000", "https://example.com"])`. + + Default : '*' + :type origins: list, string or regex + + :param methods: + The method or list of methods which the allowed origins are allowed to + access for non-simple requests. + + Default : [GET, HEAD, POST, OPTIONS, PUT, PATCH, DELETE] + :type methods: list or string + + :param expose_headers: + The header or list which are safe to expose to the API of a CORS API + specification. + + Default : None + :type expose_headers: list or string + + :param allow_headers: + The header or list of header field names which can be used when this + resource is accessed by allowed origins. The header(s) may be regular + expressions, case-sensitive strings, or else an asterisk. + + Default : '*', allow all headers + :type allow_headers: list, string or regex + + :param supports_credentials: + Allows users to make authenticated requests. If true, injects the + `Access-Control-Allow-Credentials` header in responses. This allows + cookies and credentials to be submitted across domains. + + :note: This option cannot be used in conjunction with a '*' origin + + Default : False + :type supports_credentials: bool + + :param max_age: + The maximum time for which this CORS request maybe cached. This value + is set as the `Access-Control-Max-Age` header. + + Default : None + :type max_age: timedelta, integer, string or None + + :param send_wildcard: If True, and the origins parameter is `*`, a wildcard + `Access-Control-Allow-Origin` header is sent, rather than the + request's `Origin` header. + + Default : False + :type send_wildcard: bool + + :param vary_header: + If True, the header Vary: Origin will be returned as per the W3 + implementation guidelines. + + Setting this header when the `Access-Control-Allow-Origin` is + dynamically generated (e.g. when there is more than one allowed + origin, and an Origin than '*' is returned) informs CDNs and other + caches that the CORS headers are dynamic, and cannot be cached. + + If False, the Vary header will never be injected or altered. 
+ + Default : True + :type vary_header: bool + + :param allow_private_network: + If True, the response header `Access-Control-Allow-Private-Network` + will be set with the value 'true' whenever the request header + `Access-Control-Request-Private-Network` has a value 'true'. + + If False, the response header `Access-Control-Allow-Private-Network` + will be set with the value 'false' whenever the request header + `Access-Control-Request-Private-Network` has a value of 'true'. + + If the request header `Access-Control-Request-Private-Network` is + not present or has a value other than 'true', the response header + `Access-Control-Allow-Private-Network` will not be set. + + Default : True + :type allow_private_network: bool + """ + + def __init__(self, app=None, **kwargs): + self._options = kwargs + if app is not None: + self.init_app(app, **kwargs) + + def init_app(self, app, **kwargs): + # The resources and options may be specified in the App Config, the CORS constructor + # or the kwargs to the call to init_app. + options = get_cors_options(app, self._options, kwargs) + + # Flatten our resources into a list of the form + # (pattern_or_regexp, dictionary_of_options) + resources = parse_resources(options.get("resources")) + + # Compute the options for each resource by combining the options from + # the app's configuration, the constructor, the kwargs to init_app, and + # finally the options specified in the resources dictionary. + resources = [(pattern, get_cors_options(app, options, opts)) for (pattern, opts) in resources] + + # Create a human-readable form of these resources by converting the compiled + # regular expressions into strings. + resources_human = {get_regexp_pattern(pattern): opts for (pattern, opts) in resources} + LOG.debug("Configuring CORS with resources: %s", resources_human) + + cors_after_request = make_after_request_function(resources) + app.after_request(cors_after_request) + + # Wrap exception handlers with cross_origin + # These error handlers will still respect the behavior of the route + if options.get("intercept_exceptions", True): + + def _after_request_decorator(f): + def wrapped_function(*args, **kwargs): + return cors_after_request(app.make_response(f(*args, **kwargs))) + + return wrapped_function + + if hasattr(app, "handle_exception"): + app.handle_exception = _after_request_decorator(app.handle_exception) + app.handle_user_exception = _after_request_decorator(app.handle_user_exception) + + +def make_after_request_function(resources): + def cors_after_request(resp): + # If CORS headers are set in a view decorator, pass + if resp.headers is not None and resp.headers.get(ACL_ORIGIN): + LOG.debug("CORS have been already evaluated, skipping") + return resp + normalized_path = unquote(request.path) + for res_regex, res_options in resources: + if try_match_pattern(normalized_path, res_regex, caseSensitive=True): + LOG.debug( + "Request to '%r' matches CORS resource '%s'. 
Using options: %s", + request.path, + get_regexp_pattern(res_regex), + res_options, + ) + set_cors_headers(resp, res_options) + break + else: + LOG.debug("No CORS rule matches") + return resp + + return cors_after_request diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/version.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/version.py new file mode 100644 index 000000000..79a961b4f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_cors/version.py @@ -0,0 +1 @@ +__version__ = "6.0.1" diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/__init__.py new file mode 100644 index 000000000..e7fa12560 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/__init__.py @@ -0,0 +1,22 @@ +from .jwt_manager import JWTManager as JWTManager +from .utils import create_access_token as create_access_token +from .utils import create_refresh_token as create_refresh_token +from .utils import current_user as current_user +from .utils import decode_token as decode_token +from .utils import get_csrf_token as get_csrf_token +from .utils import get_current_user as get_current_user +from .utils import get_jti as get_jti +from .utils import get_jwt as get_jwt +from .utils import get_jwt_header as get_jwt_header +from .utils import get_jwt_identity as get_jwt_identity +from .utils import get_jwt_request_location as get_jwt_request_location +from .utils import get_unverified_jwt_headers as get_unverified_jwt_headers +from .utils import set_access_cookies as set_access_cookies +from .utils import set_refresh_cookies as set_refresh_cookies +from .utils import unset_access_cookies as unset_access_cookies +from .utils import unset_jwt_cookies as unset_jwt_cookies +from .utils import unset_refresh_cookies as unset_refresh_cookies +from .view_decorators import jwt_required as jwt_required +from .view_decorators import verify_jwt_in_request as verify_jwt_in_request + +__version__ = "4.7.1" diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/config.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/config.py new file mode 100644 index 000000000..7d2565165 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/config.py @@ -0,0 +1,319 @@ +from datetime import datetime +from datetime import timedelta +from datetime import timezone +from json import JSONEncoder +from typing import Iterable +from typing import List +from typing import Optional +from typing import Sequence +from typing import Type +from typing import Union + +from flask import current_app +from jwt.algorithms import requires_cryptography + +from flask_jwt_extended.internal_utils import get_json_encoder +from flask_jwt_extended.typing import ExpiresDelta + + +class _Config(object): + """ + Helper object for accessing and verifying options in this extension. This + is meant for internal use of the application; modifying config options + should be done with flasks ```app.config```. + + Default values for the configuration options are set in the jwt_manager + object. All of these values are read only. This is simply a loose wrapper + with some helper functionality for flasks `app.config`. 
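+
+    For example, the properties below simply proxy the corresponding config
+    values; a hypothetical sketch (``app`` is an assumed Flask application on
+    which a ``JWTManager`` has already been initialized)::
+
+        with app.app_context():
+            assert config.header_name == "Authorization"  # default unless overridden
+            assert config.jwt_in_headers  # "headers" is the default token location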
+ """ + + @property + def is_asymmetric(self) -> bool: + return self.algorithm in requires_cryptography + + @property + def encode_key(self) -> str: + return self._private_key if self.is_asymmetric else self._secret_key + + @property + def decode_key(self) -> str: + return self._public_key if self.is_asymmetric else self._secret_key + + @property + def token_location(self) -> Sequence[str]: + locations = current_app.config["JWT_TOKEN_LOCATION"] + if isinstance(locations, str): + locations = (locations,) + elif not isinstance(locations, Iterable): + raise RuntimeError("JWT_TOKEN_LOCATION must be a sequence or a set") + elif not locations: + raise RuntimeError( + "JWT_TOKEN_LOCATION must contain at least one " + 'of "headers", "cookies", "query_string", or "json"' + ) + for location in locations: + if location not in ("headers", "cookies", "query_string", "json"): + raise RuntimeError( + "JWT_TOKEN_LOCATION can only contain " + '"headers", "cookies", "query_string", or "json"' + ) + return locations + + @property + def jwt_in_cookies(self) -> bool: + return "cookies" in self.token_location + + @property + def jwt_in_headers(self) -> bool: + return "headers" in self.token_location + + @property + def jwt_in_query_string(self) -> bool: + return "query_string" in self.token_location + + @property + def jwt_in_json(self) -> bool: + return "json" in self.token_location + + @property + def header_name(self) -> str: + name = current_app.config["JWT_HEADER_NAME"] + if not name: + raise RuntimeError("JWT_ACCESS_HEADER_NAME cannot be empty") + return name + + @property + def header_type(self) -> str: + return current_app.config["JWT_HEADER_TYPE"] + + @property + def query_string_name(self) -> str: + return current_app.config["JWT_QUERY_STRING_NAME"] + + @property + def query_string_value_prefix(self) -> str: + return current_app.config["JWT_QUERY_STRING_VALUE_PREFIX"] + + @property + def access_cookie_name(self) -> str: + return current_app.config["JWT_ACCESS_COOKIE_NAME"] + + @property + def refresh_cookie_name(self) -> str: + return current_app.config["JWT_REFRESH_COOKIE_NAME"] + + @property + def access_cookie_path(self) -> str: + return current_app.config["JWT_ACCESS_COOKIE_PATH"] + + @property + def refresh_cookie_path(self) -> str: + return current_app.config["JWT_REFRESH_COOKIE_PATH"] + + @property + def cookie_secure(self) -> bool: + return current_app.config["JWT_COOKIE_SECURE"] + + @property + def cookie_domain(self) -> str: + return current_app.config["JWT_COOKIE_DOMAIN"] + + @property + def session_cookie(self) -> bool: + return current_app.config["JWT_SESSION_COOKIE"] + + @property + def cookie_samesite(self) -> str: + return current_app.config["JWT_COOKIE_SAMESITE"] + + @property + def json_key(self) -> str: + return current_app.config["JWT_JSON_KEY"] + + @property + def refresh_json_key(self) -> str: + return current_app.config["JWT_REFRESH_JSON_KEY"] + + @property + def cookie_csrf_protect(self) -> bool: + return current_app.config["JWT_COOKIE_CSRF_PROTECT"] + + @property + def csrf_request_methods(self) -> Iterable[str]: + return current_app.config["JWT_CSRF_METHODS"] + + @property + def csrf_in_cookies(self) -> bool: + return current_app.config["JWT_CSRF_IN_COOKIES"] + + @property + def access_csrf_cookie_name(self) -> str: + return current_app.config["JWT_ACCESS_CSRF_COOKIE_NAME"] + + @property + def refresh_csrf_cookie_name(self) -> str: + return current_app.config["JWT_REFRESH_CSRF_COOKIE_NAME"] + + @property + def access_csrf_cookie_path(self) -> str: + return 
current_app.config["JWT_ACCESS_CSRF_COOKIE_PATH"] + + @property + def refresh_csrf_cookie_path(self) -> str: + return current_app.config["JWT_REFRESH_CSRF_COOKIE_PATH"] + + @property + def access_csrf_header_name(self) -> str: + return current_app.config["JWT_ACCESS_CSRF_HEADER_NAME"] + + @property + def refresh_csrf_header_name(self) -> str: + return current_app.config["JWT_REFRESH_CSRF_HEADER_NAME"] + + @property + def csrf_check_form(self) -> bool: + return current_app.config["JWT_CSRF_CHECK_FORM"] + + @property + def access_csrf_field_name(self) -> str: + return current_app.config["JWT_ACCESS_CSRF_FIELD_NAME"] + + @property + def refresh_csrf_field_name(self) -> str: + return current_app.config["JWT_REFRESH_CSRF_FIELD_NAME"] + + @property + def access_expires(self) -> ExpiresDelta: + delta = current_app.config["JWT_ACCESS_TOKEN_EXPIRES"] + if type(delta) is int: + delta = timedelta(seconds=delta) + if delta is not False: + try: + # Basically runtime typechecking. Probably a better way to do + # this with proper type checking + delta + datetime.now(timezone.utc) + except TypeError as e: + err = ( + "must be able to add JWT_ACCESS_TOKEN_EXPIRES to datetime.datetime" + ) + raise RuntimeError(err) from e + return delta + + @property + def refresh_expires(self) -> ExpiresDelta: + delta = current_app.config["JWT_REFRESH_TOKEN_EXPIRES"] + if type(delta) is int: + delta = timedelta(seconds=delta) + if delta is not False: + # Basically runtime typechecking. Probably a better way to do + # this with proper type checking + try: + delta + datetime.now(timezone.utc) + except TypeError as e: + err = ( + "must be able to add JWT_REFRESH_TOKEN_EXPIRES to datetime.datetime" + ) + raise RuntimeError(err) from e + return delta + + @property + def algorithm(self) -> str: + return current_app.config["JWT_ALGORITHM"] + + @property + def decode_algorithms(self) -> List[str]: + algorithms = current_app.config["JWT_DECODE_ALGORITHMS"] + if not algorithms: + return [self.algorithm] + if self.algorithm not in algorithms: + algorithms.append(self.algorithm) + return algorithms + + @property + def _secret_key(self) -> str: + key = current_app.config["JWT_SECRET_KEY"] + if not key: + key = current_app.config.get("SECRET_KEY", None) + if not key: + raise RuntimeError( + "JWT_SECRET_KEY or flask SECRET_KEY " + "must be set when using symmetric " + 'algorithm "{}"'.format(self.algorithm) + ) + return key + + @property + def _public_key(self) -> str: + key = current_app.config["JWT_PUBLIC_KEY"] + if not key: + raise RuntimeError( + "JWT_PUBLIC_KEY must be set to use " + "asymmetric cryptography algorithm " + '"{}"'.format(self.algorithm) + ) + return key + + @property + def _private_key(self) -> str: + key = current_app.config["JWT_PRIVATE_KEY"] + if not key: + raise RuntimeError( + "JWT_PRIVATE_KEY must be set to use " + "asymmetric cryptography algorithm " + '"{}"'.format(self.algorithm) + ) + return key + + @property + def cookie_max_age(self) -> Optional[int]: + # Returns the appropiate value for max_age for flask set_cookies. 
If + # session cookie is true, return None, otherwise return a number of + # seconds 1 year in the future + return None if self.session_cookie else 31540000 # 1 year + + @property + def identity_claim_key(self) -> str: + return current_app.config["JWT_IDENTITY_CLAIM"] + + @property + def exempt_methods(self) -> Iterable[str]: + return {"OPTIONS"} + + @property + def error_msg_key(self) -> str: + return current_app.config["JWT_ERROR_MESSAGE_KEY"] + + @property + def json_encoder(self) -> Type[JSONEncoder]: + return get_json_encoder(current_app) + + @property + def decode_audience(self) -> Union[str, Iterable[str]]: + return current_app.config["JWT_DECODE_AUDIENCE"] + + @property + def encode_audience(self) -> Union[str, Iterable[str]]: + return current_app.config["JWT_ENCODE_AUDIENCE"] + + @property + def encode_issuer(self) -> str: + return current_app.config["JWT_ENCODE_ISSUER"] + + @property + def decode_issuer(self) -> str: + return current_app.config["JWT_DECODE_ISSUER"] + + @property + def leeway(self) -> int: + return current_app.config["JWT_DECODE_LEEWAY"] + + @property + def verify_sub(self) -> bool: + return current_app.config["JWT_VERIFY_SUB"] + + @property + def encode_nbf(self) -> bool: + return current_app.config["JWT_ENCODE_NBF"] + + +config = _Config() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/default_callbacks.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/default_callbacks.py new file mode 100644 index 000000000..6f10e5da2 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/default_callbacks.py @@ -0,0 +1,161 @@ +""" +These are the default methods implementations that are used in this extension. +All of these can be updated on an app by app basis using the JWTManager +loader decorators. For further information, check out the following links: + +http://flask-jwt-extended.readthedocs.io/en/latest/changing_default_behavior.html +http://flask-jwt-extended.readthedocs.io/en/latest/tokens_from_complex_object.html +""" +from http import HTTPStatus +from typing import Any + +from flask import jsonify +from flask.typing import ResponseReturnValue + +from flask_jwt_extended.config import config + + +def default_additional_claims_callback(userdata: Any) -> dict: + """ + By default, we add no additional claims to the access tokens. + + :param userdata: data passed in as the ```identity``` argument to the + ```create_access_token``` and ```create_refresh_token``` + functions + """ + return {} + + +def default_blocklist_callback(jwt_headers: dict, jwt_data: dict) -> bool: + return False + + +def default_jwt_headers_callback(default_headers) -> dict: + """ + By default header typically consists of two parts: the type of the token, + which is JWT, and the signing algorithm being used, such as HMAC SHA256 + or RSA. But we don't set the default header here we set it as empty which + further by default set while encoding the token + :return: default we set None here + """ + return {} + + +def default_user_identity_callback(userdata: Any) -> str: + """ + By default, we use the passed in object directly as the jwt identity. 
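+
+    An application can override this with its own loader, e.g. (a hypothetical
+    sketch; ``jwt`` is an assumed ``JWTManager`` instance and ``user`` an
+    assumed model object)::
+
+        @jwt.user_identity_loader
+        def user_identity_lookup(user):
+            # Store only the primary key in the identity claim
+            return str(user.id)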
+
+    :param userdata: data passed in as the ``identity`` argument to the
+                     ``create_access_token`` and ``create_refresh_token``
+                     functions
+    """
+    return userdata
+
+
+def default_expired_token_callback(
+    _expired_jwt_header: dict, _expired_jwt_data: dict
+) -> ResponseReturnValue:
+    """
+    By default, if an expired token attempts to access a protected endpoint,
+    we return a generic error message with a 401 status
+    """
+    return jsonify({config.error_msg_key: "Token has expired"}), HTTPStatus.UNAUTHORIZED
+
+
+def default_invalid_token_callback(error_string: str) -> ResponseReturnValue:
+    """
+    By default, if an invalid token attempts to access a protected endpoint, we
+    return the error string for why it is not valid with a 422 status code
+
+    :param error_string: String indicating why the token is invalid
+    """
+    return (
+        jsonify({config.error_msg_key: error_string}),
+        HTTPStatus.UNPROCESSABLE_ENTITY,
+    )
+
+
+def default_unauthorized_callback(error_string: str) -> ResponseReturnValue:
+    """
+    By default, if a protected endpoint is accessed without a JWT, we return
+    the error string indicating why this is unauthorized, with a 401 status code
+
+    :param error_string: String indicating why this request is unauthorized
+    """
+    return jsonify({config.error_msg_key: error_string}), HTTPStatus.UNAUTHORIZED
+
+
+def default_needs_fresh_token_callback(
+    jwt_header: dict, jwt_data: dict
+) -> ResponseReturnValue:
+    """
+    By default, if a non-fresh jwt is used to access a ``fresh_jwt_required``
+    endpoint, we return a general error message with a 401 status code
+    """
+    return (
+        jsonify({config.error_msg_key: "Fresh token required"}),
+        HTTPStatus.UNAUTHORIZED,
+    )
+
+
+def default_revoked_token_callback(
+    jwt_header: dict, jwt_data: dict
+) -> ResponseReturnValue:
+    """
+    By default, if a revoked token is used to access a protected endpoint, we
+    return a general error message with a 401 status code
+    """
+    return (
+        jsonify({config.error_msg_key: "Token has been revoked"}),
+        HTTPStatus.UNAUTHORIZED,
+    )
+
+
+def default_user_lookup_error_callback(
+    _jwt_header: dict, jwt_data: dict
+) -> ResponseReturnValue:
+    """
+    By default, if a user_lookup callback is defined and the callback
+    function returns None, we return a general error message with a 401
+    status code
+    """
+    identity = jwt_data[config.identity_claim_key]
+    result = {config.error_msg_key: f"Error loading the user {identity}"}
+    return jsonify(result), HTTPStatus.UNAUTHORIZED
+
+
+def default_token_verification_callback(_jwt_header: dict, _jwt_data: dict) -> bool:
+    """
+    By default, we do not do any verification of the user claims.
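+
+    An application can replace this check with, e.g., a claims requirement (a
+    hypothetical sketch; ``jwt`` is an assumed ``JWTManager`` instance)::
+
+        @jwt.token_verification_loader
+        def verify_expected_claims(jwt_header, jwt_payload):
+            return "role" in jwt_payload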
+ """ + return True + + +def default_token_verification_failed_callback( + _jwt_header: dict, _jwt_data: dict +) -> ResponseReturnValue: + """ + By default, if the user claims verification failed, we return a generic + error message with a 400 status code + """ + return ( + jsonify({config.error_msg_key: "User claims verification failed"}), + HTTPStatus.BAD_REQUEST, + ) + + +def default_decode_key_callback(jwt_header: dict, jwt_data: dict) -> str: + """ + By default, the decode key specified via the JWT_SECRET_KEY or + JWT_PUBLIC_KEY settings will be used to decode all tokens + """ + return config.decode_key + + +def default_encode_key_callback(identity: Any) -> str: + """ + By default, the encode key specified via the JWT_SECRET_KEY or + JWT_PRIVATE_KEY settings will be used to encode all tokens + """ + return config.encode_key diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/exceptions.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/exceptions.py new file mode 100644 index 000000000..2ad414a85 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/exceptions.py @@ -0,0 +1,102 @@ +class JWTExtendedException(Exception): + """ + Base except which all flask_jwt_extended errors extend + """ + + pass + + +class JWTDecodeError(JWTExtendedException): + """ + An error decoding a JWT + """ + + pass + + +class InvalidHeaderError(JWTExtendedException): + """ + An error getting header information from a request + """ + + pass + + +class InvalidQueryParamError(JWTExtendedException): + """ + An error when a query string param is not in the correct format + """ + + pass + + +class NoAuthorizationError(JWTExtendedException): + """ + An error raised when no authorization token was found in a protected endpoint + """ + + pass + + +class CSRFError(JWTExtendedException): + """ + An error with CSRF protection + """ + + pass + + +class WrongTokenError(JWTExtendedException): + """ + Error raised when attempting to use a refresh token to access an endpoint + or vice versa + """ + + pass + + +class RevokedTokenError(JWTExtendedException): + """ + Error raised when a revoked token attempt to access a protected endpoint + """ + + def __init__(self, jwt_header: dict, jwt_data: dict) -> None: + super().__init__("Token has been revoked") + self.jwt_header = jwt_header + self.jwt_data = jwt_data + + +class FreshTokenRequired(JWTExtendedException): + """ + Error raised when a valid, non-fresh JWT attempt to access an endpoint + protected by fresh_jwt_required + """ + + def __init__(self, message, jwt_header: dict, jwt_data: dict) -> None: + super().__init__(message) + self.jwt_header = jwt_header + self.jwt_data = jwt_data + + +class UserLookupError(JWTExtendedException): + """ + Error raised when a user_lookup callback function returns None, indicating + that it cannot or will not load a user for the given identity. 
+ """ + + def __init__(self, message, jwt_header: dict, jwt_data: dict) -> None: + super().__init__(message) + self.jwt_header = jwt_header + self.jwt_data = jwt_data + + +class UserClaimsVerificationError(JWTExtendedException): + """ + Error raised when the claims_verification_callback function returns False, + indicating that the expected user claims are invalid + """ + + def __init__(self, message, jwt_header: dict, jwt_data: dict) -> None: + super().__init__(message) + self.jwt_header = jwt_header + self.jwt_data = jwt_data diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/internal_utils.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/internal_utils.py new file mode 100644 index 000000000..dfeaa9ed3 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/internal_utils.py @@ -0,0 +1,98 @@ +import json +from typing import Any +from typing import Type +from typing import TYPE_CHECKING + +from flask import current_app +from flask import Flask + +from flask_jwt_extended.exceptions import RevokedTokenError +from flask_jwt_extended.exceptions import UserClaimsVerificationError +from flask_jwt_extended.exceptions import WrongTokenError + +try: + from flask.json.provider import DefaultJSONProvider + + HAS_JSON_PROVIDER = True +except ModuleNotFoundError: # pragma: no cover + # The flask.json.provider module was added in Flask 2.2. + # Further details are handled in get_json_encoder. + HAS_JSON_PROVIDER = False + + +if TYPE_CHECKING: # pragma: no cover + from flask_jwt_extended import JWTManager + + +def get_jwt_manager() -> "JWTManager": + try: + return current_app.extensions["flask-jwt-extended"] + except KeyError: # pragma: no cover + raise RuntimeError( + "You must initialize a JWTManager with this flask " + "application before using this method" + ) from None + + +def has_user_lookup() -> bool: + jwt_manager = get_jwt_manager() + return jwt_manager._user_lookup_callback is not None + + +def user_lookup(*args, **kwargs) -> Any: + jwt_manager = get_jwt_manager() + return jwt_manager._user_lookup_callback and jwt_manager._user_lookup_callback( + *args, **kwargs + ) + + +def verify_token_type(decoded_token: dict, refresh: bool) -> None: + if not refresh and decoded_token["type"] == "refresh": + raise WrongTokenError("Only non-refresh tokens are allowed") + elif refresh and decoded_token["type"] != "refresh": + raise WrongTokenError("Only refresh tokens are allowed") + + +def verify_token_not_blocklisted(jwt_header: dict, jwt_data: dict) -> None: + jwt_manager = get_jwt_manager() + if jwt_manager._token_in_blocklist_callback(jwt_header, jwt_data): + raise RevokedTokenError(jwt_header, jwt_data) + + +def custom_verification_for_token(jwt_header: dict, jwt_data: dict) -> None: + jwt_manager = get_jwt_manager() + if not jwt_manager._token_verification_callback(jwt_header, jwt_data): + error_msg = "User claims verification failed" + raise UserClaimsVerificationError(error_msg, jwt_header, jwt_data) + + +class JSONEncoder(json.JSONEncoder): + """A JSON encoder which uses the app.json_provider_class for the default""" + + def default(self, o: Any) -> Any: + # If the registered JSON provider does not implement a default classmethod + # use the method defined by the DefaultJSONProvider + default = getattr( + current_app.json_provider_class, "default", DefaultJSONProvider.default + ) + return default(o) + + +def get_json_encoder(app: Flask) -> Type[json.JSONEncoder]: + """Get the JSON Encoder for the provided 
flask app
+
+    Starting with Flask version 2.2, the Flask application provides an
+    interface to register a custom JSON Encoder/Decoder under the json_provider_class.
+    As this interface is not compatible with the standard JSONEncoder, the `default`
+    method of the class is wrapped.
+
+    Lookup Order:
+      - app.json_encoder - For Flask < 2.2
+      - app.json_provider_class.default
+      - flask.json.provider.DefaultJSONProvider.default
+
+    """
+    if not HAS_JSON_PROVIDER:  # pragma: no cover
+        return app.json_encoder  # type: ignore
+
+    return JSONEncoder
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/jwt_manager.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/jwt_manager.py
new file mode 100644
index 000000000..7f0d967d2
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/jwt_manager.py
@@ -0,0 +1,564 @@
+import datetime
+from typing import Any
+from typing import Callable
+from typing import Optional
+
+import jwt
+from flask import Flask
+from jwt import DecodeError
+from jwt import ExpiredSignatureError
+from jwt import InvalidAudienceError
+from jwt import InvalidIssuerError
+from jwt import InvalidTokenError
+from jwt import MissingRequiredClaimError
+
+from flask_jwt_extended.config import config
+from flask_jwt_extended.default_callbacks import default_additional_claims_callback
+from flask_jwt_extended.default_callbacks import default_blocklist_callback
+from flask_jwt_extended.default_callbacks import default_decode_key_callback
+from flask_jwt_extended.default_callbacks import default_encode_key_callback
+from flask_jwt_extended.default_callbacks import default_expired_token_callback
+from flask_jwt_extended.default_callbacks import default_invalid_token_callback
+from flask_jwt_extended.default_callbacks import default_jwt_headers_callback
+from flask_jwt_extended.default_callbacks import default_needs_fresh_token_callback
+from flask_jwt_extended.default_callbacks import default_revoked_token_callback
+from flask_jwt_extended.default_callbacks import default_token_verification_callback
+from flask_jwt_extended.default_callbacks import (
+    default_token_verification_failed_callback,
+)
+from flask_jwt_extended.default_callbacks import default_unauthorized_callback
+from flask_jwt_extended.default_callbacks import default_user_identity_callback
+from flask_jwt_extended.default_callbacks import default_user_lookup_error_callback
+from flask_jwt_extended.exceptions import CSRFError
+from flask_jwt_extended.exceptions import FreshTokenRequired
+from flask_jwt_extended.exceptions import InvalidHeaderError
+from flask_jwt_extended.exceptions import InvalidQueryParamError
+from flask_jwt_extended.exceptions import JWTDecodeError
+from flask_jwt_extended.exceptions import NoAuthorizationError
+from flask_jwt_extended.exceptions import RevokedTokenError
+from flask_jwt_extended.exceptions import UserClaimsVerificationError
+from flask_jwt_extended.exceptions import UserLookupError
+from flask_jwt_extended.exceptions import WrongTokenError
+from flask_jwt_extended.tokens import _decode_jwt
+from flask_jwt_extended.tokens import _encode_jwt
+from flask_jwt_extended.typing import ExpiresDelta
+from flask_jwt_extended.typing import Fresh
+from flask_jwt_extended.utils import current_user_context_processor
+
+
+class JWTManager(object):
+    """
+    An object used to hold JWT settings and callback functions for the
+    Flask-JWT-Extended extension.
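+
+    A minimal sketch of direct initialization (hypothetical app setup)::
+
+        app = Flask(__name__)
+        app.config["JWT_SECRET_KEY"] = "change-me"  # assumed symmetric-key setup
+        jwt = JWTManager(app)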
+
+    Instances of :class:`JWTManager` are *not* bound to specific apps, so
+    you can create one in the main body of your code and then bind it
+    to your app in a factory function.
+    """
+
+    def __init__(
+        self, app: Optional[Flask] = None, add_context_processor: bool = False
+    ) -> None:
+        """
+        Create the JWTManager instance. You can either pass a Flask application
+        in directly here to register this extension with the flask app, or
+        call init_app after creating this object (in a factory pattern).
+
+        :param app:
+            The Flask Application object
+        :param add_context_processor:
+            Controls if `current_user` should be added to Flask's template
+            context (and thus be available for use in Jinja templates). Defaults
+            to ``False``.
+        """
+        # Register the default error handler callback methods. These can be
+        # overridden with the appropriate loader decorators
+        self._decode_key_callback = default_decode_key_callback
+        self._encode_key_callback = default_encode_key_callback
+        self._expired_token_callback = default_expired_token_callback
+        self._invalid_token_callback = default_invalid_token_callback
+        self._jwt_additional_header_callback = default_jwt_headers_callback
+        self._needs_fresh_token_callback = default_needs_fresh_token_callback
+        self._revoked_token_callback = default_revoked_token_callback
+        self._token_in_blocklist_callback = default_blocklist_callback
+        self._token_verification_callback = default_token_verification_callback
+        self._unauthorized_callback = default_unauthorized_callback
+        self._user_claims_callback = default_additional_claims_callback
+        self._user_identity_callback = default_user_identity_callback
+        self._user_lookup_callback: Optional[Callable] = None
+        self._user_lookup_error_callback = default_user_lookup_error_callback
+        self._token_verification_failed_callback = (
+            default_token_verification_failed_callback
+        )
+
+        # Register this extension with the flask app now (if it is provided)
+        if app is not None:
+            self.init_app(app, add_context_processor)
+
+    def init_app(self, app: Flask, add_context_processor: bool = False) -> None:
+        """
+        Register this extension with the flask app.
+
+        :param app:
+            The Flask Application object
+        :param add_context_processor:
+            Controls if `current_user` should be added to Flask's template
+            context (and thus be available for use in Jinja templates). Defaults
+            to ``False``.
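+
+        A typical factory-pattern sketch (hypothetical names)::
+
+            jwt = JWTManager()
+
+            def create_app():
+                app = Flask(__name__)
+                app.config["JWT_SECRET_KEY"] = "change-me"
+                jwt.init_app(app)
+                return app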
+ """ + # Save this so we can use it later in the extension + if not hasattr(app, "extensions"): # pragma: no cover + app.extensions = {} + app.extensions["flask-jwt-extended"] = self + + if add_context_processor: + app.context_processor(current_user_context_processor) + + # Set all the default configurations for this extension + self._set_default_configuration_options(app) + self._set_error_handler_callbacks(app) + + def _set_error_handler_callbacks(self, app: Flask) -> None: + @app.errorhandler(CSRFError) + def handle_csrf_error(e): + return self._unauthorized_callback(str(e)) + + @app.errorhandler(DecodeError) + def handle_decode_error(e): + return self._invalid_token_callback(str(e)) + + @app.errorhandler(ExpiredSignatureError) + def handle_expired_error(e): + return self._expired_token_callback(e.jwt_header, e.jwt_data) + + @app.errorhandler(FreshTokenRequired) + def handle_fresh_token_required(e): + return self._needs_fresh_token_callback(e.jwt_header, e.jwt_data) + + @app.errorhandler(MissingRequiredClaimError) + def handle_missing_required_claim_error(e): + return self._invalid_token_callback(str(e)) + + @app.errorhandler(InvalidAudienceError) + def handle_invalid_audience_error(e): + return self._invalid_token_callback(str(e)) + + @app.errorhandler(InvalidIssuerError) + def handle_invalid_issuer_error(e): + return self._invalid_token_callback(str(e)) + + @app.errorhandler(InvalidHeaderError) + def handle_invalid_header_error(e): + return self._invalid_token_callback(str(e)) + + @app.errorhandler(InvalidTokenError) + def handle_invalid_token_error(e): + return self._invalid_token_callback(str(e)) + + @app.errorhandler(JWTDecodeError) + def handle_jwt_decode_error(e): + return self._invalid_token_callback(str(e)) + + @app.errorhandler(NoAuthorizationError) + def handle_auth_error(e): + return self._unauthorized_callback(str(e)) + + @app.errorhandler(InvalidQueryParamError) + def handle_invalid_query_param_error(e): + return self._invalid_token_callback(str(e)) + + @app.errorhandler(RevokedTokenError) + def handle_revoked_token_error(e): + return self._revoked_token_callback(e.jwt_header, e.jwt_data) + + @app.errorhandler(UserClaimsVerificationError) + def handle_failed_token_verification(e): + return self._token_verification_failed_callback(e.jwt_header, e.jwt_data) + + @app.errorhandler(UserLookupError) + def handler_user_lookup_error(e): + return self._user_lookup_error_callback(e.jwt_header, e.jwt_data) + + @app.errorhandler(WrongTokenError) + def handle_wrong_token_error(e): + return self._invalid_token_callback(str(e)) + + @staticmethod + def _set_default_configuration_options(app: Flask) -> None: + app.config.setdefault( + "JWT_ACCESS_TOKEN_EXPIRES", datetime.timedelta(minutes=15) + ) + app.config.setdefault("JWT_ACCESS_COOKIE_NAME", "access_token_cookie") + app.config.setdefault("JWT_ACCESS_COOKIE_PATH", "/") + app.config.setdefault("JWT_ACCESS_CSRF_COOKIE_NAME", "csrf_access_token") + app.config.setdefault("JWT_ACCESS_CSRF_COOKIE_PATH", "/") + app.config.setdefault("JWT_ACCESS_CSRF_FIELD_NAME", "csrf_token") + app.config.setdefault("JWT_ACCESS_CSRF_HEADER_NAME", "X-CSRF-TOKEN") + app.config.setdefault("JWT_ALGORITHM", "HS256") + app.config.setdefault("JWT_COOKIE_CSRF_PROTECT", True) + app.config.setdefault("JWT_COOKIE_DOMAIN", None) + app.config.setdefault("JWT_COOKIE_SAMESITE", None) + app.config.setdefault("JWT_COOKIE_SECURE", False) + app.config.setdefault("JWT_CSRF_CHECK_FORM", False) + app.config.setdefault("JWT_CSRF_IN_COOKIES", True) + 
app.config.setdefault("JWT_CSRF_METHODS", ["POST", "PUT", "PATCH", "DELETE"]) + app.config.setdefault("JWT_DECODE_ALGORITHMS", None) + app.config.setdefault("JWT_DECODE_AUDIENCE", None) + app.config.setdefault("JWT_DECODE_ISSUER", None) + app.config.setdefault("JWT_DECODE_LEEWAY", 0) + app.config.setdefault("JWT_ENCODE_AUDIENCE", None) + app.config.setdefault("JWT_ENCODE_ISSUER", None) + app.config.setdefault("JWT_ERROR_MESSAGE_KEY", "msg") + app.config.setdefault("JWT_HEADER_NAME", "Authorization") + app.config.setdefault("JWT_HEADER_TYPE", "Bearer") + app.config.setdefault("JWT_IDENTITY_CLAIM", "sub") + app.config.setdefault("JWT_JSON_KEY", "access_token") + app.config.setdefault("JWT_PRIVATE_KEY", None) + app.config.setdefault("JWT_PUBLIC_KEY", None) + app.config.setdefault("JWT_QUERY_STRING_NAME", "jwt") + app.config.setdefault("JWT_QUERY_STRING_VALUE_PREFIX", "") + app.config.setdefault("JWT_REFRESH_COOKIE_NAME", "refresh_token_cookie") + app.config.setdefault("JWT_REFRESH_COOKIE_PATH", "/") + app.config.setdefault("JWT_REFRESH_CSRF_COOKIE_NAME", "csrf_refresh_token") + app.config.setdefault("JWT_REFRESH_CSRF_COOKIE_PATH", "/") + app.config.setdefault("JWT_REFRESH_CSRF_FIELD_NAME", "csrf_token") + app.config.setdefault("JWT_REFRESH_CSRF_HEADER_NAME", "X-CSRF-TOKEN") + app.config.setdefault("JWT_REFRESH_JSON_KEY", "refresh_token") + app.config.setdefault("JWT_REFRESH_TOKEN_EXPIRES", datetime.timedelta(days=30)) + app.config.setdefault("JWT_SECRET_KEY", None) + app.config.setdefault("JWT_SESSION_COOKIE", True) + app.config.setdefault("JWT_TOKEN_LOCATION", ("headers",)) + app.config.setdefault("JWT_VERIFY_SUB", True) + app.config.setdefault("JWT_ENCODE_NBF", True) + + def additional_claims_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function used to add additional claims + when creating a JWT. The claims returned by this function will be merged + with any claims passed in via the ``additional_claims`` argument to + :func:`~flask_jwt_extended.create_access_token` or + :func:`~flask_jwt_extended.create_refresh_token`. + + The decorated function must take **one** argument. + + The argument is the identity that was used when creating a JWT. + + The decorated function must return a dictionary of claims to add to the JWT. + """ + self._user_claims_callback = callback + return callback + + def additional_headers_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function used to add additional headers + when creating a JWT. The headers returned by this function will be merged + with any headers passed in via the ``additional_headers`` argument to + :func:`~flask_jwt_extended.create_access_token` or + :func:`~flask_jwt_extended.create_refresh_token`. + + The decorated function must take **one** argument. + + The argument is the identity that was used when creating a JWT. + + The decorated function must return a dictionary of headers to add to the JWT. + """ + self._jwt_additional_header_callback = callback + return callback + + def decode_key_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function for dynamically setting the JWT + decode key based on the **UNVERIFIED** contents of the token. Think + carefully before using this functionality, in most cases you probably + don't need it. + + The decorated function must take **two** arguments. + + The first argument is a dictionary containing the header data of the + unverified JWT. 
+ + The second argument is a dictionary containing the payload data of the + unverified JWT. + + The decorated function must return a *string* that is used to decode and + verify the token. + """ + self._decode_key_callback = callback + return callback + + def encode_key_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function for dynamically setting the JWT + encode key based on the tokens identity. Think carefully before using this + functionality, in most cases you probably don't need it. + + The decorated function must take **one** argument. + + The argument is the identity used to create this JWT. + + The decorated function must return a *string* which is the secrete key used to + encode the JWT. + """ + self._encode_key_callback = callback + return callback + + def expired_token_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function for returning a custom + response when an expired JWT is encountered. + + The decorated function must take **two** arguments. + + The first argument is a dictionary containing the header data of the JWT. + + The second argument is a dictionary containing the payload data of the JWT. + + The decorated function must return a Flask Response. + """ + self._expired_token_callback = callback + return callback + + def invalid_token_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function for returning a custom + response when an invalid JWT is encountered. + + This decorator sets the callback function that will be used if an + invalid JWT attempts to access a protected endpoint. + + The decorated function must take **one** argument. + + The argument is a string which contains the reason why a token is invalid. + + The decorated function must return a Flask Response. + """ + self._invalid_token_callback = callback + return callback + + def needs_fresh_token_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function for returning a custom + response when a valid and non-fresh token is used on an endpoint + that is marked as ``fresh=True``. + + The decorated function must take **two** arguments. + + The first argument is a dictionary containing the header data of the JWT. + + The second argument is a dictionary containing the payload data of the JWT. + + The decorated function must return a Flask Response. + """ + self._needs_fresh_token_callback = callback + return callback + + def revoked_token_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function for returning a custom + response when a revoked token is encountered. + + The decorated function must take **two** arguments. + + The first argument is a dictionary containing the header data of the JWT. + + The second argument is a dictionary containing the payload data of the JWT. + + The decorated function must return a Flask Response. + """ + self._revoked_token_callback = callback + return callback + + def token_in_blocklist_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function used to check if a JWT has + been revoked. + + The decorated function must take **two** arguments. + + The first argument is a dictionary containing the header data of the JWT. + + The second argument is a dictionary containing the payload data of the JWT. + + The decorated function must be return ``True`` if the token has been + revoked, ``False`` otherwise. 
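+
+        For example, a minimal in-memory sketch (hypothetical; a production
+        app would typically check a database or cache instead)::
+
+            BLOCKLIST = set()
+
+            @jwt.token_in_blocklist_loader
+            def check_if_token_revoked(jwt_header, jwt_payload):
+                return jwt_payload["jti"] in BLOCKLIST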
+ """ + self._token_in_blocklist_callback = callback + return callback + + def token_verification_failed_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function used to return a custom + response when the claims verification check fails. + + The decorated function must take **two** arguments. + + The first argument is a dictionary containing the header data of the JWT. + + The second argument is a dictionary containing the payload data of the JWT. + + The decorated function must return a Flask Response. + """ + self._token_verification_failed_callback = callback + return callback + + def token_verification_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function used for custom verification + of a valid JWT. + + The decorated function must take **two** arguments. + + The first argument is a dictionary containing the header data of the JWT. + + The second argument is a dictionary containing the payload data of the JWT. + + The decorated function must return ``True`` if the token is valid, or + ``False`` otherwise. + """ + self._token_verification_callback = callback + return callback + + def unauthorized_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function used to return a custom + response when no JWT is present. + + The decorated function must take **one** argument. + + The argument is a string that explains why the JWT could not be found. + + The decorated function must return a Flask Response. + """ + self._unauthorized_callback = callback + return callback + + def user_identity_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function used to convert an identity to + a string when creating JWTs. This is useful for using objects (such as + SQLAlchemy instances) as the identity when creating your tokens. + + The decorated function must take **one** argument. + + The argument is the identity that was used when creating a JWT. + + The decorated function must return a string. + """ + self._user_identity_callback = callback + return callback + + def user_lookup_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function used to convert a JWT into + a python object that can be used in a protected endpoint. This is useful + for automatically loading a SQLAlchemy instance based on the contents + of the JWT. + + The object returned from this function can be accessed via + :attr:`~flask_jwt_extended.current_user` or + :meth:`~flask_jwt_extended.get_current_user` + + The decorated function must take **two** arguments. + + The first argument is a dictionary containing the header data of the JWT. + + The second argument is a dictionary containing the payload data of the JWT. + + The decorated function can return any python object, which can then be + accessed in a protected endpoint. If an object cannot be loaded, for + example if a user has been deleted from your database, ``None`` must be + returned to indicate that an error occurred loading the user. + """ + self._user_lookup_callback = callback + return callback + + def user_lookup_error_loader(self, callback: Callable) -> Callable: + """ + This decorator sets the callback function used to return a custom + response when loading a user via + :meth:`~flask_jwt_extended.JWTManager.user_lookup_loader` fails. + + The decorated function must take **two** arguments. + + The first argument is a dictionary containing the header data of the JWT. 
+ + The second argument is a dictionary containing the payload data of the JWT. + + The decorated function must return a Flask Response. + """ + self._user_lookup_error_callback = callback + return callback + + def _encode_jwt_from_config( + self, + identity: Any, + token_type: str, + claims=None, + fresh: Fresh = False, + expires_delta: Optional[ExpiresDelta] = None, + headers=None, + ) -> str: + header_overrides = self._jwt_additional_header_callback(identity) + if headers is not None: + header_overrides.update(headers) + + claim_overrides = self._user_claims_callback(identity) + if claims is not None: + claim_overrides.update(claims) + + if expires_delta is None: + if token_type == "access": + expires_delta = config.access_expires + else: + expires_delta = config.refresh_expires + + return _encode_jwt( + algorithm=config.algorithm, + audience=config.encode_audience, + claim_overrides=claim_overrides, + csrf=config.cookie_csrf_protect, + expires_delta=expires_delta, + fresh=fresh, + header_overrides=header_overrides, + identity=self._user_identity_callback(identity), + identity_claim_key=config.identity_claim_key, + issuer=config.encode_issuer, + json_encoder=config.json_encoder, + secret=self._encode_key_callback(identity), + token_type=token_type, + nbf=config.encode_nbf, + ) + + def _decode_jwt_from_config( + self, encoded_token: str, csrf_value=None, allow_expired: bool = False + ) -> dict: + unverified_claims = jwt.decode( + encoded_token, + algorithms=config.decode_algorithms, + options={"verify_signature": False}, + ) + unverified_headers = jwt.get_unverified_header(encoded_token) + secret = self._decode_key_callback(unverified_headers, unverified_claims) + + kwargs = { + "algorithms": config.decode_algorithms, + "audience": config.decode_audience, + "csrf_value": csrf_value, + "encoded_token": encoded_token, + "identity_claim_key": config.identity_claim_key, + "issuer": config.decode_issuer, + "leeway": config.leeway, + "secret": secret, + "verify_aud": config.decode_audience is not None, + "verify_sub": config.verify_sub, + } + + try: + return _decode_jwt(**kwargs, allow_expired=allow_expired) + except ExpiredSignatureError as e: + # TODO: If we ever do another breaking change, don't raise this pyjwt + # error directly, instead raise a custom error of ours from this + # error. 
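+            # Attach the unverified header and the expiry-exempt payload so the
+            # expired_token_loader callback can inspect the token when building
+            # its response.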
+            e.jwt_header = unverified_headers  # type: ignore
+            e.jwt_data = _decode_jwt(**kwargs, allow_expired=True)  # type: ignore
+            raise
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/py.typed b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/py.typed
new file mode 100644
index 000000000..e69de29bb
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/tokens.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/tokens.py
new file mode 100644
index 000000000..836101c33
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/tokens.py
@@ -0,0 +1,125 @@
+import uuid
+from datetime import datetime
+from datetime import timedelta
+from datetime import timezone
+from hmac import compare_digest
+from json import JSONEncoder
+from typing import Any
+from typing import Iterable
+from typing import List
+from typing import Type
+from typing import Union
+
+import jwt
+
+from flask_jwt_extended.exceptions import CSRFError
+from flask_jwt_extended.exceptions import JWTDecodeError
+from flask_jwt_extended.typing import ExpiresDelta
+from flask_jwt_extended.typing import Fresh
+
+
+def _encode_jwt(
+    algorithm: str,
+    audience: Union[str, Iterable[str]],
+    claim_overrides: dict,
+    csrf: bool,
+    expires_delta: ExpiresDelta,
+    fresh: Fresh,
+    header_overrides: dict,
+    identity: Any,
+    identity_claim_key: str,
+    issuer: str,
+    json_encoder: Type[JSONEncoder],
+    secret: str,
+    token_type: str,
+    nbf: bool,
+) -> str:
+    now = datetime.now(timezone.utc)
+
+    if isinstance(fresh, timedelta):
+        fresh = datetime.timestamp(now + fresh)
+
+    token_data = {
+        "fresh": fresh,
+        "iat": now,
+        "jti": str(uuid.uuid4()),
+        "type": token_type,
+        identity_claim_key: identity,
+    }
+
+    if nbf:
+        token_data["nbf"] = now
+
+    if csrf:
+        token_data["csrf"] = str(uuid.uuid4())
+
+    if audience:
+        token_data["aud"] = audience
+
+    if issuer:
+        token_data["iss"] = issuer
+
+    if expires_delta:
+        token_data["exp"] = now + expires_delta
+
+    if claim_overrides:
+        token_data.update(claim_overrides)
+
+    return jwt.encode(
+        token_data,
+        secret,
+        algorithm,
+        json_encoder=json_encoder,  # type: ignore
+        headers=header_overrides,
+    )
+
+
+def _decode_jwt(
+    algorithms: List,
+    allow_expired: bool,
+    audience: Union[str, Iterable[str]],
+    csrf_value: str,
+    encoded_token: str,
+    identity_claim_key: str,
+    issuer: str,
+    leeway: int,
+    secret: str,
+    verify_aud: bool,
+    verify_sub: bool,
+) -> dict:
+    options = {"verify_aud": verify_aud, "verify_sub": verify_sub}
+    if allow_expired:
+        options["verify_exp"] = False
+
+    # This call verifies the signature, iat, and nbf claims
+    # This optionally verifies the exp and aud claims if enabled
+    decoded_token = jwt.decode(
+        encoded_token,
+        secret,
+        algorithms=algorithms,
+        audience=audience,
+        issuer=issuer,
+        leeway=leeway,
+        options=options,
+    )
+
+    # Make sure that any custom claims we expect in the token are present
+    if identity_claim_key not in decoded_token:
+        raise JWTDecodeError("Missing claim: {}".format(identity_claim_key))
+
+    if "type" not in decoded_token:
+        decoded_token["type"] = "access"
+
+    if "fresh" not in decoded_token:
+        decoded_token["fresh"] = False
+
+    if "jti" not in decoded_token:
+        decoded_token["jti"] = None
+
+    if csrf_value:
+        if "csrf" not in decoded_token:
+            raise JWTDecodeError("Missing claim: csrf")
+        if not compare_digest(decoded_token["csrf"], csrf_value):
+            raise CSRFError("CSRF double submit tokens do not match")
+
+    return
decoded_token diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/typing.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/typing.py new file mode 100644 index 000000000..cc36d7bf8 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/typing.py @@ -0,0 +1,6 @@ +from datetime import timedelta +from typing import Literal +from typing import Union + +ExpiresDelta = Union[Literal[False], timedelta] +Fresh = Union[bool, float, timedelta] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/utils.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/utils.py new file mode 100644 index 000000000..d42f582a9 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/utils.py @@ -0,0 +1,461 @@ +from typing import Any +from typing import Optional + +import jwt +from flask import g +from flask import Response +from werkzeug.local import LocalProxy + +from flask_jwt_extended.config import config +from flask_jwt_extended.internal_utils import get_jwt_manager +from flask_jwt_extended.typing import ExpiresDelta +from flask_jwt_extended.typing import Fresh + +# Proxy to access the current user +current_user: Any = LocalProxy(lambda: get_current_user()) + + +def get_jwt() -> dict: + """ + In a protected endpoint, this will return the python dictionary which has + the payload of the JWT that is accessing the endpoint. If no JWT is present + due to ``jwt_required(optional=True)``, an empty dictionary is returned. + + :return: + The payload (claims) of the JWT in the current request + """ + decoded_jwt = g.get("_jwt_extended_jwt", None) + if decoded_jwt is None: + raise RuntimeError( + "You must call `@jwt_required()` or `verify_jwt_in_request()` " + "before using this method" + ) + return decoded_jwt + + +def get_jwt_header() -> dict: + """ + In a protected endpoint, this will return the python dictionary which has + the header of the JWT that is accessing the endpoint. If no JWT is present + due to ``jwt_required(optional=True)``, an empty dictionary is returned. + + :return: + The headers of the JWT in the current request + """ + decoded_header = g.get("_jwt_extended_jwt_header", None) + if decoded_header is None: + raise RuntimeError( + "You must call `@jwt_required()` or `verify_jwt_in_request()` " + "before using this method" + ) + return decoded_header + + +def get_jwt_identity() -> Any: + """ + In a protected endpoint, this will return the identity of the JWT that is + accessing the endpoint. If no JWT is present due to + ``jwt_required(optional=True)``, ``None`` is returned. + + :return: + The identity of the JWT in the current request + """ + return get_jwt().get(config.identity_claim_key, None) + + +def get_jwt_request_location() -> Optional[str]: + """ + In a protected endpoint, this will return the "location" at which the JWT + that is accessing the endpoint was found--e.g., "cookies", "query-string", + "headers", or "json". If no JWT is present due to ``jwt_required(optional=True)``, + None is returned. + + :return: + The location of the JWT in the current request; e.g., "cookies", + "query-string", "headers", or "json" + """ + return g.get("_jwt_extended_jwt_location", None) + + +def get_current_user() -> Any: + """ + In a protected endpoint, this will return the user object for the JWT that + is accessing the endpoint. + + This is only usable if :meth:`~flask_jwt_extended.JWTManager.user_lookup_loader` + is configured. 
If the user loader callback is not being used, this will + raise an error. + + If no JWT is present due to ``jwt_required(optional=True)``, ``None`` is returned. + + :return: + The current user object for the JWT in the current request + """ + get_jwt() # Raise an error if not in a decorated context + jwt_user_dict = g.get("_jwt_extended_jwt_user", None) + if jwt_user_dict is None: + raise RuntimeError( + "You must provide a `@jwt.user_lookup_loader` callback to use " + "this method" + ) + return jwt_user_dict["loaded_user"] + + +def decode_token( + encoded_token: str, csrf_value: Optional[str] = None, allow_expired: bool = False +) -> dict: + """ + Returns the decoded token (python dict) from an encoded JWT. This does all + the checks to ensure that the decoded token is valid before returning it. + + This will not fire the user loader callbacks, save the token for access + in protected endpoints, check if a token is revoked, etc. This is purely + used to ensure that a JWT is valid. + + :param encoded_token: + The encoded JWT to decode. + + :param csrf_value: + Expected CSRF double submit value (optional). + + :param allow_expired: + If ``True``, do not raise an error if the JWT is expired. Defaults to ``False`` + + :return: + Dictionary containing the payload of the decoded JWT. + """ + jwt_manager = get_jwt_manager() + return jwt_manager._decode_jwt_from_config(encoded_token, csrf_value, allow_expired) + + +def create_access_token( + identity: Any, + fresh: Fresh = False, + expires_delta: Optional[ExpiresDelta] = None, + additional_claims=None, + additional_headers=None, +): + """ + Create a new access token. + + :param identity: + The identity of this token. This must either be a string, or you must have + defined :meth:`~flask_jwt_extended.JWTManager.user_identity_loader` in order + to convert the object you passed in into a string. + + :param fresh: + If this token should be marked as fresh, and can thus access endpoints + protected with ``@jwt_required(fresh=True)``. Defaults to ``False``. + + This value can also be a ``datetime.timedelta``, which indicates + how long this token will be considered fresh. + + :param expires_delta: + A ``datetime.timedelta`` for how long this token should last before it + expires. Set to False to disable expiration. If this is None, it will use + the ``JWT_ACCESS_TOKEN_EXPIRES`` config value (see :ref:`Configuration Options`) + + :param additional_claims: + Optional. A hash of claims to include in the access token. These claims are + merged into the default claims (exp, iat, etc) and claims returned from the + :meth:`~flask_jwt_extended.JWTManager.additional_claims_loader` callback. + On conflict, these claims take precedence. + + :param additional_headers: + Optional. A hash of headers to include in the access token. These headers + are merged into the default headers (alg, typ) and headers returned from + the :meth:`~flask_jwt_extended.JWTManager.additional_headers_loader` + callback. On conflict, these headers take precedence. + + :return: + An encoded access token + """ + jwt_manager = get_jwt_manager() + return jwt_manager._encode_jwt_from_config( + claims=additional_claims, + expires_delta=expires_delta, + fresh=fresh, + headers=additional_headers, + identity=identity, + token_type="access", + ) + + +def create_refresh_token( + identity: Any, + expires_delta: Optional[ExpiresDelta] = None, + additional_claims=None, + additional_headers=None, +): + """ + Create a new refresh token. + + :param identity: + The identity of this token.
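A sketch of issuing both token types at login, using the two factories documented above (the route and credential handling are illustrative; assumes the `app` and `JWTManager` wiring from the earlier sketch):

```python
from datetime import timedelta

from flask import jsonify, request
from flask_jwt_extended import create_access_token, create_refresh_token


@app.route("/login", methods=["POST"])
def login():
    username = request.json.get("username")  # real code would also verify a password
    access = create_access_token(
        identity=username,
        fresh=True,                           # enables @jwt_required(fresh=True) endpoints
        expires_delta=timedelta(minutes=15),  # overrides JWT_ACCESS_TOKEN_EXPIRES
        additional_claims={"role": "user"},   # merged into the default claims
    )
    refresh = create_refresh_token(identity=username)
    return jsonify(access_token=access, refresh_token=refresh)
```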
This must either be a string, or you must have + defined :meth:`~flask_jwt_extended.JWTManager.user_identity_loader` in order + to convert the object you passed in into a string. + + :param expires_delta: + A ``datetime.timedelta`` for how long this token should last before it expires. + Set to False to disable expiration. If this is None, it will use the + ``JWT_REFRESH_TOKEN_EXPIRES`` config value (see :ref:`Configuration Options`) + + :param additional_claims: + Optional. A hash of claims to include in the refresh token. These claims are + merged into the default claims (exp, iat, etc) and claims returned from the + :meth:`~flask_jwt_extended.JWTManager.additional_claims_loader` callback. + On conflict, these claims take precedence. + + :param additional_headers: + Optional. A hash of headers to include in the refresh token. These headers + are merged into the default headers (alg, typ) and headers returned from the + :meth:`~flask_jwt_extended.JWTManager.additional_headers_loader` callback. + On conflict, these headers take precedence. + + :return: + An encoded refresh token + """ + jwt_manager = get_jwt_manager() + return jwt_manager._encode_jwt_from_config( + claims=additional_claims, + expires_delta=expires_delta, + fresh=False, + headers=additional_headers, + identity=identity, + token_type="refresh", + ) + + +def get_unverified_jwt_headers(encoded_token: str) -> dict: + """ + Returns the headers of an encoded JWT without verifying the signature of the JWT. + + :param encoded_token: + The encoded JWT to get the header from. + + :return: + JWT header parameters as python dict() + """ + return jwt.get_unverified_header(encoded_token) + + +def get_jti(encoded_token: str) -> Optional[str]: + """ + Returns the JTI (unique identifier) of an encoded JWT + + :param encoded_token: + The encoded JWT to get the JTI from. + + :return: + The JTI (unique identifier) of a JWT, if it is present. + """ + return decode_token(encoded_token).get("jti") + + +def get_csrf_token(encoded_token: str) -> str: + """ + Returns the CSRF double submit token from an encoded JWT. + + :param encoded_token: + The encoded JWT + + :return: + The CSRF double submit token (string) + """ + token = decode_token(encoded_token) + return token["csrf"] + + +def set_access_cookies( + response: Response, encoded_access_token: str, max_age=None, domain=None +) -> None: + """ + Modify a Flask Response to set a cookie containing the access JWT. + Also sets the corresponding CSRF cookies if ``JWT_CSRF_IN_COOKIES`` is ``True`` + (see :ref:`Configuration Options`) + + :param response: + A Flask Response object. + + :param encoded_access_token: + The encoded access token to set in the cookies. + + :param max_age: + The max age of the cookie. If this is None, it will use the + ``JWT_SESSION_COOKIE`` option (see :ref:`Configuration Options`). Otherwise, + it will use this as the cookie's ``max-age`` and the JWT_SESSION_COOKIE option + will be ignored. Values should be the number of seconds (as an integer). + + :param domain: + The domain of the cookie. If this is None, it will use the + ``JWT_COOKIE_DOMAIN`` option (see :ref:`Configuration Options`). Otherwise, + it will use this as the cookie's ``domain`` and the JWT_COOKIE_DOMAIN option + will be ignored.
+ """ + response.set_cookie( + config.access_cookie_name, + value=encoded_access_token, + max_age=max_age or config.cookie_max_age, + secure=config.cookie_secure, + httponly=True, + domain=domain or config.cookie_domain, + path=config.access_cookie_path, + samesite=config.cookie_samesite, + ) + + if config.cookie_csrf_protect and config.csrf_in_cookies: + response.set_cookie( + config.access_csrf_cookie_name, + value=get_csrf_token(encoded_access_token), + max_age=max_age or config.cookie_max_age, + secure=config.cookie_secure, + httponly=False, + domain=domain or config.cookie_domain, + path=config.access_csrf_cookie_path, + samesite=config.cookie_samesite, + ) + + +def set_refresh_cookies( + response: Response, + encoded_refresh_token: str, + max_age: Optional[int] = None, + domain: Optional[str] = None, +) -> None: + """ + Modifiy a Flask Response to set a cookie containing the refresh JWT. + Also sets the corresponding CSRF cookies if ``JWT_CSRF_IN_COOKIES`` is ``True`` + (see :ref:`Configuration Options`) + + :param response: + A Flask Response object. + + :param encoded_refresh_token: + The encoded refresh token to set in the cookies. + + :param max_age: + The max age of the cookie. If this is None, it will use the + ``JWT_SESSION_COOKIE`` option (see :ref:`Configuration Options`). Otherwise, + it will use this as the cookies ``max-age`` and the JWT_SESSION_COOKIE option + will be ignored. Values should be the number of seconds (as an integer). + + :param domain: + The domain of the cookie. If this is None, it will use the + ``JWT_COOKIE_DOMAIN`` option (see :ref:`Configuration Options`). Otherwise, + it will use this as the cookies ``domain`` and the JWT_COOKIE_DOMAIN option + will be ignored. + """ + response.set_cookie( + config.refresh_cookie_name, + value=encoded_refresh_token, + max_age=max_age or config.cookie_max_age, + secure=config.cookie_secure, + httponly=True, + domain=domain or config.cookie_domain, + path=config.refresh_cookie_path, + samesite=config.cookie_samesite, + ) + + if config.cookie_csrf_protect and config.csrf_in_cookies: + response.set_cookie( + config.refresh_csrf_cookie_name, + value=get_csrf_token(encoded_refresh_token), + max_age=max_age or config.cookie_max_age, + secure=config.cookie_secure, + httponly=False, + domain=domain or config.cookie_domain, + path=config.refresh_csrf_cookie_path, + samesite=config.cookie_samesite, + ) + + +def unset_jwt_cookies(response: Response, domain: Optional[str] = None) -> None: + """ + Modifiy a Flask Response to delete the cookies containing access or refresh + JWTs. Also deletes the corresponding CSRF cookies if applicable. + + :param response: + A Flask Response object + """ + unset_access_cookies(response, domain) + unset_refresh_cookies(response, domain) + + +def unset_access_cookies(response: Response, domain: Optional[str] = None) -> None: + """ + Modifiy a Flask Response to delete the cookie containing an access JWT. + Also deletes the corresponding CSRF cookie if applicable. + + :param response: + A Flask Response object + + :param domain: + The domain of the cookie. If this is None, it will use the + ``JWT_COOKIE_DOMAIN`` option (see :ref:`Configuration Options`). Otherwise, + it will use this as the cookies ``domain`` and the JWT_COOKIE_DOMAIN option + will be ignored. 
+ """ + response.set_cookie( + config.access_cookie_name, + value="", + expires=0, + secure=config.cookie_secure, + httponly=True, + domain=domain or config.cookie_domain, + path=config.access_cookie_path, + samesite=config.cookie_samesite, + ) + + if config.cookie_csrf_protect and config.csrf_in_cookies: + response.set_cookie( + config.access_csrf_cookie_name, + value="", + expires=0, + secure=config.cookie_secure, + httponly=False, + domain=domain or config.cookie_domain, + path=config.access_csrf_cookie_path, + samesite=config.cookie_samesite, + ) + + +def unset_refresh_cookies(response: Response, domain: Optional[str] = None) -> None: + """ + Modifiy a Flask Response to delete the cookie containing a refresh JWT. + Also deletes the corresponding CSRF cookie if applicable. + + :param response: + A Flask Response object + + :param domain: + The domain of the cookie. If this is None, it will use the + ``JWT_COOKIE_DOMAIN`` option (see :ref:`Configuration Options`). Otherwise, + it will use this as the cookies ``domain`` and the JWT_COOKIE_DOMAIN option + will be ignored. + """ + response.set_cookie( + config.refresh_cookie_name, + value="", + expires=0, + secure=config.cookie_secure, + httponly=True, + domain=domain or config.cookie_domain, + path=config.refresh_cookie_path, + samesite=config.cookie_samesite, + ) + + if config.cookie_csrf_protect and config.csrf_in_cookies: + response.set_cookie( + config.refresh_csrf_cookie_name, + value="", + expires=0, + secure=config.cookie_secure, + httponly=False, + domain=domain or config.cookie_domain, + path=config.refresh_csrf_cookie_path, + samesite=config.cookie_samesite, + ) + + +def current_user_context_processor() -> Any: + return {"current_user": get_current_user()} diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/view_decorators.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/view_decorators.py new file mode 100644 index 000000000..407bea4de --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_jwt_extended/view_decorators.py @@ -0,0 +1,372 @@ +from datetime import datetime +from datetime import timezone +from functools import wraps +from re import split +from typing import Any +from typing import Optional +from typing import Sequence +from typing import Tuple +from typing import Union + +from flask import current_app +from flask import g +from flask import request +from werkzeug.exceptions import BadRequest + +from flask_jwt_extended.config import config +from flask_jwt_extended.exceptions import CSRFError +from flask_jwt_extended.exceptions import FreshTokenRequired +from flask_jwt_extended.exceptions import InvalidHeaderError +from flask_jwt_extended.exceptions import InvalidQueryParamError +from flask_jwt_extended.exceptions import NoAuthorizationError +from flask_jwt_extended.exceptions import UserLookupError +from flask_jwt_extended.internal_utils import custom_verification_for_token +from flask_jwt_extended.internal_utils import has_user_lookup +from flask_jwt_extended.internal_utils import user_lookup +from flask_jwt_extended.internal_utils import verify_token_not_blocklisted +from flask_jwt_extended.internal_utils import verify_token_type +from flask_jwt_extended.utils import decode_token +from flask_jwt_extended.utils import get_unverified_jwt_headers + +LocationType = Union[str, Sequence, None] + + +def _verify_token_is_fresh(jwt_header: dict, jwt_data: dict) -> None: + fresh = jwt_data["fresh"] + if isinstance(fresh, bool): + if not fresh: + 
raise FreshTokenRequired("Fresh token required", jwt_header, jwt_data) + else: + now = datetime.timestamp(datetime.now(timezone.utc)) + if fresh < now: + raise FreshTokenRequired("Fresh token required", jwt_header, jwt_data) + + +def verify_jwt_in_request( + optional: bool = False, + fresh: bool = False, + refresh: bool = False, + locations: Optional[LocationType] = None, + verify_type: bool = True, + skip_revocation_check: bool = False, +) -> Optional[Tuple[dict, dict]]: + """ + Verify that a valid JWT is present in the request, unless ``optional=True``, in + which case a missing JWT is also considered valid. + + :param optional: + If ``True``, do not raise an error if no JWT is present in the request. + Defaults to ``False``. + + :param fresh: + If ``True``, require a JWT marked as ``fresh`` in order to be verified. + Defaults to ``False``. + + :param refresh: + If ``True``, requires a refresh JWT to access this endpoint. If ``False``, + requires an access JWT to access this endpoint. Defaults to ``False`` + + :param locations: + A location or list of locations to look for the JWT in this request, for + example ``'headers'`` or ``['headers', 'cookies']``. Defaults to ``None`` + which indicates that JWTs will be looked for in the locations defined by the + ``JWT_TOKEN_LOCATION`` configuration option. + + :param verify_type: + If ``True``, the token type (access or refresh) will be checked according + to the ``refresh`` argument. If ``False``, type will not be checked and both + access and refresh tokens will be accepted. + + :param skip_revocation_check: + If ``True``, revocation status of the token will be *not* checked. If ``False``, + revocation status of the token will be checked. + + :return: + A tuple containing the jwt_header and the jwt_data if a valid JWT is + present in the request. If ``optional=True`` and no JWT is in the request, + ``None`` will be returned instead. An exception is raised if an invalid JWT + is in the request. + """ + if request.method in config.exempt_methods: + return None + + try: + jwt_data, jwt_header, jwt_location = _decode_jwt_from_request( + locations, + fresh, + refresh=refresh, + verify_type=verify_type, + skip_revocation_check=skip_revocation_check, + ) + + except NoAuthorizationError: + if not optional: + raise + g._jwt_extended_jwt = {} + g._jwt_extended_jwt_header = {} + g._jwt_extended_jwt_user = {"loaded_user": None} + g._jwt_extended_jwt_location = None + return None + + # Save these at the very end so that they are only saved in the request + # context if the token is valid and all callbacks succeed + g._jwt_extended_jwt_user = _load_user(jwt_header, jwt_data) + g._jwt_extended_jwt_header = jwt_header + g._jwt_extended_jwt = jwt_data + g._jwt_extended_jwt_location = jwt_location + + return jwt_header, jwt_data + + +def jwt_required( + optional: bool = False, + fresh: bool = False, + refresh: bool = False, + locations: Optional[LocationType] = None, + verify_type: bool = True, + skip_revocation_check: bool = False, +) -> Any: + """ + A decorator to protect a Flask endpoint with JSON Web Tokens. + + Any route decorated with this will require a valid JWT to be present in the + request (unless optional=True, in which case a missing JWT is also valid) before + the endpoint can be called. + + :param optional: + If ``True``, allow the decorated endpoint to be accessed if no JWT is present in + the request. Defaults to ``False``. + + :param fresh: + If ``True``, require a JWT marked with ``fresh`` to be able to access this + endpoint. Defaults to ``False``.
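A sketch of the decorator's common variants (endpoints illustrative; assumes the `app` wiring sketched earlier):

```python
from flask_jwt_extended import create_access_token, get_jwt_identity, jwt_required


@app.route("/protected")
@jwt_required()               # a valid access token is required
def protected():
    return {"user": get_jwt_identity()}


@app.route("/maybe-protected")
@jwt_required(optional=True)  # runs with or without a token
def maybe_protected():
    return {"user": get_jwt_identity()}  # None when no JWT was sent


@app.route("/refresh", methods=["POST"])
@jwt_required(refresh=True)   # only a refresh token is accepted here
def refresh():
    return {"access_token": create_access_token(identity=get_jwt_identity())}
```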
+ + :param refresh: + If ``True``, requires a refresh JWT to access this endpoint. If ``False``, + requires an access JWT to access this endpoint. Defaults to ``False``. + + :param locations: + A location or list of locations to look for the JWT in this request, for + example ``'headers'`` or ``['headers', 'cookies']``. Defaults to ``None`` + which indicates that JWTs will be looked for in the locations defined by the + ``JWT_TOKEN_LOCATION`` configuration option. + + :param verify_type: + If ``True``, the token type (access or refresh) will be checked according + to the ``refresh`` argument. If ``False``, type will not be checked and both + access and refresh tokens will be accepted. + + :param skip_revocation_check: + If ``True``, revocation status of the token will be *not* checked. If ``False``, + revocation status of the token will be checked. + """ + + def wrapper(fn): + @wraps(fn) + def decorator(*args, **kwargs): + verify_jwt_in_request( + optional, fresh, refresh, locations, verify_type, skip_revocation_check + ) + return current_app.ensure_sync(fn)(*args, **kwargs) + + return decorator + + return wrapper + + +def _load_user(jwt_header: dict, jwt_data: dict) -> Optional[dict]: + if not has_user_lookup(): + return None + + identity = jwt_data[config.identity_claim_key] + user = user_lookup(jwt_header, jwt_data) + if user is None: + error_msg = "user_lookup returned None for {}".format(identity) + raise UserLookupError(error_msg, jwt_header, jwt_data) + return {"loaded_user": user} + + +def _decode_jwt_from_headers() -> Tuple[str, None]: + header_name = config.header_name + header_type = config.header_type + + # Verify we have the auth header + auth_header = request.headers.get(header_name, "").strip().strip(",") + if not auth_header: + raise NoAuthorizationError(f"Missing {header_name} Header") + + # Make sure the header is in a valid format that we are expecting, ie + # <HeaderName>: <HeaderType> <JWT>. + # + # Also handle the fact that the header can be comma delimited, ie + # <HeaderName>: <HeaderType> <JWT>, <HeaderType> <JWT>, etc... + if header_type: + field_values = split(r",\s*", auth_header) + jwt_headers = [s for s in field_values if s.split()[0] == header_type] + if len(jwt_headers) != 1: + msg = ( + f"Missing '{header_type}' type in '{header_name}' header. " + f"Expected '{header_name}: {header_type} <JWT>'" + ) + raise NoAuthorizationError(msg) + + parts = jwt_headers[0].split() + if len(parts) != 2: + msg = ( + f"Bad {header_name} header. " + f"Expected '{header_name}: {header_type} <JWT>'" + ) + raise InvalidHeaderError(msg) + + encoded_token = parts[1] + else: + parts = auth_header.split() + if len(parts) != 1: + msg = f"Bad {header_name} header.
Expected '{header_name}: <JWT>'" + raise InvalidHeaderError(msg) + + encoded_token = parts[0] + + return encoded_token, None + + +def _decode_jwt_from_cookies(refresh: bool) -> Tuple[str, Optional[str]]: + if refresh: + cookie_key = config.refresh_cookie_name + csrf_header_key = config.refresh_csrf_header_name + csrf_field_key = config.refresh_csrf_field_name + else: + cookie_key = config.access_cookie_name + csrf_header_key = config.access_csrf_header_name + csrf_field_key = config.access_csrf_field_name + + encoded_token = request.cookies.get(cookie_key) + if not encoded_token: + raise NoAuthorizationError('Missing cookie "{}"'.format(cookie_key)) + + if config.cookie_csrf_protect and request.method in config.csrf_request_methods: + csrf_value = request.headers.get(csrf_header_key, None) + if not csrf_value and config.csrf_check_form: + csrf_value = request.form.get(csrf_field_key, None) + if not csrf_value: + raise CSRFError("Missing CSRF token") + else: + csrf_value = None + + return encoded_token, csrf_value + + +def _decode_jwt_from_query_string() -> Tuple[str, None]: + param_name = config.query_string_name + prefix = config.query_string_value_prefix + + value = request.args.get(param_name) + if not value: + raise NoAuthorizationError(f"Missing '{param_name}' query parameter") + + if not value.startswith(prefix): + raise InvalidQueryParamError( + f"Invalid value for query parameter '{param_name}'. " + f"Expected the value to start with '{prefix}'" + ) + + encoded_token = value[len(prefix) :] # noqa: E203 + return encoded_token, None + + +def _decode_jwt_from_json(refresh: bool) -> Tuple[str, None]: + if not request.is_json: + raise NoAuthorizationError("Invalid content-type. Must be application/json.") + + if refresh: + token_key = config.refresh_json_key + else: + token_key = config.json_key + + try: + encoded_token = request.json and request.json.get(token_key, None) + if not encoded_token: + raise BadRequest() + except BadRequest: + raise NoAuthorizationError( + 'Missing "{}" key in json data.'.format(token_key) + ) from None + + return encoded_token, None + + +def _decode_jwt_from_request( + locations: LocationType, + fresh: bool, + refresh: bool = False, + verify_type: bool = True, + skip_revocation_check: bool = False, +) -> Tuple[dict, dict, str]: + # Figure out what locations to look for the JWT in this request + if isinstance(locations, str): + locations = [locations] + + if not locations: + locations = config.token_location + + # Get the decode functions in the order specified by locations. + # Each entry in this list is a tuple (<location>, <get_encoded_token_function>) + get_encoded_token_functions = [] + for location in locations: + if location == "cookies": + get_encoded_token_functions.append( + (location, lambda: _decode_jwt_from_cookies(refresh)) + ) + elif location == "query_string": + get_encoded_token_functions.append( + (location, _decode_jwt_from_query_string) + ) + elif location == "headers": + get_encoded_token_functions.append((location, _decode_jwt_from_headers)) + elif location == "json": + get_encoded_token_functions.append( + (location, lambda: _decode_jwt_from_json(refresh)) + ) + else: + raise RuntimeError(f"'{location}' is not a valid location") + + # Try to find the token from one of these locations. It only needs to exist + # in one place to be valid (not every location).
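The search order used below comes from the ``locations`` argument or, failing that, the ``JWT_TOKEN_LOCATION`` config option; a configuration sketch (values illustrative):

```python
# Look in headers first, then cookies; "query_string" and "json" are also valid entries.
app.config["JWT_TOKEN_LOCATION"] = ["headers", "cookies"]


@app.route("/cookies-only")
@jwt_required(locations=["cookies"])  # per-route override of the global setting
def cookies_only():
    return {"ok": True}
```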
+ errors = [] + decoded_token = None + for location, get_encoded_token_function in get_encoded_token_functions: + try: + encoded_token, csrf_token = get_encoded_token_function() + decoded_token = decode_token(encoded_token, csrf_token) + jwt_location = location + jwt_header = get_unverified_jwt_headers(encoded_token) + break + except NoAuthorizationError as e: + errors.append(str(e)) + + # Do some work to make a helpful and human readable error message if no + # token was found in any of the expected locations. + if not decoded_token: + if len(locations) > 1: + err_msg = "Missing JWT in {start_locs} or {end_locs} ({details})".format( + start_locs=", ".join(locations[:-1]), + end_locs=locations[-1], + details="; ".join(errors), + ) + raise NoAuthorizationError(err_msg) + else: + raise NoAuthorizationError(errors[0]) + + # Additional verifications provided by this extension + if verify_type: + verify_token_type(decoded_token, refresh) + + if fresh: + _verify_token_is_fresh(jwt_header, decoded_token) + + if not skip_revocation_check: + verify_token_not_blocklisted(jwt_header, decoded_token) + + custom_verification_for_token(jwt_header, decoded_token) + + return decoded_token, jwt_header, jwt_location diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/__init__.py new file mode 100644 index 000000000..197f17bda --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/__init__.py @@ -0,0 +1,266 @@ +import argparse +from functools import wraps +import logging +import os +import sys +from flask import current_app, g +from alembic import __version__ as __alembic_version__ +from alembic.config import Config as AlembicConfig +from alembic import command +from alembic.util import CommandError + +alembic_version = tuple([int(v) for v in __alembic_version__.split('.')[0:3]]) +log = logging.getLogger(__name__) + + +class _MigrateConfig(object): + def __init__(self, migrate, db, **kwargs): + self.migrate = migrate + self.db = db + self.directory = migrate.directory + self.configure_args = kwargs + + @property + def metadata(self): + """ + Backwards compatibility, in old releases app.extensions['migrate'] + was set to db, and env.py accessed app.extensions['migrate'].metadata + """ + return self.db.metadata + + +class Config(AlembicConfig): + def __init__(self, *args, **kwargs): + self.template_directory = kwargs.pop('template_directory', None) + super().__init__(*args, **kwargs) + + def get_template_directory(self): + if self.template_directory: + return self.template_directory + package_dir = os.path.abspath(os.path.dirname(__file__)) + return os.path.join(package_dir, 'templates') + + +class Migrate(object): + def __init__(self, app=None, db=None, directory='migrations', command='db', + compare_type=True, render_as_batch=True, **kwargs): + self.configure_callbacks = [] + self.db = db + self.command = command + self.directory = str(directory) + self.alembic_ctx_kwargs = kwargs + self.alembic_ctx_kwargs['compare_type'] = compare_type + self.alembic_ctx_kwargs['render_as_batch'] = render_as_batch + if app is not None and db is not None: + self.init_app(app, db, directory) + + def init_app(self, app, db=None, directory=None, command=None, + compare_type=None, render_as_batch=None, **kwargs): + self.db = db or self.db + self.command = command or self.command + self.directory = str(directory or self.directory) + self.alembic_ctx_kwargs.update(kwargs) + if compare_type is not 
None: + self.alembic_ctx_kwargs['compare_type'] = compare_type + if render_as_batch is not None: + self.alembic_ctx_kwargs['render_as_batch'] = render_as_batch + if not hasattr(app, 'extensions'): + app.extensions = {} + app.extensions['migrate'] = _MigrateConfig( + self, self.db, **self.alembic_ctx_kwargs) + + from flask_migrate.cli import db as db_cli_group + app.cli.add_command(db_cli_group, name=self.command) + + def configure(self, f): + self.configure_callbacks.append(f) + return f + + def call_configure_callbacks(self, config): + for f in self.configure_callbacks: + config = f(config) + return config + + def get_config(self, directory=None, x_arg=None, opts=None): + if directory is None: + directory = self.directory + directory = str(directory) + config = Config(os.path.join(directory, 'alembic.ini')) + config.set_main_option('script_location', directory) + if config.cmd_opts is None: + config.cmd_opts = argparse.Namespace() + for opt in opts or []: + setattr(config.cmd_opts, opt, True) + if not hasattr(config.cmd_opts, 'x'): + setattr(config.cmd_opts, 'x', []) + for x in getattr(g, 'x_arg', []): + config.cmd_opts.x.append(x) + if x_arg is not None: + if isinstance(x_arg, list) or isinstance(x_arg, tuple): + for x in x_arg: + config.cmd_opts.x.append(x) + else: + config.cmd_opts.x.append(x_arg) + return self.call_configure_callbacks(config) + + +def catch_errors(f): + @wraps(f) + def wrapped(*args, **kwargs): + try: + f(*args, **kwargs) + except (CommandError, RuntimeError) as exc: + log.error('Error: ' + str(exc)) + sys.exit(1) + return wrapped + + +@catch_errors +def list_templates(): + """List available templates.""" + config = Config() + config.print_stdout("Available templates:\n") + for tempname in sorted(os.listdir(config.get_template_directory())): + with open( + os.path.join(config.get_template_directory(), tempname, "README") + ) as readme: + synopsis = next(readme).strip() + config.print_stdout("%s - %s", tempname, synopsis) + + +@catch_errors +def init(directory=None, multidb=False, template=None, package=False): + """Creates a new migration repository""" + if directory is None: + directory = current_app.extensions['migrate'].directory + template_directory = None + if template is not None and ('/' in template or '\\' in template): + template_directory, template = os.path.split(template) + config = Config(template_directory=template_directory) + config.set_main_option('script_location', directory) + config.config_file_name = os.path.join(directory, 'alembic.ini') + config = current_app.extensions['migrate'].\ + migrate.call_configure_callbacks(config) + if multidb and template is None: + template = 'flask-multidb' + elif template is None: + template = 'flask' + command.init(config, directory, template=template, package=package) + + +@catch_errors +def revision(directory=None, message=None, autogenerate=False, sql=False, + head='head', splice=False, branch_label=None, version_path=None, + rev_id=None): + """Create a new revision file.""" + opts = ['autogenerate'] if autogenerate else None + config = current_app.extensions['migrate'].migrate.get_config( + directory, opts=opts) + command.revision(config, message, autogenerate=autogenerate, sql=sql, + head=head, splice=splice, branch_label=branch_label, + version_path=version_path, rev_id=rev_id) + + +@catch_errors +def migrate(directory=None, message=None, sql=False, head='head', splice=False, + branch_label=None, version_path=None, rev_id=None, x_arg=None): + """Alias for 'revision --autogenerate'""" + config = 
current_app.extensions['migrate'].migrate.get_config( + directory, opts=['autogenerate'], x_arg=x_arg) + command.revision(config, message, autogenerate=True, sql=sql, + head=head, splice=splice, branch_label=branch_label, + version_path=version_path, rev_id=rev_id) + + +@catch_errors +def edit(directory=None, revision='current'): + """Edit current revision.""" + if alembic_version >= (0, 8, 0): + config = current_app.extensions['migrate'].migrate.get_config( + directory) + command.edit(config, revision) + else: + raise RuntimeError('Alembic 0.8.0 or greater is required') + + +@catch_errors +def merge(directory=None, revisions='', message=None, branch_label=None, + rev_id=None): + """Merge two revisions together. Creates a new migration file""" + config = current_app.extensions['migrate'].migrate.get_config(directory) + command.merge(config, revisions, message=message, + branch_label=branch_label, rev_id=rev_id) + + +@catch_errors +def upgrade(directory=None, revision='head', sql=False, tag=None, x_arg=None): + """Upgrade to a later version""" + config = current_app.extensions['migrate'].migrate.get_config(directory, + x_arg=x_arg) + command.upgrade(config, revision, sql=sql, tag=tag) + + +@catch_errors +def downgrade(directory=None, revision='-1', sql=False, tag=None, x_arg=None): + """Revert to a previous version""" + config = current_app.extensions['migrate'].migrate.get_config(directory, + x_arg=x_arg) + if sql and revision == '-1': + revision = 'head:-1' + command.downgrade(config, revision, sql=sql, tag=tag) + + +@catch_errors +def show(directory=None, revision='head'): + """Show the revision denoted by the given symbol.""" + config = current_app.extensions['migrate'].migrate.get_config(directory) + command.show(config, revision) + + +@catch_errors +def history(directory=None, rev_range=None, verbose=False, + indicate_current=False): + """List changeset scripts in chronological order.""" + config = current_app.extensions['migrate'].migrate.get_config(directory) + if alembic_version >= (0, 9, 9): + command.history(config, rev_range, verbose=verbose, + indicate_current=indicate_current) + else: + command.history(config, rev_range, verbose=verbose) + + +@catch_errors +def heads(directory=None, verbose=False, resolve_dependencies=False): + """Show current available heads in the script directory""" + config = current_app.extensions['migrate'].migrate.get_config(directory) + command.heads(config, verbose=verbose, + resolve_dependencies=resolve_dependencies) + + +@catch_errors +def branches(directory=None, verbose=False): + """Show current branch points""" + config = current_app.extensions['migrate'].migrate.get_config(directory) + command.branches(config, verbose=verbose) + + +@catch_errors +def current(directory=None, verbose=False): + """Display the current revision for each database.""" + config = current_app.extensions['migrate'].migrate.get_config(directory) + command.current(config, verbose=verbose) + + +@catch_errors +def stamp(directory=None, revision='head', sql=False, tag=None, purge=False): + """'stamp' the revision table with the given revision; don't run any + migrations""" + config = current_app.extensions['migrate'].migrate.get_config(directory) + command.stamp(config, revision, sql=sql, tag=tag, purge=purge) + + +@catch_errors +def check(directory=None): + """Check if there are any new operations to migrate""" + config = current_app.extensions['migrate'].migrate.get_config(directory) + command.check(config) diff --git 
a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/cli.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/cli.py new file mode 100644 index 000000000..4fabece25 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/cli.py @@ -0,0 +1,261 @@ +import click +from flask import g +from flask.cli import with_appcontext +from flask_migrate import list_templates as _list_templates +from flask_migrate import init as _init +from flask_migrate import revision as _revision +from flask_migrate import migrate as _migrate +from flask_migrate import edit as _edit +from flask_migrate import merge as _merge +from flask_migrate import upgrade as _upgrade +from flask_migrate import downgrade as _downgrade +from flask_migrate import show as _show +from flask_migrate import history as _history +from flask_migrate import heads as _heads +from flask_migrate import branches as _branches +from flask_migrate import current as _current +from flask_migrate import stamp as _stamp +from flask_migrate import check as _check + + +@click.group() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('-x', '--x-arg', multiple=True, + help='Additional arguments consumed by custom env.py scripts') +@with_appcontext +def db(directory, x_arg): + """Perform database migrations.""" + g.directory = directory + g.x_arg = x_arg # these will be picked up by Migrate.get_config() + + +@db.command() +@with_appcontext +def list_templates(): + """List available templates.""" + _list_templates() + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('--multidb', is_flag=True, + help=('Support multiple databases')) +@click.option('-t', '--template', default=None, + help=('Repository template to use (default is "flask")')) +@click.option('--package', is_flag=True, + help=('Write empty __init__.py files to the environment and ' + 'version locations')) +@with_appcontext +def init(directory, multidb, template, package): + """Creates a new migration repository.""" + _init(directory or g.directory, multidb, template, package) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('-m', '--message', default=None, help='Revision message') +@click.option('--autogenerate', is_flag=True, + help=('Populate revision script with candidate migration ' + 'operations, based on comparison of database to model')) +@click.option('--sql', is_flag=True, + help=('Don\'t emit SQL to database - dump to standard output ' + 'instead')) +@click.option('--head', default='head', + help=('Specify head revision or <branchname>@head to base new ' + 'revision on')) +@click.option('--splice', is_flag=True, + help=('Allow a non-head revision as the "head" to splice onto')) +@click.option('--branch-label', default=None, + help=('Specify a branch label to apply to the new revision')) +@click.option('--version-path', default=None, + help=('Specify specific path from config for version file')) +@click.option('--rev-id', default=None, + help=('Specify a hardcoded revision id instead of generating ' 'one')) +@with_appcontext +def revision(directory, message, autogenerate, sql, head, splice, branch_label, + version_path, rev_id): + """Create a new revision file.""" + _revision(directory or g.directory, message, autogenerate, sql, head, + splice, branch_label,
version_path, rev_id) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('-m', '--message', default=None, help='Revision message') +@click.option('--sql', is_flag=True, + help=('Don\'t emit SQL to database - dump to standard output ' + 'instead')) +@click.option('--head', default='head', + help=('Specify head revision or <branchname>@head to base new ' + 'revision on')) +@click.option('--splice', is_flag=True, + help=('Allow a non-head revision as the "head" to splice onto')) +@click.option('--branch-label', default=None, + help=('Specify a branch label to apply to the new revision')) +@click.option('--version-path', default=None, + help=('Specify specific path from config for version file')) +@click.option('--rev-id', default=None, + help=('Specify a hardcoded revision id instead of generating ' + 'one')) +@click.option('-x', '--x-arg', multiple=True, + help='Additional arguments consumed by custom env.py scripts') +@with_appcontext +def migrate(directory, message, sql, head, splice, branch_label, version_path, + rev_id, x_arg): + """Autogenerate a new revision file (Alias for + 'revision --autogenerate')""" + _migrate(directory or g.directory, message, sql, head, splice, + branch_label, version_path, rev_id, x_arg or g.x_arg) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.argument('revision', default='head') +@with_appcontext +def edit(directory, revision): + """Edit a revision file""" + _edit(directory or g.directory, revision) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('-m', '--message', default=None, help='Merge revision message') +@click.option('--branch-label', default=None, + help=('Specify a branch label to apply to the new revision')) +@click.option('--rev-id', default=None, + help=('Specify a hardcoded revision id instead of generating ' + 'one')) +@click.argument('revisions', nargs=-1) +@with_appcontext +def merge(directory, message, branch_label, rev_id, revisions): + """Merge two revisions together, creating a new revision file""" + _merge(directory or g.directory, revisions, message, branch_label, rev_id) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('--sql', is_flag=True, + help=('Don\'t emit SQL to database - dump to standard output ' + 'instead')) +@click.option('--tag', default=None, + help=('Arbitrary "tag" name - can be used by custom env.py ' + 'scripts')) +@click.option('-x', '--x-arg', multiple=True, + help='Additional arguments consumed by custom env.py scripts') +@click.argument('revision', default='head') +@with_appcontext +def upgrade(directory, sql, tag, x_arg, revision): + """Upgrade to a later version""" + _upgrade(directory or g.directory, revision, sql, tag, x_arg or g.x_arg) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('--sql', is_flag=True, + help=('Don\'t emit SQL to database - dump to standard output ' + 'instead')) +@click.option('--tag', default=None, + help=('Arbitrary "tag" name - can be used by custom env.py ' + 'scripts')) +@click.option('-x', '--x-arg', multiple=True, + help='Additional arguments consumed by custom env.py scripts') +@click.argument('revision',
default='-1') +@with_appcontext +def downgrade(directory, sql, tag, x_arg, revision): + """Revert to a previous version""" + _downgrade(directory or g.directory, revision, sql, tag, x_arg or g.x_arg) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.argument('revision', default='head') +@with_appcontext +def show(directory, revision): + """Show the revision denoted by the given symbol.""" + _show(directory or g.directory, revision) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('-r', '--rev-range', default=None, + help='Specify a revision range; format is [start]:[end]') +@click.option('-v', '--verbose', is_flag=True, help='Use more verbose output') +@click.option('-i', '--indicate-current', is_flag=True, + help=('Indicate current version (Alembic 0.9.9 or greater is ' + 'required)')) +@with_appcontext +def history(directory, rev_range, verbose, indicate_current): + """List changeset scripts in chronological order.""" + _history(directory or g.directory, rev_range, verbose, indicate_current) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('-v', '--verbose', is_flag=True, help='Use more verbose output') +@click.option('--resolve-dependencies', is_flag=True, + help='Treat dependency versions as down revisions') +@with_appcontext +def heads(directory, verbose, resolve_dependencies): + """Show current available heads in the script directory""" + _heads(directory or g.directory, verbose, resolve_dependencies) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('-v', '--verbose', is_flag=True, help='Use more verbose output') +@with_appcontext +def branches(directory, verbose): + """Show current branch points""" + _branches(directory or g.directory, verbose) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('-v', '--verbose', is_flag=True, help='Use more verbose output') +@with_appcontext +def current(directory, verbose): + """Display the current revision for each database.""" + _current(directory or g.directory, verbose) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@click.option('--sql', is_flag=True, + help=('Don\'t emit SQL to database - dump to standard output ' + 'instead')) +@click.option('--tag', default=None, + help=('Arbitrary "tag" name - can be used by custom env.py ' + 'scripts')) +@click.option('--purge', is_flag=True, + help=('Delete the version in the alembic_version table before ' + 'stamping')) +@click.argument('revision', default='head') +@with_appcontext +def stamp(directory, sql, tag, revision, purge): + """'stamp' the revision table with the given revision; don't run any + migrations""" + _stamp(directory or g.directory, revision, sql, tag, purge) + + +@db.command() +@click.option('-d', '--directory', default=None, + help=('Migration script directory (default is "migrations")')) +@with_appcontext +def check(directory): + """Check if there are any new operations to migrate""" + _check(directory or g.directory) diff --git 
a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/README b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/README new file mode 100644 index 000000000..02cce84ee --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/README @@ -0,0 +1 @@ +Multi-database configuration for aioflask. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/alembic.ini.mako b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/alembic.ini.mako new file mode 100644 index 000000000..ec9d45c26 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/alembic.ini.mako @@ -0,0 +1,50 @@ +# A generic, single database configuration. + +[alembic] +# template used to generate migration files +# file_template = %%(rev)s_%%(slug)s + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + + +# Logging configuration +[loggers] +keys = root,sqlalchemy,alembic,flask_migrate + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARN +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARN +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[logger_flask_migrate] +level = INFO +handlers = +qualname = flask_migrate + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/env.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/env.py new file mode 100644 index 000000000..f14bd6c09 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/env.py @@ -0,0 +1,202 @@ +import asyncio +import logging +from logging.config import fileConfig + +from sqlalchemy import MetaData +from flask import current_app + +from alembic import context + +USE_TWOPHASE = False + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. 
+fileConfig(config.config_file_name) +logger = logging.getLogger('alembic.env') + + +def get_engine(bind_key=None): + try: + # this works with Flask-SQLAlchemy<3 and Alchemical + return current_app.extensions['migrate'].db.get_engine(bind=bind_key) + except (TypeError, AttributeError): + # this works with Flask-SQLAlchemy>=3 + return current_app.extensions['migrate'].db.engines.get(bind_key) + + +def get_engine_url(bind_key=None): + try: + return get_engine(bind_key).url.render_as_string( + hide_password=False).replace('%', '%%') + except AttributeError: + return str(get_engine(bind_key).url).replace('%', '%%') + + +# add your model's MetaData object here +# for 'autogenerate' support +# from myapp import mymodel +# target_metadata = mymodel.Base.metadata +config.set_main_option('sqlalchemy.url', get_engine_url()) +bind_names = [] +if current_app.config.get('SQLALCHEMY_BINDS') is not None: + bind_names = list(current_app.config['SQLALCHEMY_BINDS'].keys()) +else: + get_bind_names = getattr(current_app.extensions['migrate'].db, + 'bind_names', None) + if get_bind_names: + bind_names = get_bind_names() +for bind in bind_names: + context.config.set_section_option( + bind, "sqlalchemy.url", get_engine_url(bind_key=bind)) +target_db = current_app.extensions['migrate'].db + + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def get_metadata(bind): + """Return the metadata for a bind.""" + if bind == '': + bind = None + if hasattr(target_db, 'metadatas'): + return target_db.metadatas[bind] + + # legacy, less flexible implementation + m = MetaData() + for t in target_db.metadata.tables.values(): + if t.info.get('bind_key') == bind: + t.tometadata(m) + return m + + +def run_migrations_offline(): + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. + + """ + # for the --sql use case, run migrations for each URL into + # individual files. 
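This offline branch is what Flask-Migrate's ``--sql`` mode exercises; with this multi-database template, each bind's migration SQL is written to a file named after the bind. A sketch of triggering it programmatically (assumes an app with the Migrate extension already initialized):

```python
from flask_migrate import upgrade

with app.app_context():
    upgrade(sql=True)  # emit migration SQL instead of executing it against the database
```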
+ + engines = { + '': { + 'url': context.config.get_main_option('sqlalchemy.url') + } + } + for name in bind_names: + engines[name] = rec = {} + rec['url'] = context.config.get_section_option(name, "sqlalchemy.url") + + for name, rec in engines.items(): + logger.info("Migrating database %s" % (name or '')) + file_ = "%s.sql" % name + logger.info("Writing output to %s" % file_) + with open(file_, 'w') as buffer: + context.configure( + url=rec['url'], + output_buffer=buffer, + target_metadata=get_metadata(name), + literal_binds=True, + ) + with context.begin_transaction(): + context.run_migrations(engine_name=name) + + +def do_run_migrations(_, engines): + # this callback is used to prevent an auto-migration from being generated + # when there are no changes to the schema + # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html + def process_revision_directives(context, revision, directives): + if getattr(config.cmd_opts, 'autogenerate', False): + script = directives[0] + if len(script.upgrade_ops_list) >= len(bind_names) + 1: + empty = True + for upgrade_ops in script.upgrade_ops_list: + if not upgrade_ops.is_empty(): + empty = False + if empty: + directives[:] = [] + logger.info('No changes in schema detected.') + + conf_args = current_app.extensions['migrate'].configure_args + if conf_args.get("process_revision_directives") is None: + conf_args["process_revision_directives"] = process_revision_directives + + for name, rec in engines.items(): + rec['sync_connection'] = conn = rec['connection']._sync_connection() + if USE_TWOPHASE: + rec['transaction'] = conn.begin_twophase() + else: + rec['transaction'] = conn.begin() + + try: + for name, rec in engines.items(): + logger.info("Migrating database %s" % (name or '')) + context.configure( + connection=rec['sync_connection'], + upgrade_token="%s_upgrades" % name, + downgrade_token="%s_downgrades" % name, + target_metadata=get_metadata(name), + **conf_args + ) + context.run_migrations(engine_name=name) + + if USE_TWOPHASE: + for rec in engines.values(): + rec['transaction'].prepare() + + for rec in engines.values(): + rec['transaction'].commit() + except: # noqa: E722 + for rec in engines.values(): + rec['transaction'].rollback() + raise + finally: + for rec in engines.values(): + rec['sync_connection'].close() + + +async def run_migrations_online(): + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. + + """ + + # for the direct-to-DB use case, start a transaction on all + # engines, then run all migrations, then commit all transactions. 
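The online path below is what an ordinary upgrade runs. For reference, the whole workflow can be driven programmatically instead of through the ``flask db`` CLI; a sketch (assumes an `app` and a Flask-SQLAlchemy `db` instance, both illustrative here):

```python
from flask_migrate import Migrate, init, migrate, upgrade

migrate_ext = Migrate(app, db)  # registers the "db" command group on the app

with app.app_context():
    init()                             # create the migrations/ directory
    migrate(message="initial tables")  # autogenerate a revision from the models
    upgrade()                          # apply pending revisions to the database
```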
+ engines = { + '': {'engine': get_engine()} + } + for name in bind_names: + engines[name] = rec = {} + rec['engine'] = get_engine(bind_key=name) + + for name, rec in engines.items(): + engine = rec['engine'] + rec['connection'] = await engine.connect().start() + + await engines['']['connection'].run_sync(do_run_migrations, engines) + + for rec in engines.values(): + await rec['connection'].close() + + +if context.is_offline_mode(): + run_migrations_offline() +else: + asyncio.get_event_loop().run_until_complete(run_migrations_online()) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/script.py.mako b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/script.py.mako new file mode 100644 index 000000000..3beabc463 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask-multidb/script.py.mako @@ -0,0 +1,53 @@ +<%! +import re + +%>"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +# revision identifiers, used by Alembic. +revision = ${repr(up_revision)} +down_revision = ${repr(down_revision)} +branch_labels = ${repr(branch_labels)} +depends_on = ${repr(depends_on)} + + +def upgrade(engine_name): + globals()["upgrade_%s" % engine_name]() + + +def downgrade(engine_name): + globals()["downgrade_%s" % engine_name]() + +<% + from flask import current_app + bind_names = [] + if current_app.config.get('SQLALCHEMY_BINDS') is not None: + bind_names = list(current_app.config['SQLALCHEMY_BINDS'].keys()) + else: + get_bind_names = getattr(current_app.extensions['migrate'].db, 'bind_names', None) + if get_bind_names: + bind_names = get_bind_names() + db_names = [''] + bind_names +%> + +## generate an "upgrade_<xyz>() / downgrade_<xyz>()" function +## for each database name in the ini file. + +% for db_name in db_names: + +def upgrade_${db_name}(): + ${context.get("%s_upgrades" % db_name, "pass")} + + +def downgrade_${db_name}(): + ${context.get("%s_downgrades" % db_name, "pass")} + +% endfor diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/README b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/README new file mode 100644 index 000000000..6ed8020e0 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/README @@ -0,0 +1 @@ +Single-database configuration for aioflask. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/alembic.ini.mako b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/alembic.ini.mako new file mode 100644 index 000000000..ec9d45c26 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/alembic.ini.mako @@ -0,0 +1,50 @@ +# A generic, single database configuration.
+ +[alembic] +# template used to generate migration files +# file_template = %%(rev)s_%%(slug)s + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + + +# Logging configuration +[loggers] +keys = root,sqlalchemy,alembic,flask_migrate + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARN +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARN +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[logger_flask_migrate] +level = INFO +handlers = +qualname = flask_migrate + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/env.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/env.py new file mode 100644 index 000000000..3a1ece5ac --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/env.py @@ -0,0 +1,118 @@ +import asyncio +import logging +from logging.config import fileConfig + +from flask import current_app + +from alembic import context + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +fileConfig(config.config_file_name) +logger = logging.getLogger('alembic.env') + + +def get_engine(): + try: + # this works with Flask-SQLAlchemy<3 and Alchemical + return current_app.extensions['migrate'].db.get_engine() + except (TypeError, AttributeError): + # this works with Flask-SQLAlchemy>=3 + return current_app.extensions['migrate'].db.engine + + +def get_engine_url(): + try: + return get_engine().url.render_as_string(hide_password=False).replace( + '%', '%%') + except AttributeError: + return str(get_engine().url).replace('%', '%%') + + +# add your model's MetaData object here +# for 'autogenerate' support +# from myapp import mymodel +# target_metadata = mymodel.Base.metadata +config.set_main_option('sqlalchemy.url', get_engine_url()) +target_db = current_app.extensions['migrate'].db + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def get_metadata(): + if hasattr(target_db, 'metadatas'): + return target_db.metadatas[None] + return target_db.metadata + + +def run_migrations_offline(): + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. 
+ + """ + url = config.get_main_option("sqlalchemy.url") + context.configure( + url=url, target_metadata=get_metadata(), literal_binds=True + ) + + with context.begin_transaction(): + context.run_migrations() + + +def do_run_migrations(connection): + # this callback is used to prevent an auto-migration from being generated + # when there are no changes to the schema + # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html + def process_revision_directives(context, revision, directives): + if getattr(config.cmd_opts, 'autogenerate', False): + script = directives[0] + if script.upgrade_ops.is_empty(): + directives[:] = [] + logger.info('No changes in schema detected.') + + conf_args = current_app.extensions['migrate'].configure_args + if conf_args.get("process_revision_directives") is None: + conf_args["process_revision_directives"] = process_revision_directives + + context.configure( + connection=connection, + target_metadata=get_metadata(), + **conf_args + ) + + with context.begin_transaction(): + context.run_migrations() + + +async def run_migrations_online(): + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. + + """ + + connectable = get_engine() + + async with connectable.connect() as connection: + await connection.run_sync(do_run_migrations) + + +if context.is_offline_mode(): + run_migrations_offline() +else: + asyncio.get_event_loop().run_until_complete(run_migrations_online()) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/script.py.mako b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/script.py.mako new file mode 100644 index 000000000..2c0156303 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/aioflask/script.py.mako @@ -0,0 +1,24 @@ +"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +# revision identifiers, used by Alembic. +revision = ${repr(up_revision)} +down_revision = ${repr(down_revision)} +branch_labels = ${repr(branch_labels)} +depends_on = ${repr(depends_on)} + + +def upgrade(): + ${upgrades if upgrades else "pass"} + + +def downgrade(): + ${downgrades if downgrades else "pass"} diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/README b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/README new file mode 100644 index 000000000..eaae2511a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/README @@ -0,0 +1 @@ +Multi-database configuration for Flask. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/alembic.ini.mako b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/alembic.ini.mako new file mode 100644 index 000000000..ec9d45c26 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/alembic.ini.mako @@ -0,0 +1,50 @@ +# A generic, single database configuration. 
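For context on the aioflask `env.py` above: Alembic's migration context is synchronous, so the template connects through an async engine and hands the sync `do_run_migrations` callback to `AsyncConnection.run_sync()`. A stripped-down illustration of that bridge, with a placeholder database URL (it assumes the `aiosqlite` driver is installed):

```python
# Minimal illustration of the run_sync() bridge used by the aioflask env.py:
# the async connection passes a plain synchronous Connection to the callback,
# which is where context.configure()/context.run_migrations() would run.
import asyncio

from sqlalchemy.ext.asyncio import create_async_engine


def do_run_migrations(connection):
    # env.py would configure the Alembic context here; this just shows that a
    # regular sync Connection arrives.
    print("would migrate:", connection.engine.url)


async def run_migrations_online():
    engine = create_async_engine("sqlite+aiosqlite:///example.db")  # placeholder URL
    async with engine.connect() as connection:
        await connection.run_sync(do_run_migrations)
    await engine.dispose()


asyncio.run(run_migrations_online())
```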
+ +[alembic] +# template used to generate migration files +# file_template = %%(rev)s_%%(slug)s + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + + +# Logging configuration +[loggers] +keys = root,sqlalchemy,alembic,flask_migrate + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARN +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARN +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[logger_flask_migrate] +level = INFO +handlers = +qualname = flask_migrate + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/env.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/env.py new file mode 100644 index 000000000..31b8e7248 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/env.py @@ -0,0 +1,191 @@ +import logging +from logging.config import fileConfig + +from sqlalchemy import MetaData +from flask import current_app + +from alembic import context + +USE_TWOPHASE = False + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +fileConfig(config.config_file_name) +logger = logging.getLogger('alembic.env') + + +def get_engine(bind_key=None): + try: + # this works with Flask-SQLAlchemy<3 and Alchemical + return current_app.extensions['migrate'].db.get_engine(bind=bind_key) + except (TypeError, AttributeError): + # this works with Flask-SQLAlchemy>=3 + return current_app.extensions['migrate'].db.engines.get(bind_key) + + +def get_engine_url(bind_key=None): + try: + return get_engine(bind_key).url.render_as_string( + hide_password=False).replace('%', '%%') + except AttributeError: + return str(get_engine(bind_key).url).replace('%', '%%') + + +# add your model's MetaData object here +# for 'autogenerate' support +# from myapp import mymodel +# target_metadata = mymodel.Base.metadata +config.set_main_option('sqlalchemy.url', get_engine_url()) +bind_names = [] +if current_app.config.get('SQLALCHEMY_BINDS') is not None: + bind_names = list(current_app.config['SQLALCHEMY_BINDS'].keys()) +else: + get_bind_names = getattr(current_app.extensions['migrate'].db, + 'bind_names', None) + if get_bind_names: + bind_names = get_bind_names() +for bind in bind_names: + context.config.set_section_option( + bind, "sqlalchemy.url", get_engine_url(bind_key=bind)) +target_db = current_app.extensions['migrate'].db + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def get_metadata(bind): + """Return the metadata for a bind.""" + if bind == '': + bind = None + if hasattr(target_db, 'metadatas'): + return target_db.metadatas[bind] + + # legacy, less flexible implementation + m = MetaData() + for t in target_db.metadata.tables.values(): + if t.info.get('bind_key') == bind: + t.tometadata(m) + return m + + +def run_migrations_offline(): + """Run migrations in 'offline' mode. 
+ + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. + + """ + # for the --sql use case, run migrations for each URL into + # individual files. + + engines = { + '': { + 'url': context.config.get_main_option('sqlalchemy.url') + } + } + for name in bind_names: + engines[name] = rec = {} + rec['url'] = context.config.get_section_option(name, "sqlalchemy.url") + + for name, rec in engines.items(): + logger.info("Migrating database %s" % (name or '')) + file_ = "%s.sql" % name + logger.info("Writing output to %s" % file_) + with open(file_, 'w') as buffer: + context.configure( + url=rec['url'], + output_buffer=buffer, + target_metadata=get_metadata(name), + literal_binds=True, + ) + with context.begin_transaction(): + context.run_migrations(engine_name=name) + + +def run_migrations_online(): + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. + + """ + + # this callback is used to prevent an auto-migration from being generated + # when there are no changes to the schema + # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html + def process_revision_directives(context, revision, directives): + if getattr(config.cmd_opts, 'autogenerate', False): + script = directives[0] + if len(script.upgrade_ops_list) >= len(bind_names) + 1: + empty = True + for upgrade_ops in script.upgrade_ops_list: + if not upgrade_ops.is_empty(): + empty = False + if empty: + directives[:] = [] + logger.info('No changes in schema detected.') + + conf_args = current_app.extensions['migrate'].configure_args + if conf_args.get("process_revision_directives") is None: + conf_args["process_revision_directives"] = process_revision_directives + + # for the direct-to-DB use case, start a transaction on all + # engines, then run all migrations, then commit all transactions. 
+ engines = { + '': {'engine': get_engine()} + } + for name in bind_names: + engines[name] = rec = {} + rec['engine'] = get_engine(bind_key=name) + + for name, rec in engines.items(): + engine = rec['engine'] + rec['connection'] = conn = engine.connect() + + if USE_TWOPHASE: + rec['transaction'] = conn.begin_twophase() + else: + rec['transaction'] = conn.begin() + + try: + for name, rec in engines.items(): + logger.info("Migrating database %s" % (name or '')) + context.configure( + connection=rec['connection'], + upgrade_token="%s_upgrades" % name, + downgrade_token="%s_downgrades" % name, + target_metadata=get_metadata(name), + **conf_args + ) + context.run_migrations(engine_name=name) + + if USE_TWOPHASE: + for rec in engines.values(): + rec['transaction'].prepare() + + for rec in engines.values(): + rec['transaction'].commit() + except: # noqa: E722 + for rec in engines.values(): + rec['transaction'].rollback() + raise + finally: + for rec in engines.values(): + rec['connection'].close() + + +if context.is_offline_mode(): + run_migrations_offline() +else: + run_migrations_online() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/script.py.mako b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/script.py.mako new file mode 100644 index 000000000..3beabc463 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask-multidb/script.py.mako @@ -0,0 +1,53 @@ +<%! +import re + +%>"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +# revision identifiers, used by Alembic. +revision = ${repr(up_revision)} +down_revision = ${repr(down_revision)} +branch_labels = ${repr(branch_labels)} +depends_on = ${repr(depends_on)} + + +def upgrade(engine_name): + globals()["upgrade_%s" % engine_name]() + + +def downgrade(engine_name): + globals()["downgrade_%s" % engine_name]() + +<% + from flask import current_app + bind_names = [] + if current_app.config.get('SQLALCHEMY_BINDS') is not None: + bind_names = list(current_app.config['SQLALCHEMY_BINDS'].keys()) + else: + get_bind_names = getattr(current_app.extensions['migrate'].db, 'bind_names', None) + if get_bind_names: + bind_names = get_bind_names() + db_names = [''] + bind_names +%> + +## generate an "upgrade_() / downgrade_()" function +## for each database name in the ini file. + +% for db_name in db_names: + +def upgrade_${db_name}(): + ${context.get("%s_upgrades" % db_name, "pass")} + + +def downgrade_${db_name}(): + ${context.get("%s_downgrades" % db_name, "pass")} + +% endfor diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/README b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/README new file mode 100644 index 000000000..0e0484415 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/README @@ -0,0 +1 @@ +Single-database configuration for Flask. 
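Before the next file, a note on the flask-multidb `env.py` online path above: every bind gets its own connection and transaction, all migrations run first, and only then does every transaction commit (with `USE_TWOPHASE` optionally upgrading this to a two-phase commit). The same commit-all-or-rollback-all pattern in plain SQLAlchemy, with placeholder SQLite URLs and a stand-in statement where Alembic would run migrations:

```python
# Sketch of the per-engine transaction pattern from the multidb env.py above,
# reduced to plain SQLAlchemy (URLs and the 'users' bind are placeholders).
# A failure against any one database rolls back the others too.
from sqlalchemy import create_engine, text

engines = {
    "": create_engine("sqlite:///main.db"),
    "users": create_engine("sqlite:///users.db"),  # hypothetical extra bind
}

connections = {name: eng.connect() for name, eng in engines.items()}
transactions = {name: conn.begin() for name, conn in connections.items()}

try:
    for name, conn in connections.items():
        # env.py would run the Alembic migrations here; a stand-in statement:
        conn.execute(text("SELECT 1"))
    for tx in transactions.values():
        tx.commit()
except Exception:
    for tx in transactions.values():
        tx.rollback()
    raise
finally:
    for conn in connections.values():
        conn.close()
```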
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/alembic.ini.mako b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/alembic.ini.mako new file mode 100644 index 000000000..ec9d45c26 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/alembic.ini.mako @@ -0,0 +1,50 @@ +# A generic, single database configuration. + +[alembic] +# template used to generate migration files +# file_template = %%(rev)s_%%(slug)s + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + + +# Logging configuration +[loggers] +keys = root,sqlalchemy,alembic,flask_migrate + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARN +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARN +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[logger_flask_migrate] +level = INFO +handlers = +qualname = flask_migrate + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/env.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/env.py new file mode 100644 index 000000000..4c9709271 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/env.py @@ -0,0 +1,113 @@ +import logging +from logging.config import fileConfig + +from flask import current_app + +from alembic import context + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +fileConfig(config.config_file_name) +logger = logging.getLogger('alembic.env') + + +def get_engine(): + try: + # this works with Flask-SQLAlchemy<3 and Alchemical + return current_app.extensions['migrate'].db.get_engine() + except (TypeError, AttributeError): + # this works with Flask-SQLAlchemy>=3 + return current_app.extensions['migrate'].db.engine + + +def get_engine_url(): + try: + return get_engine().url.render_as_string(hide_password=False).replace( + '%', '%%') + except AttributeError: + return str(get_engine().url).replace('%', '%%') + + +# add your model's MetaData object here +# for 'autogenerate' support +# from myapp import mymodel +# target_metadata = mymodel.Base.metadata +config.set_main_option('sqlalchemy.url', get_engine_url()) +target_db = current_app.extensions['migrate'].db + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def get_metadata(): + if hasattr(target_db, 'metadatas'): + return target_db.metadatas[None] + return target_db.metadata + + +def run_migrations_offline(): + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. 
+ + """ + url = config.get_main_option("sqlalchemy.url") + context.configure( + url=url, target_metadata=get_metadata(), literal_binds=True + ) + + with context.begin_transaction(): + context.run_migrations() + + +def run_migrations_online(): + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. + + """ + + # this callback is used to prevent an auto-migration from being generated + # when there are no changes to the schema + # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html + def process_revision_directives(context, revision, directives): + if getattr(config.cmd_opts, 'autogenerate', False): + script = directives[0] + if script.upgrade_ops.is_empty(): + directives[:] = [] + logger.info('No changes in schema detected.') + + conf_args = current_app.extensions['migrate'].configure_args + if conf_args.get("process_revision_directives") is None: + conf_args["process_revision_directives"] = process_revision_directives + + connectable = get_engine() + + with connectable.connect() as connection: + context.configure( + connection=connection, + target_metadata=get_metadata(), + **conf_args + ) + + with context.begin_transaction(): + context.run_migrations() + + +if context.is_offline_mode(): + run_migrations_offline() +else: + run_migrations_online() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/script.py.mako b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/script.py.mako new file mode 100644 index 000000000..2c0156303 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_migrate/templates/flask/script.py.mako @@ -0,0 +1,24 @@ +"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +# revision identifiers, used by Alembic. +revision = ${repr(up_revision)} +down_revision = ${repr(down_revision)} +branch_labels = ${repr(branch_labels)} +depends_on = ${repr(depends_on)} + + +def upgrade(): + ${upgrades if upgrades else "pass"} + + +def downgrade(): + ${downgrades if downgrades else "pass"} diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/LICENSE.rst b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/LICENSE.rst new file mode 100644 index 000000000..9d227a0cc --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/LICENSE.rst @@ -0,0 +1,28 @@ +Copyright 2010 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. 
+ +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/METADATA new file mode 100644 index 000000000..92f239cd9 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/METADATA @@ -0,0 +1,109 @@ +Metadata-Version: 2.1 +Name: Flask-SQLAlchemy +Version: 3.1.1 +Summary: Add SQLAlchemy support to your Flask application. +Maintainer-email: Pallets +Requires-Python: >=3.8 +Description-Content-Type: text/x-rst +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Web Environment +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content +Requires-Dist: flask>=2.2.5 +Requires-Dist: sqlalchemy>=2.0.16 +Project-URL: Changes, https://flask-sqlalchemy.palletsprojects.com/changes/ +Project-URL: Chat, https://discord.gg/pallets +Project-URL: Documentation, https://flask-sqlalchemy.palletsprojects.com +Project-URL: Donate, https://palletsprojects.com/donate +Project-URL: Issue Tracker, https://github.com/pallets-eco/flask-sqlalchemy/issues/ +Project-URL: Source Code, https://github.com/pallets-eco/flask-sqlalchemy/ + +Flask-SQLAlchemy +================ + +Flask-SQLAlchemy is an extension for `Flask`_ that adds support for +`SQLAlchemy`_ to your application. It aims to simplify using SQLAlchemy +with Flask by providing useful defaults and extra helpers that make it +easier to accomplish common tasks. + +.. _Flask: https://palletsprojects.com/p/flask/ +.. _SQLAlchemy: https://www.sqlalchemy.org + + +Installing +---------- + +Install and update using `pip`_: + +.. code-block:: text + + $ pip install -U Flask-SQLAlchemy + +.. _pip: https://pip.pypa.io/en/stable/getting-started/ + + +A Simple Example +---------------- + +.. 
code-block:: python + + from flask import Flask + from flask_sqlalchemy import SQLAlchemy + from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column + + app = Flask(__name__) + app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///example.sqlite" + + class Base(DeclarativeBase): + pass + + db = SQLAlchemy(app, model_class=Base) + + class User(db.Model): + id: Mapped[int] = mapped_column(db.Integer, primary_key=True) + username: Mapped[str] = mapped_column(db.String, unique=True, nullable=False) + + with app.app_context(): + db.create_all() + + db.session.add(User(username="example")) + db.session.commit() + + users = db.session.execute(db.select(User)).scalars() + + +Contributing +------------ + +For guidance on setting up a development environment and how to make a +contribution to Flask-SQLAlchemy, see the `contributing guidelines`_. + +.. _contributing guidelines: https://github.com/pallets-eco/flask-sqlalchemy/blob/main/CONTRIBUTING.rst + + +Donate +------ + +The Pallets organization develops and supports Flask-SQLAlchemy and +other popular packages. In order to grow the community of contributors +and users, and allow the maintainers to devote more time to the +projects, `please donate today`_. + +.. _please donate today: https://palletsprojects.com/donate + + +Links +----- + +- Documentation: https://flask-sqlalchemy.palletsprojects.com/ +- Changes: https://flask-sqlalchemy.palletsprojects.com/changes/ +- PyPI Releases: https://pypi.org/project/Flask-SQLAlchemy/ +- Source Code: https://github.com/pallets-eco/flask-sqlalchemy/ +- Issue Tracker: https://github.com/pallets-eco/flask-sqlalchemy/issues/ +- Website: https://palletsprojects.com/ +- Twitter: https://twitter.com/PalletsTeam +- Chat: https://discord.gg/pallets + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/RECORD new file mode 100644 index 000000000..bf20e0ee7 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/RECORD @@ -0,0 +1,27 @@ +flask_sqlalchemy-3.1.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +flask_sqlalchemy-3.1.1.dist-info/LICENSE.rst,sha256=SJqOEQhQntmKN7uYPhHg9-HTHwvY-Zp5yESOf_N9B-o,1475 +flask_sqlalchemy-3.1.1.dist-info/METADATA,sha256=lBxR1akBt7n9XBjIVTL2OV52OhCfFrb-Mqtoe0DCbR8,3432 +flask_sqlalchemy-3.1.1.dist-info/RECORD,, +flask_sqlalchemy-3.1.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +flask_sqlalchemy-3.1.1.dist-info/WHEEL,sha256=EZbGkh7Ie4PoZfRQ8I0ZuP9VklN_TvcZ6DSE5Uar4z4,81 +flask_sqlalchemy/__init__.py,sha256=he_w4qQQVS2Z1ms5GCTptDTXNOXBXw0n8zSuWCp8n6Y,653 +flask_sqlalchemy/__pycache__/__init__.cpython-310.pyc,, +flask_sqlalchemy/__pycache__/cli.cpython-310.pyc,, +flask_sqlalchemy/__pycache__/extension.cpython-310.pyc,, +flask_sqlalchemy/__pycache__/model.cpython-310.pyc,, +flask_sqlalchemy/__pycache__/pagination.cpython-310.pyc,, +flask_sqlalchemy/__pycache__/query.cpython-310.pyc,, +flask_sqlalchemy/__pycache__/record_queries.cpython-310.pyc,, +flask_sqlalchemy/__pycache__/session.cpython-310.pyc,, +flask_sqlalchemy/__pycache__/table.cpython-310.pyc,, +flask_sqlalchemy/__pycache__/track_modifications.cpython-310.pyc,, +flask_sqlalchemy/cli.py,sha256=pg3QDxP36GW2qnwe_CpPtkRhPchyVSGM6zlBNWuNCFE,484 +flask_sqlalchemy/extension.py,sha256=71tP_kNtb5VgZdafy_OH1sWdZOA6PaT7cJqX7tKgZ-k,38261 
+flask_sqlalchemy/model.py,sha256=_mSisC2Eni0TgTyFWeN_O4LIexTeP_sVTdxh03yMK50,11461 +flask_sqlalchemy/pagination.py,sha256=JFpllrqkRkwacb8DAmQWaz9wsvQa0dypfSkhUDSC2ws,11119 +flask_sqlalchemy/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +flask_sqlalchemy/query.py,sha256=Uls9qbmnpb9Vba43EDfsRP17eHJ0X4VG7SE22tH5R3g,3748 +flask_sqlalchemy/record_queries.py,sha256=ouS1ayj16h76LJprx13iYdoFZbm6m8OncrOgAVbG1Sk,3520 +flask_sqlalchemy/session.py,sha256=pBbtN8iDc8yuGVt0k18BvZHh2uEI7QPzZXO7eXrRi1g,3426 +flask_sqlalchemy/table.py,sha256=wAPOy8qwyAxpMwOIUJY4iMOultzz2W0D6xvBkQ7U2CE,859 +flask_sqlalchemy/track_modifications.py,sha256=yieyozj7IiVzwnAGZ-ZrgqrzjrUfG0kPrXBfW_hStSU,2755 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/REQUESTED b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/REQUESTED new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/WHEEL new file mode 100644 index 000000000..3b5e64b5e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy-3.1.1.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: flit 3.9.0 +Root-Is-Purelib: true +Tag: py3-none-any diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/__init__.py new file mode 100644 index 000000000..c2fa0591c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/__init__.py @@ -0,0 +1,26 @@ +from __future__ import annotations + +import typing as t + +from .extension import SQLAlchemy + +__all__ = [ + "SQLAlchemy", +] + + +def __getattr__(name: str) -> t.Any: + if name == "__version__": + import importlib.metadata + import warnings + + warnings.warn( + "The '__version__' attribute is deprecated and will be removed in" + " Flask-SQLAlchemy 3.2. Use feature detection or" + " 'importlib.metadata.version(\"flask-sqlalchemy\")' instead.", + DeprecationWarning, + stacklevel=2, + ) + return importlib.metadata.version("flask-sqlalchemy") + + raise AttributeError(name) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/cli.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/cli.py new file mode 100644 index 000000000..d7d7e4bef --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/cli.py @@ -0,0 +1,16 @@ +from __future__ import annotations + +import typing as t + +from flask import current_app + + +def add_models_to_shell() -> dict[str, t.Any]: + """Registered with :meth:`~flask.Flask.shell_context_processor` if + ``add_models_to_shell`` is enabled. Adds the ``db`` instance and all model classes + to ``flask shell``. 
+ """ + db = current_app.extensions["sqlalchemy"] + out = {m.class_.__name__: m.class_ for m in db.Model._sa_registry.mappers} + out["db"] = db + return out diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/extension.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/extension.py new file mode 100644 index 000000000..43e1b9a44 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/extension.py @@ -0,0 +1,1008 @@ +from __future__ import annotations + +import os +import types +import typing as t +import warnings +from weakref import WeakKeyDictionary + +import sqlalchemy as sa +import sqlalchemy.event as sa_event +import sqlalchemy.exc as sa_exc +import sqlalchemy.orm as sa_orm +from flask import abort +from flask import current_app +from flask import Flask +from flask import has_app_context + +from .model import _QueryProperty +from .model import BindMixin +from .model import DefaultMeta +from .model import DefaultMetaNoName +from .model import Model +from .model import NameMixin +from .pagination import Pagination +from .pagination import SelectPagination +from .query import Query +from .session import _app_ctx_id +from .session import Session +from .table import _Table + +_O = t.TypeVar("_O", bound=object) # Based on sqlalchemy.orm._typing.py + + +# Type accepted for model_class argument +_FSA_MCT = t.TypeVar( + "_FSA_MCT", + bound=t.Union[ + t.Type[Model], + sa_orm.DeclarativeMeta, + t.Type[sa_orm.DeclarativeBase], + t.Type[sa_orm.DeclarativeBaseNoMeta], + ], +) + + +# Type returned by make_declarative_base +class _FSAModel(Model): + metadata: sa.MetaData + + +def _get_2x_declarative_bases( + model_class: _FSA_MCT, +) -> list[t.Type[t.Union[sa_orm.DeclarativeBase, sa_orm.DeclarativeBaseNoMeta]]]: + return [ + b + for b in model_class.__bases__ + if issubclass(b, (sa_orm.DeclarativeBase, sa_orm.DeclarativeBaseNoMeta)) + ] + + +class SQLAlchemy: + """Integrates SQLAlchemy with Flask. This handles setting up one or more engines, + associating tables and models with specific engines, and cleaning up connections and + sessions after each request. + + Only the engine configuration is specific to each application, other things like + the model, table, metadata, and session are shared for all applications using that + extension instance. Call :meth:`init_app` to configure the extension on an + application. + + After creating the extension, create model classes by subclassing :attr:`Model`, and + table classes with :attr:`Table`. These can be accessed before :meth:`init_app` is + called, making it possible to define the models separately from the application. + + Accessing :attr:`session` and :attr:`engine` requires an active Flask application + context. This includes methods like :meth:`create_all` which use the engine. + + This class also provides access to names in SQLAlchemy's ``sqlalchemy`` and + ``sqlalchemy.orm`` modules. For example, you can use ``db.Column`` and + ``db.relationship`` instead of importing ``sqlalchemy.Column`` and + ``sqlalchemy.orm.relationship``. This can be convenient when defining models. + + :param app: Call :meth:`init_app` on this Flask application now. + :param metadata: Use this as the default :class:`sqlalchemy.schema.MetaData`. Useful + for setting a naming convention. + :param session_options: Arguments used by :attr:`session` to create each session + instance. A ``scopefunc`` key will be passed to the scoped session, not the + session instance. 
See :class:`sqlalchemy.orm.sessionmaker` for a list of
+        arguments.
+    :param query_class: Use this as the default query class for models and dynamic
+        relationships. The query interface is considered legacy in SQLAlchemy.
+    :param model_class: Use this as the model base class when creating the declarative
+        model class :attr:`Model`. Can also be a fully created declarative model class
+        for further customization.
+    :param engine_options: Default arguments used when creating every engine. These are
+        lower precedence than application config. See :func:`sqlalchemy.create_engine`
+        for a list of arguments.
+    :param add_models_to_shell: Add the ``db`` instance and all model classes to
+        ``flask shell``.
+
+    .. versionchanged:: 3.1.0
+        The ``metadata`` parameter can still be used with SQLAlchemy 1.x classes,
+        but is ignored when using SQLAlchemy 2.x style of declarative classes.
+        Instead, specify metadata on your Base class.
+
+    .. versionchanged:: 3.1.0
+        Added the ``disable_autonaming`` parameter.
+
+    .. versionchanged:: 3.1.0
+        Changed ``model_class`` parameter to accept a SQLAlchemy 2.x
+        declarative base subclass.
+
+    .. versionchanged:: 3.0
+        An active Flask application context is always required to access ``session`` and
+        ``engine``.
+
+    .. versionchanged:: 3.0
+        Separate ``metadata`` are used for each bind key.
+
+    .. versionchanged:: 3.0
+        The ``engine_options`` parameter is applied as defaults before per-engine
+        configuration.
+
+    .. versionchanged:: 3.0
+        The session class can be customized in ``session_options``.
+
+    .. versionchanged:: 3.0
+        Added the ``add_models_to_shell`` parameter.
+
+    .. versionchanged:: 3.0
+        Engines are created when calling ``init_app`` rather than the first time they
+        are accessed.
+
+    .. versionchanged:: 3.0
+        All parameters except ``app`` are keyword-only.
+
+    .. versionchanged:: 3.0
+        The extension instance is stored directly as ``app.extensions["sqlalchemy"]``.
+
+    .. versionchanged:: 3.0
+        Setup methods are renamed with a leading underscore. They are considered
+        internal interfaces which may change at any time.
+
+    .. versionchanged:: 3.0
+        Removed the ``use_native_unicode`` parameter and config.
+
+    .. versionchanged:: 2.4
+        Added the ``engine_options`` parameter.
+
+    .. versionchanged:: 2.1
+        Added the ``metadata``, ``query_class``, and ``model_class`` parameters.
+
+    .. versionchanged:: 2.1
+        Use the same query class across ``session``, ``Model.query`` and
+        ``Query``.
+
+    .. versionchanged:: 0.16
+        ``scopefunc`` is accepted in ``session_options``.
+
+    .. versionchanged:: 0.10
+        Added the ``session_options`` parameter.
+    """
+
+    def __init__(
+        self,
+        app: Flask | None = None,
+        *,
+        metadata: sa.MetaData | None = None,
+        session_options: dict[str, t.Any] | None = None,
+        query_class: type[Query] = Query,
+        model_class: _FSA_MCT = Model,  # type: ignore[assignment]
+        engine_options: dict[str, t.Any] | None = None,
+        add_models_to_shell: bool = True,
+        disable_autonaming: bool = False,
+    ):
+        if session_options is None:
+            session_options = {}
+
+        self.Query = query_class
+        """The default query class used by ``Model.query`` and ``lazy="dynamic"``
+        relationships.
+
+        .. warning::
+            The query interface is considered legacy in SQLAlchemy.
+
+        Customize this by passing the ``query_class`` parameter to the extension.
+        """
+
+        self.session = self._make_scoped_session(session_options)
+        """A :class:`sqlalchemy.orm.scoping.scoped_session` that creates instances of
+        :class:`.Session` scoped to the current Flask application context. 
The session + will be removed, returning the engine connection to the pool, when the + application context exits. + + Customize this by passing ``session_options`` to the extension. + + This requires that a Flask application context is active. + + .. versionchanged:: 3.0 + The session is scoped to the current app context. + """ + + self.metadatas: dict[str | None, sa.MetaData] = {} + """Map of bind keys to :class:`sqlalchemy.schema.MetaData` instances. The + ``None`` key refers to the default metadata, and is available as + :attr:`metadata`. + + Customize the default metadata by passing the ``metadata`` parameter to the + extension. This can be used to set a naming convention. When metadata for + another bind key is created, it copies the default's naming convention. + + .. versionadded:: 3.0 + """ + + if metadata is not None: + if len(_get_2x_declarative_bases(model_class)) > 0: + warnings.warn( + "When using SQLAlchemy 2.x style of declarative classes," + " the `metadata` should be an attribute of the base class." + "The metadata passed into SQLAlchemy() is ignored.", + DeprecationWarning, + stacklevel=2, + ) + else: + metadata.info["bind_key"] = None + self.metadatas[None] = metadata + + self.Table = self._make_table_class() + """A :class:`sqlalchemy.schema.Table` class that chooses a metadata + automatically. + + Unlike the base ``Table``, the ``metadata`` argument is not required. If it is + not given, it is selected based on the ``bind_key`` argument. + + :param bind_key: Used to select a different metadata. + :param args: Arguments passed to the base class. These are typically the table's + name, columns, and constraints. + :param kwargs: Arguments passed to the base class. + + .. versionchanged:: 3.0 + This is a subclass of SQLAlchemy's ``Table`` rather than a function. + """ + + self.Model = self._make_declarative_base( + model_class, disable_autonaming=disable_autonaming + ) + """A SQLAlchemy declarative model class. Subclass this to define database + models. + + If a model does not set ``__tablename__``, it will be generated by converting + the class name from ``CamelCase`` to ``snake_case``. It will not be generated + if the model looks like it uses single-table inheritance. + + If a model or parent class sets ``__bind_key__``, it will use that metadata and + database engine. Otherwise, it will use the default :attr:`metadata` and + :attr:`engine`. This is ignored if the model sets ``metadata`` or ``__table__``. + + For code using the SQLAlchemy 1.x API, customize this model by subclassing + :class:`.Model` and passing the ``model_class`` parameter to the extension. + A fully created declarative model class can be + passed as well, to use a custom metaclass. + + For code using the SQLAlchemy 2.x API, customize this model by subclassing + :class:`sqlalchemy.orm.DeclarativeBase` or + :class:`sqlalchemy.orm.DeclarativeBaseNoMeta` + and passing the ``model_class`` parameter to the extension. 
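To ground the `Model` docstring above, here is a hedged sketch of the two behaviors it describes: generated ``snake_case`` table names and per-model ``__bind_key__`` routing (the database URIs, bind key, and model names are illustrative, not taken from this diff):

```python
# Sketch of the Model behaviors documented above: automatic snake_case table
# naming and per-model bind keys. URIs and names are placeholders.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///main.db"
app.config["SQLALCHEMY_BINDS"] = {"users": "sqlite:///users.db"}
db = SQLAlchemy(app)


class BlogPost(db.Model):
    # no __tablename__ set: generated as "blog_post" from the class name
    id = db.Column(db.Integer, primary_key=True)


class User(db.Model):
    # __bind_key__ routes this model's table to the "users" engine/metadata
    __bind_key__ = "users"
    id = db.Column(db.Integer, primary_key=True)


with app.app_context():
    db.create_all()  # creates tables on both binds
```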
+ """ + + if engine_options is None: + engine_options = {} + + self._engine_options = engine_options + self._app_engines: WeakKeyDictionary[Flask, dict[str | None, sa.engine.Engine]] + self._app_engines = WeakKeyDictionary() + self._add_models_to_shell = add_models_to_shell + + if app is not None: + self.init_app(app) + + def __repr__(self) -> str: + if not has_app_context(): + return f"<{type(self).__name__}>" + + message = f"{type(self).__name__} {self.engine.url}" + + if len(self.engines) > 1: + message = f"{message} +{len(self.engines) - 1}" + + return f"<{message}>" + + def init_app(self, app: Flask) -> None: + """Initialize a Flask application for use with this extension instance. This + must be called before accessing the database engine or session with the app. + + This sets default configuration values, then configures the extension on the + application and creates the engines for each bind key. Therefore, this must be + called after the application has been configured. Changes to application config + after this call will not be reflected. + + The following keys from ``app.config`` are used: + + - :data:`.SQLALCHEMY_DATABASE_URI` + - :data:`.SQLALCHEMY_ENGINE_OPTIONS` + - :data:`.SQLALCHEMY_ECHO` + - :data:`.SQLALCHEMY_BINDS` + - :data:`.SQLALCHEMY_RECORD_QUERIES` + - :data:`.SQLALCHEMY_TRACK_MODIFICATIONS` + + :param app: The Flask application to initialize. + """ + if "sqlalchemy" in app.extensions: + raise RuntimeError( + "A 'SQLAlchemy' instance has already been registered on this Flask app." + " Import and use that instance instead." + ) + + app.extensions["sqlalchemy"] = self + app.teardown_appcontext(self._teardown_session) + + if self._add_models_to_shell: + from .cli import add_models_to_shell + + app.shell_context_processor(add_models_to_shell) + + basic_uri: str | sa.engine.URL | None = app.config.setdefault( + "SQLALCHEMY_DATABASE_URI", None + ) + basic_engine_options = self._engine_options.copy() + basic_engine_options.update( + app.config.setdefault("SQLALCHEMY_ENGINE_OPTIONS", {}) + ) + echo: bool = app.config.setdefault("SQLALCHEMY_ECHO", False) + config_binds: dict[ + str | None, str | sa.engine.URL | dict[str, t.Any] + ] = app.config.setdefault("SQLALCHEMY_BINDS", {}) + engine_options: dict[str | None, dict[str, t.Any]] = {} + + # Build the engine config for each bind key. + for key, value in config_binds.items(): + engine_options[key] = self._engine_options.copy() + + if isinstance(value, (str, sa.engine.URL)): + engine_options[key]["url"] = value + else: + engine_options[key].update(value) + + # Build the engine config for the default bind key. + if basic_uri is not None: + basic_engine_options["url"] = basic_uri + + if "url" in basic_engine_options: + engine_options.setdefault(None, {}).update(basic_engine_options) + + if not engine_options: + raise RuntimeError( + "Either 'SQLALCHEMY_DATABASE_URI' or 'SQLALCHEMY_BINDS' must be set." + ) + + engines = self._app_engines.setdefault(app, {}) + + # Dispose existing engines in case init_app is called again. + if engines: + for engine in engines.values(): + engine.dispose() + + engines.clear() + + # Create the metadata and engine for each bind key. + for key, options in engine_options.items(): + self._make_metadata(key) + options.setdefault("echo", echo) + options.setdefault("echo_pool", echo) + self._apply_driver_defaults(options, app) + engines[key] = self._make_engine(key, options, app) + + if app.config.setdefault("SQLALCHEMY_RECORD_QUERIES", False): + from . 
import record_queries + + for engine in engines.values(): + record_queries._listen(engine) + + if app.config.setdefault("SQLALCHEMY_TRACK_MODIFICATIONS", False): + from . import track_modifications + + track_modifications._listen(self.session) + + def _make_scoped_session( + self, options: dict[str, t.Any] + ) -> sa_orm.scoped_session[Session]: + """Create a :class:`sqlalchemy.orm.scoping.scoped_session` around the factory + from :meth:`_make_session_factory`. The result is available as :attr:`session`. + + The scope function can be customized using the ``scopefunc`` key in the + ``session_options`` parameter to the extension. By default it uses the current + thread or greenlet id. + + This method is used for internal setup. Its signature may change at any time. + + :meta private: + + :param options: The ``session_options`` parameter from ``__init__``. Keyword + arguments passed to the session factory. A ``scopefunc`` key is popped. + + .. versionchanged:: 3.0 + The session is scoped to the current app context. + + .. versionchanged:: 3.0 + Renamed from ``create_scoped_session``, this method is internal. + """ + scope = options.pop("scopefunc", _app_ctx_id) + factory = self._make_session_factory(options) + return sa_orm.scoped_session(factory, scope) + + def _make_session_factory( + self, options: dict[str, t.Any] + ) -> sa_orm.sessionmaker[Session]: + """Create the SQLAlchemy :class:`sqlalchemy.orm.sessionmaker` used by + :meth:`_make_scoped_session`. + + To customize, pass the ``session_options`` parameter to :class:`SQLAlchemy`. To + customize the session class, subclass :class:`.Session` and pass it as the + ``class_`` key. + + This method is used for internal setup. Its signature may change at any time. + + :meta private: + + :param options: The ``session_options`` parameter from ``__init__``. Keyword + arguments passed to the session factory. + + .. versionchanged:: 3.0 + The session class can be customized. + + .. versionchanged:: 3.0 + Renamed from ``create_session``, this method is internal. + """ + options.setdefault("class_", Session) + options.setdefault("query_cls", self.Query) + return sa_orm.sessionmaker(db=self, **options) + + def _teardown_session(self, exc: BaseException | None) -> None: + """Remove the current session at the end of the request. + + :meta private: + + .. versionadded:: 3.0 + """ + self.session.remove() + + def _make_metadata(self, bind_key: str | None) -> sa.MetaData: + """Get or create a :class:`sqlalchemy.schema.MetaData` for the given bind key. + + This method is used for internal setup. Its signature may change at any time. + + :meta private: + + :param bind_key: The name of the metadata being created. + + .. versionadded:: 3.0 + """ + if bind_key in self.metadatas: + return self.metadatas[bind_key] + + if bind_key is not None: + # Copy the naming convention from the default metadata. + naming_convention = self._make_metadata(None).naming_convention + else: + naming_convention = None + + # Set the bind key in info to be used by session.get_bind. + metadata = sa.MetaData( + naming_convention=naming_convention, info={"bind_key": bind_key} + ) + self.metadatas[bind_key] = metadata + return metadata + + def _make_table_class(self) -> type[_Table]: + """Create a SQLAlchemy :class:`sqlalchemy.schema.Table` class that chooses a + metadata automatically based on the ``bind_key``. The result is available as + :attr:`Table`. + + This method is used for internal setup. Its signature may change at any time. + + :meta private: + + .. 
versionadded:: 3.0 + """ + + class Table(_Table): + def __new__( + cls, *args: t.Any, bind_key: str | None = None, **kwargs: t.Any + ) -> Table: + # If a metadata arg is passed, go directly to the base Table. Also do + # this for no args so the correct error is shown. + if not args or (len(args) >= 2 and isinstance(args[1], sa.MetaData)): + return super().__new__(cls, *args, **kwargs) + + metadata = self._make_metadata(bind_key) + return super().__new__(cls, *[args[0], metadata, *args[1:]], **kwargs) + + return Table + + def _make_declarative_base( + self, + model_class: _FSA_MCT, + disable_autonaming: bool = False, + ) -> t.Type[_FSAModel]: + """Create a SQLAlchemy declarative model class. The result is available as + :attr:`Model`. + + To customize, subclass :class:`.Model` and pass it as ``model_class`` to + :class:`SQLAlchemy`. To customize at the metaclass level, pass an already + created declarative model class as ``model_class``. + + This method is used for internal setup. Its signature may change at any time. + + :meta private: + + :param model_class: A model base class, or an already created declarative model + class. + + :param disable_autonaming: Turns off automatic tablename generation in models. + + .. versionchanged:: 3.1.0 + Added support for passing SQLAlchemy 2.x base class as model class. + Added optional ``disable_autonaming`` parameter. + + .. versionchanged:: 3.0 + Renamed with a leading underscore, this method is internal. + + .. versionchanged:: 2.3 + ``model`` can be an already created declarative model class. + """ + model: t.Type[_FSAModel] + declarative_bases = _get_2x_declarative_bases(model_class) + if len(declarative_bases) > 1: + # raise error if more than one declarative base is found + raise ValueError( + "Only one declarative base can be passed to SQLAlchemy." + " Got: {}".format(model_class.__bases__) + ) + elif len(declarative_bases) == 1: + body = dict(model_class.__dict__) + body["__fsa__"] = self + mixin_classes = [BindMixin, NameMixin, Model] + if disable_autonaming: + mixin_classes.remove(NameMixin) + model = types.new_class( + "FlaskSQLAlchemyBase", + (*mixin_classes, *model_class.__bases__), + {"metaclass": type(declarative_bases[0])}, + lambda ns: ns.update(body), + ) + elif not isinstance(model_class, sa_orm.DeclarativeMeta): + metadata = self._make_metadata(None) + metaclass = DefaultMetaNoName if disable_autonaming else DefaultMeta + model = sa_orm.declarative_base( + metadata=metadata, cls=model_class, name="Model", metaclass=metaclass + ) + else: + model = model_class # type: ignore[assignment] + + if None not in self.metadatas: + # Use the model's metadata as the default metadata. + model.metadata.info["bind_key"] = None + self.metadatas[None] = model.metadata + else: + # Use the passed in default metadata as the model's metadata. + model.metadata = self.metadatas[None] + + model.query_class = self.Query + model.query = _QueryProperty() # type: ignore[assignment] + model.__fsa__ = self + return model + + def _apply_driver_defaults(self, options: dict[str, t.Any], app: Flask) -> None: + """Apply driver-specific configuration to an engine. + + SQLite in-memory databases use ``StaticPool`` and disable ``check_same_thread``. + File paths are relative to the app's :attr:`~flask.Flask.instance_path`, + which is created if it doesn't exist. + + MySQL sets ``charset="utf8mb4"``, and ``pool_timeout`` defaults to 2 hours. + + This method is used for internal setup. Its signature may change at any time. 
+ + :meta private: + + :param options: Arguments passed to the engine. + :param app: The application that the engine configuration belongs to. + + .. versionchanged:: 3.0 + SQLite paths are relative to ``app.instance_path``. It does not use + ``NullPool`` if ``pool_size`` is 0. Driver-level URIs are supported. + + .. versionchanged:: 3.0 + MySQL sets ``charset="utf8mb4". It does not set ``pool_size`` to 10. It + does not set ``pool_recycle`` if not using a queue pool. + + .. versionchanged:: 3.0 + Renamed from ``apply_driver_hacks``, this method is internal. It does not + return anything. + + .. versionchanged:: 2.5 + Returns ``(sa_url, options)``. + """ + url = sa.engine.make_url(options["url"]) + + if url.drivername in {"sqlite", "sqlite+pysqlite"}: + if url.database is None or url.database in {"", ":memory:"}: + options["poolclass"] = sa.pool.StaticPool + + if "connect_args" not in options: + options["connect_args"] = {} + + options["connect_args"]["check_same_thread"] = False + else: + # the url might look like sqlite:///file:path?uri=true + is_uri = url.query.get("uri", False) + + if is_uri: + db_str = url.database[5:] + else: + db_str = url.database + + if not os.path.isabs(db_str): + os.makedirs(app.instance_path, exist_ok=True) + db_str = os.path.join(app.instance_path, db_str) + + if is_uri: + db_str = f"file:{db_str}" + + options["url"] = url.set(database=db_str) + elif url.drivername.startswith("mysql"): + # set queue defaults only when using queue pool + if ( + "pool_class" not in options + or options["pool_class"] is sa.pool.QueuePool + ): + options.setdefault("pool_recycle", 7200) + + if "charset" not in url.query: + options["url"] = url.update_query_dict({"charset": "utf8mb4"}) + + def _make_engine( + self, bind_key: str | None, options: dict[str, t.Any], app: Flask + ) -> sa.engine.Engine: + """Create the :class:`sqlalchemy.engine.Engine` for the given bind key and app. + + To customize, use :data:`.SQLALCHEMY_ENGINE_OPTIONS` or + :data:`.SQLALCHEMY_BINDS` config. Pass ``engine_options`` to :class:`SQLAlchemy` + to set defaults for all engines. + + This method is used for internal setup. Its signature may change at any time. + + :meta private: + + :param bind_key: The name of the engine being created. + :param options: Arguments passed to the engine. + :param app: The application that the engine configuration belongs to. + + .. versionchanged:: 3.0 + Renamed from ``create_engine``, this method is internal. + """ + return sa.engine_from_config(options, prefix="") + + @property + def metadata(self) -> sa.MetaData: + """The default metadata used by :attr:`Model` and :attr:`Table` if no bind key + is set. + """ + return self.metadatas[None] + + @property + def engines(self) -> t.Mapping[str | None, sa.engine.Engine]: + """Map of bind keys to :class:`sqlalchemy.engine.Engine` instances for current + application. The ``None`` key refers to the default engine, and is available as + :attr:`engine`. + + To customize, set the :data:`.SQLALCHEMY_BINDS` config, and set defaults by + passing the ``engine_options`` parameter to the extension. + + This requires that a Flask application context is active. + + .. versionadded:: 3.0 + """ + app = current_app._get_current_object() # type: ignore[attr-defined] + + if app not in self._app_engines: + raise RuntimeError( + "The current Flask app is not registered with this 'SQLAlchemy'" + " instance. Did you forget to call 'init_app', or did you create" + " multiple 'SQLAlchemy' instances?" 
+            )
+
+        return self._app_engines[app]
+
+    @property
+    def engine(self) -> sa.engine.Engine:
+        """The default :class:`~sqlalchemy.engine.Engine` for the current application,
+        used by :attr:`session` if the :attr:`Model` or :attr:`Table` being queried does
+        not set a bind key.
+
+        To customize, set the :data:`.SQLALCHEMY_ENGINE_OPTIONS` config, and set
+        defaults by passing the ``engine_options`` parameter to the extension.
+
+        This requires that a Flask application context is active.
+        """
+        return self.engines[None]
+
+    def get_engine(
+        self, bind_key: str | None = None, **kwargs: t.Any
+    ) -> sa.engine.Engine:
+        """Get the engine for the given bind key for the current application.
+        This requires that a Flask application context is active.
+
+        :param bind_key: The name of the engine.
+
+        .. deprecated:: 3.0
+            Will be removed in Flask-SQLAlchemy 3.2. Use ``engines[key]`` instead.
+
+        .. versionchanged:: 3.0
+            Renamed the ``bind`` parameter to ``bind_key``. Removed the ``app``
+            parameter.
+        """
+        warnings.warn(
+            "'get_engine' is deprecated and will be removed in Flask-SQLAlchemy"
+            " 3.2. Use 'engine' or 'engines[key]' instead. If you're using"
+            " Flask-Migrate or Alembic, you'll need to update your 'env.py' file.",
+            DeprecationWarning,
+            stacklevel=2,
+        )
+
+        if "bind" in kwargs:
+            bind_key = kwargs.pop("bind")
+
+        return self.engines[bind_key]
+
+    def get_or_404(
+        self,
+        entity: type[_O],
+        ident: t.Any,
+        *,
+        description: str | None = None,
+        **kwargs: t.Any,
+    ) -> _O:
+        """Like :meth:`session.get() <sqlalchemy.orm.Session.get>` but aborts with a
+        ``404 Not Found`` error instead of returning ``None``.
+
+        :param entity: The model class to query.
+        :param ident: The primary key to query.
+        :param description: A custom message to show on the error page.
+        :param kwargs: Extra arguments passed to ``session.get()``.
+
+        .. versionchanged:: 3.1
+            Pass extra keyword arguments to ``session.get()``.
+
+        .. versionadded:: 3.0
+        """
+        value = self.session.get(entity, ident, **kwargs)
+
+        if value is None:
+            abort(404, description=description)
+
+        return value
+
+    def first_or_404(
+        self, statement: sa.sql.Select[t.Any], *, description: str | None = None
+    ) -> t.Any:
+        """Like :meth:`Result.scalar() <sqlalchemy.engine.Result.scalar>`, but aborts
+        with a ``404 Not Found`` error instead of returning ``None``.
+
+        :param statement: The ``select`` statement to execute.
+        :param description: A custom message to show on the error page.
+
+        .. versionadded:: 3.0
+        """
+        value = self.session.execute(statement).scalar()
+
+        if value is None:
+            abort(404, description=description)
+
+        return value
+
+    def one_or_404(
+        self, statement: sa.sql.Select[t.Any], *, description: str | None = None
+    ) -> t.Any:
+        """Like :meth:`Result.scalar_one() <sqlalchemy.engine.Result.scalar_one>`,
+        but aborts with a ``404 Not Found`` error instead of raising ``NoResultFound``
+        or ``MultipleResultsFound``.
+
+        :param statement: The ``select`` statement to execute.
+        :param description: A custom message to show on the error page.
+
+        .. versionadded:: 3.0
+        """
+        try:
+            return self.session.execute(statement).scalar_one()
+        except (sa_exc.NoResultFound, sa_exc.MultipleResultsFound):
+            abort(404, description=description)
+
+    def paginate(
+        self,
+        select: sa.sql.Select[t.Any],
+        *,
+        page: int | None = None,
+        per_page: int | None = None,
+        max_per_page: int | None = None,
+        error_out: bool = True,
+        count: bool = True,
+    ) -> Pagination:
+        """Apply an offset and limit to a select statement based on the current page and
+        number of items per page, returning a :class:`.Pagination` object. 
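A hypothetical route using `paginate()` as documented above, reusing the `app`, `db`, and `User` objects from the earlier model sketch (the endpoint and response shape are illustrative):

```python
# Hypothetical usage of db.paginate(); assumes the app/db/User defined in the
# earlier sketch. The select must target a model class, e.g. select(User).
from flask import request


@app.route("/users")
def user_index():
    pagination = db.paginate(
        db.select(User).order_by(User.id),
        page=request.args.get("page", 1, type=int),
        per_page=20,     # items per page, combined with page to compute the offset
        error_out=True,  # 404 on out-of-range or malformed page arguments
    )
    return {
        "items": [user.id for user in pagination.items],
        "total": pagination.total,
        "pages": pagination.pages,
    }
```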
+ + The statement should select a model class, like ``select(User)``. This applies + ``unique()`` and ``scalars()`` modifiers to the result, so compound selects will + not return the expected results. + + :param select: The ``select`` statement to paginate. + :param page: The current page, used to calculate the offset. Defaults to the + ``page`` query arg during a request, or 1 otherwise. + :param per_page: The maximum number of items on a page, used to calculate the + offset and limit. Defaults to the ``per_page`` query arg during a request, + or 20 otherwise. + :param max_per_page: The maximum allowed value for ``per_page``, to limit a + user-provided value. Use ``None`` for no limit. Defaults to 100. + :param error_out: Abort with a ``404 Not Found`` error if no items are returned + and ``page`` is not 1, or if ``page`` or ``per_page`` is less than 1, or if + either are not ints. + :param count: Calculate the total number of values by issuing an extra count + query. For very complex queries this may be inaccurate or slow, so it can be + disabled and set manually if necessary. + + .. versionchanged:: 3.0 + The ``count`` query is more efficient. + + .. versionadded:: 3.0 + """ + return SelectPagination( + select=select, + session=self.session(), + page=page, + per_page=per_page, + max_per_page=max_per_page, + error_out=error_out, + count=count, + ) + + def _call_for_binds( + self, bind_key: str | None | list[str | None], op_name: str + ) -> None: + """Call a method on each metadata. + + :meta private: + + :param bind_key: A bind key or list of keys. Defaults to all binds. + :param op_name: The name of the method to call. + + .. versionchanged:: 3.0 + Renamed from ``_execute_for_all_tables``. + """ + if bind_key == "__all__": + keys: list[str | None] = list(self.metadatas) + elif bind_key is None or isinstance(bind_key, str): + keys = [bind_key] + else: + keys = bind_key + + for key in keys: + try: + engine = self.engines[key] + except KeyError: + message = f"Bind key '{key}' is not in 'SQLALCHEMY_BINDS' config." + + if key is None: + message = f"'SQLALCHEMY_DATABASE_URI' config is not set. {message}" + + raise sa_exc.UnboundExecutionError(message) from None + + metadata = self.metadatas[key] + getattr(metadata, op_name)(bind=engine) + + def create_all(self, bind_key: str | None | list[str | None] = "__all__") -> None: + """Create tables that do not exist in the database by calling + ``metadata.create_all()`` for all or some bind keys. This does not + update existing tables, use a migration library for that. + + This requires that a Flask application context is active. + + :param bind_key: A bind key or list of keys to create the tables for. Defaults + to all binds. + + .. versionchanged:: 3.0 + Renamed the ``bind`` parameter to ``bind_key``. Removed the ``app`` + parameter. + + .. versionchanged:: 0.12 + Added the ``bind`` and ``app`` parameters. + """ + self._call_for_binds(bind_key, "create_all") + + def drop_all(self, bind_key: str | None | list[str | None] = "__all__") -> None: + """Drop tables by calling ``metadata.drop_all()`` for all or some bind keys. + + This requires that a Flask application context is active. + + :param bind_key: A bind key or list of keys to drop the tables from. Defaults to + all binds. + + .. versionchanged:: 3.0 + Renamed the ``bind`` parameter to ``bind_key``. Removed the ``app`` + parameter. + + .. versionchanged:: 0.12 + Added the ``bind`` and ``app`` parameters. 
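The ``bind_key`` parameter shown in `create_all`/`drop_all` above accepts the default ``"__all__"``, a single key (``None`` meaning the default engine), or a list of keys. Continuing the hypothetical two-bind setup from the earlier sketches:

```python
# Continuing the hypothetical app/db with a 'users' bind from earlier sketches.
with app.app_context():
    db.create_all()                   # all binds (the "__all__" default)
    db.create_all(bind_key=None)      # only the default engine
    db.drop_all(bind_key=["users"])   # only the hypothetical 'users' bind
```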
+ """ + self._call_for_binds(bind_key, "drop_all") + + def reflect(self, bind_key: str | None | list[str | None] = "__all__") -> None: + """Load table definitions from the database by calling ``metadata.reflect()`` + for all or some bind keys. + + This requires that a Flask application context is active. + + :param bind_key: A bind key or list of keys to reflect the tables from. Defaults + to all binds. + + .. versionchanged:: 3.0 + Renamed the ``bind`` parameter to ``bind_key``. Removed the ``app`` + parameter. + + .. versionchanged:: 0.12 + Added the ``bind`` and ``app`` parameters. + """ + self._call_for_binds(bind_key, "reflect") + + def _set_rel_query(self, kwargs: dict[str, t.Any]) -> None: + """Apply the extension's :attr:`Query` class as the default for relationships + and backrefs. + + :meta private: + """ + kwargs.setdefault("query_class", self.Query) + + if "backref" in kwargs: + backref = kwargs["backref"] + + if isinstance(backref, str): + backref = (backref, {}) + + backref[1].setdefault("query_class", self.Query) + + def relationship( + self, *args: t.Any, **kwargs: t.Any + ) -> sa_orm.RelationshipProperty[t.Any]: + """A :func:`sqlalchemy.orm.relationship` that applies this extension's + :attr:`Query` class for dynamic relationships and backrefs. + + .. versionchanged:: 3.0 + The :attr:`Query` class is set on ``backref``. + """ + self._set_rel_query(kwargs) + return sa_orm.relationship(*args, **kwargs) + + def dynamic_loader( + self, argument: t.Any, **kwargs: t.Any + ) -> sa_orm.RelationshipProperty[t.Any]: + """A :func:`sqlalchemy.orm.dynamic_loader` that applies this extension's + :attr:`Query` class for relationships and backrefs. + + .. versionchanged:: 3.0 + The :attr:`Query` class is set on ``backref``. + """ + self._set_rel_query(kwargs) + return sa_orm.dynamic_loader(argument, **kwargs) + + def _relation( + self, *args: t.Any, **kwargs: t.Any + ) -> sa_orm.RelationshipProperty[t.Any]: + """A :func:`sqlalchemy.orm.relationship` that applies this extension's + :attr:`Query` class for dynamic relationships and backrefs. + + SQLAlchemy 2.0 removes this name, use ``relationship`` instead. + + :meta private: + + .. versionchanged:: 3.0 + The :attr:`Query` class is set on ``backref``. + """ + self._set_rel_query(kwargs) + f = sa_orm.relationship + return f(*args, **kwargs) + + def __getattr__(self, name: str) -> t.Any: + if name == "relation": + return self._relation + + if name == "event": + return sa_event + + if name.startswith("_"): + raise AttributeError(name) + + for mod in (sa, sa_orm): + if hasattr(mod, name): + return getattr(mod, name) + + raise AttributeError(name) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/model.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/model.py new file mode 100644 index 000000000..c6f9e5a92 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/model.py @@ -0,0 +1,330 @@ +from __future__ import annotations + +import re +import typing as t + +import sqlalchemy as sa +import sqlalchemy.orm as sa_orm + +from .query import Query + +if t.TYPE_CHECKING: + from .extension import SQLAlchemy + + +class _QueryProperty: + """A class property that creates a query object for a model. 
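The `__getattr__` fallback above is what makes `db.Column`, `db.select`, and friends resolve to SQLAlchemy itself; a quick sketch:

```python
import sqlalchemy as sa
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

# Unknown attributes fall through to sqlalchemy / sqlalchemy.orm:
assert db.Column is sa.Column
assert db.select is sa.select
assert db.ForeignKey is sa.ForeignKey
```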
+ + :meta private: + """ + + def __get__(self, obj: Model | None, cls: type[Model]) -> Query: + return cls.query_class( + cls, session=cls.__fsa__.session() # type: ignore[arg-type] + ) + + +class Model: + """The base class of the :attr:`.SQLAlchemy.Model` declarative model class. + + To define models, subclass :attr:`db.Model <.SQLAlchemy.Model>`, not this. To + customize ``db.Model``, subclass this and pass it as ``model_class`` to + :class:`.SQLAlchemy`. To customize ``db.Model`` at the metaclass level, pass an + already created declarative model class as ``model_class``. + """ + + __fsa__: t.ClassVar[SQLAlchemy] + """Internal reference to the extension object. + + :meta private: + """ + + query_class: t.ClassVar[type[Query]] = Query + """Query class used by :attr:`query`. Defaults to :attr:`.SQLAlchemy.Query`, which + defaults to :class:`.Query`. + """ + + query: t.ClassVar[Query] = _QueryProperty() # type: ignore[assignment] + """A SQLAlchemy query for a model. Equivalent to ``db.session.query(Model)``. Can be + customized per-model by overriding :attr:`query_class`. + + .. warning:: + The query interface is considered legacy in SQLAlchemy. Prefer using + ``session.execute(select())`` instead. + """ + + def __repr__(self) -> str: + state = sa.inspect(self) + assert state is not None + + if state.transient: + pk = f"(transient {id(self)})" + elif state.pending: + pk = f"(pending {id(self)})" + else: + pk = ", ".join(map(str, state.identity)) + + return f"<{type(self).__name__} {pk}>" + + +class BindMetaMixin(type): + """Metaclass mixin that sets a model's ``metadata`` based on its ``__bind_key__``. + + If the model sets ``metadata`` or ``__table__`` directly, ``__bind_key__`` is + ignored. If the ``metadata`` is the same as the parent model, it will not be set + directly on the child model. + """ + + __fsa__: SQLAlchemy + metadata: sa.MetaData + + def __init__( + cls, name: str, bases: tuple[type, ...], d: dict[str, t.Any], **kwargs: t.Any + ) -> None: + if not ("metadata" in cls.__dict__ or "__table__" in cls.__dict__): + bind_key = getattr(cls, "__bind_key__", None) + parent_metadata = getattr(cls, "metadata", None) + metadata = cls.__fsa__._make_metadata(bind_key) + + if metadata is not parent_metadata: + cls.metadata = metadata + + super().__init__(name, bases, d, **kwargs) + + +class BindMixin: + """DeclarativeBase mixin to set a model's ``metadata`` based on ``__bind_key__``. + + If no ``__bind_key__`` is specified, the model will use the default metadata + provided by ``DeclarativeBase`` or ``DeclarativeBaseNoMeta``. + If the model doesn't set ``metadata`` or ``__table__`` directly + and does set ``__bind_key__``, the model will use the metadata + for the specified bind key. + If the ``metadata`` is the same as the parent model, it will not be set + directly on the child model. + + .. versionchanged:: 3.1.0 + """ + + __fsa__: SQLAlchemy + metadata: sa.MetaData + + @classmethod + def __init_subclass__(cls: t.Type[BindMixin], **kwargs: t.Dict[str, t.Any]) -> None: + if not ("metadata" in cls.__dict__ or "__table__" in cls.__dict__) and hasattr( + cls, "__bind_key__" + ): + bind_key = getattr(cls, "__bind_key__", None) + parent_metadata = getattr(cls, "metadata", None) + metadata = cls.__fsa__._make_metadata(bind_key) + + if metadata is not parent_metadata: + cls.metadata = metadata + + super().__init_subclass__(**kwargs) + + +class NameMetaMixin(type): + """Metaclass mixin that sets a model's ``__tablename__`` by converting the + ``CamelCase`` class name to ``snake_case``. 
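The CamelCase-to-snake_case naming just described, sketched with hypothetical models (reusing a `db` object as in the earlier sketches):

```python
class UserAccount(db.Model):   # no explicit __tablename__
    id = db.Column(db.Integer, primary_key=True)

class APIKey(db.Model):
    id = db.Column(db.Integer, primary_key=True)

# Generated by NameMetaMixin / NameMixin via camel_to_snake_case:
assert UserAccount.__tablename__ == "user_account"
assert APIKey.__tablename__ == "api_key"
```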
A name is set for non-abstract models + that do not otherwise define ``__tablename__``. If a model does not define a primary + key, it will not generate a name or ``__table__``, for single-table inheritance. + """ + + metadata: sa.MetaData + __tablename__: str + __table__: sa.Table + + def __init__( + cls, name: str, bases: tuple[type, ...], d: dict[str, t.Any], **kwargs: t.Any + ) -> None: + if should_set_tablename(cls): + cls.__tablename__ = camel_to_snake_case(cls.__name__) + + super().__init__(name, bases, d, **kwargs) + + # __table_cls__ has run. If no table was created, use the parent table. + if ( + "__tablename__" not in cls.__dict__ + and "__table__" in cls.__dict__ + and cls.__dict__["__table__"] is None + ): + del cls.__table__ + + def __table_cls__(cls, *args: t.Any, **kwargs: t.Any) -> sa.Table | None: + """This is called by SQLAlchemy during mapper setup. It determines the final + table object that the model will use. + + If no primary key is found, that indicates single-table inheritance, so no table + will be created and ``__tablename__`` will be unset. + """ + schema = kwargs.get("schema") + + if schema is None: + key = args[0] + else: + key = f"{schema}.{args[0]}" + + # Check if a table with this name already exists. Allows reflected tables to be + # applied to models by name. + if key in cls.metadata.tables: + return sa.Table(*args, **kwargs) + + # If a primary key is found, create a table for joined-table inheritance. + for arg in args: + if (isinstance(arg, sa.Column) and arg.primary_key) or isinstance( + arg, sa.PrimaryKeyConstraint + ): + return sa.Table(*args, **kwargs) + + # If no base classes define a table, return one that's missing a primary key + # so SQLAlchemy shows the correct error. + for base in cls.__mro__[1:-1]: + if "__table__" in base.__dict__: + break + else: + return sa.Table(*args, **kwargs) + + # Single-table inheritance, use the parent table name. __init__ will unset + # __table__ based on this. + if "__tablename__" in cls.__dict__: + del cls.__tablename__ + + return None + + +class NameMixin: + """DeclarativeBase mixin that sets a model's ``__tablename__`` by converting the + ``CamelCase`` class name to ``snake_case``. A name is set for non-abstract models + that do not otherwise define ``__tablename__``. If a model does not define a primary + key, it will not generate a name or ``__table__``, for single-table inheritance. + + .. versionchanged:: 3.1.0 + """ + + metadata: sa.MetaData + __tablename__: str + __table__: sa.Table + + @classmethod + def __init_subclass__(cls: t.Type[NameMixin], **kwargs: t.Dict[str, t.Any]) -> None: + if should_set_tablename(cls): + cls.__tablename__ = camel_to_snake_case(cls.__name__) + + super().__init_subclass__(**kwargs) + + # __table_cls__ has run. If no table was created, use the parent table. + if ( + "__tablename__" not in cls.__dict__ + and "__table__" in cls.__dict__ + and cls.__dict__["__table__"] is None + ): + del cls.__table__ + + @classmethod + def __table_cls__(cls, *args: t.Any, **kwargs: t.Any) -> sa.Table | None: + """This is called by SQLAlchemy during mapper setup. It determines the final + table object that the model will use. + + If no primary key is found, that indicates single-table inheritance, so no table + will be created and ``__tablename__`` will be unset. + """ + schema = kwargs.get("schema") + + if schema is None: + key = args[0] + else: + key = f"{schema}.{args[0]}" + + # Check if a table with this name already exists. Allows reflected tables to be + # applied to models by name. 
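+        # ("key" here matches how MetaData.tables is keyed, i.e. "name" or
+        # "schema.name", so an already-reflected table is reused by the
+        # lookup below rather than being redefined.)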
+        if key in cls.metadata.tables:
+            return sa.Table(*args, **kwargs)
+
+        # If a primary key is found, create a table for joined-table inheritance.
+        for arg in args:
+            if (isinstance(arg, sa.Column) and arg.primary_key) or isinstance(
+                arg, sa.PrimaryKeyConstraint
+            ):
+                return sa.Table(*args, **kwargs)
+
+        # If no base classes define a table, return one that's missing a primary key
+        # so SQLAlchemy shows the correct error.
+        for base in cls.__mro__[1:-1]:
+            if "__table__" in base.__dict__:
+                break
+        else:
+            return sa.Table(*args, **kwargs)
+
+        # Single-table inheritance, use the parent table name. __init__ will unset
+        # __table__ based on this.
+        if "__tablename__" in cls.__dict__:
+            del cls.__tablename__
+
+        return None
+
+
+def should_set_tablename(cls: type) -> bool:
+    """Determine whether ``__tablename__`` should be generated for a model.
+
+    - If no class in the MRO sets a name, one should be generated.
+    - If a declared attr is found, it should be used instead.
+    - If a name is found, it should be used if the class is a mixin, otherwise one
+      should be generated.
+    - Abstract models should not have one generated.
+
+    Later, ``__table_cls__`` will determine if the model looks like single or
+    joined-table inheritance. If no primary key is found, the name will be unset.
+    """
+    if (
+        cls.__dict__.get("__abstract__", False)
+        or (
+            not issubclass(cls, (sa_orm.DeclarativeBase, sa_orm.DeclarativeBaseNoMeta))
+            and not any(isinstance(b, sa_orm.DeclarativeMeta) for b in cls.__mro__[1:])
+        )
+        or any(
+            (b is sa_orm.DeclarativeBase or b is sa_orm.DeclarativeBaseNoMeta)
+            for b in cls.__bases__
+        )
+    ):
+        return False
+
+    for base in cls.__mro__:
+        if "__tablename__" not in base.__dict__:
+            continue
+
+        if isinstance(base.__dict__["__tablename__"], sa_orm.declared_attr):
+            return False
+
+        return not (
+            base is cls
+            or base.__dict__.get("__abstract__", False)
+            or not (
+                # SQLAlchemy 1.x
+                isinstance(base, sa_orm.DeclarativeMeta)
+                # 2.x: DeclarativeBase uses this as metaclass
+                or isinstance(base, sa_orm.decl_api.DeclarativeAttributeIntercept)
+                # 2.x: DeclarativeBaseNoMeta doesn't use a metaclass
+                or issubclass(base, sa_orm.DeclarativeBaseNoMeta)
+            )
+        )
+
+    return True
+
+
+def camel_to_snake_case(name: str) -> str:
+    """Convert a ``CamelCase`` name to ``snake_case``."""
+    name = re.sub(r"((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))", r"_\1", name)
+    return name.lower().lstrip("_")
+
+
+class DefaultMeta(BindMetaMixin, NameMetaMixin, sa_orm.DeclarativeMeta):
+    """SQLAlchemy declarative metaclass that provides ``__bind_key__`` and
+    ``__tablename__`` support.
+    """
+
+
+class DefaultMetaNoName(BindMetaMixin, sa_orm.DeclarativeMeta):
+    """SQLAlchemy declarative metaclass that provides ``__bind_key__`` support
+    without generating ``__tablename__``.
+    """
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/pagination.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/pagination.py
new file mode 100644
index 000000000..3d49d6e04
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/pagination.py
@@ -0,0 +1,364 @@
+from __future__ import annotations
+
+import typing as t
+from math import ceil
+
+import sqlalchemy as sa
+import sqlalchemy.orm as sa_orm
+from flask import abort
+from flask import request
+
+
+class Pagination:
+    """Apply an offset and limit to the query based on the current page and number of
+    items per page.
+
+    Don't create pagination objects manually.
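For orientation, a consumption sketch for the resulting object (hypothetical `app`/`User` as in the earlier sketches):

```python
with app.test_request_context("/users?page=1&per_page=10"):
    db.create_all()
    pg = db.paginate(db.select(User).order_by(User.id))
    print(pg.page, pg.per_page)    # 1 10, read from the query string
    names = [u.name for u in pg]   # iterating yields pg.items
    edges = list(pg.iter_pages())  # page numbers, with None for gaps
```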
They are created by + :meth:`.SQLAlchemy.paginate` and :meth:`.Query.paginate`. + + This is a base class, a subclass must implement :meth:`_query_items` and + :meth:`_query_count`. Those methods will use arguments passed as ``kwargs`` to + perform the queries. + + :param page: The current page, used to calculate the offset. Defaults to the + ``page`` query arg during a request, or 1 otherwise. + :param per_page: The maximum number of items on a page, used to calculate the + offset and limit. Defaults to the ``per_page`` query arg during a request, + or 20 otherwise. + :param max_per_page: The maximum allowed value for ``per_page``, to limit a + user-provided value. Use ``None`` for no limit. Defaults to 100. + :param error_out: Abort with a ``404 Not Found`` error if no items are returned + and ``page`` is not 1, or if ``page`` or ``per_page`` is less than 1, or if + either are not ints. + :param count: Calculate the total number of values by issuing an extra count + query. For very complex queries this may be inaccurate or slow, so it can be + disabled and set manually if necessary. + :param kwargs: Information about the query to paginate. Different subclasses will + require different arguments. + + .. versionchanged:: 3.0 + Iterating over a pagination object iterates over its items. + + .. versionchanged:: 3.0 + Creating instances manually is not a public API. + """ + + def __init__( + self, + page: int | None = None, + per_page: int | None = None, + max_per_page: int | None = 100, + error_out: bool = True, + count: bool = True, + **kwargs: t.Any, + ) -> None: + self._query_args = kwargs + page, per_page = self._prepare_page_args( + page=page, + per_page=per_page, + max_per_page=max_per_page, + error_out=error_out, + ) + + self.page: int = page + """The current page.""" + + self.per_page: int = per_page + """The maximum number of items on a page.""" + + self.max_per_page: int | None = max_per_page + """The maximum allowed value for ``per_page``.""" + + items = self._query_items() + + if not items and page != 1 and error_out: + abort(404) + + self.items: list[t.Any] = items + """The items on the current page. Iterating over the pagination object is + equivalent to iterating over the items. + """ + + if count: + total = self._query_count() + else: + total = None + + self.total: int | None = total + """The total number of items across all pages.""" + + @staticmethod + def _prepare_page_args( + *, + page: int | None = None, + per_page: int | None = None, + max_per_page: int | None = None, + error_out: bool = True, + ) -> tuple[int, int]: + if request: + if page is None: + try: + page = int(request.args.get("page", 1)) + except (TypeError, ValueError): + if error_out: + abort(404) + + page = 1 + + if per_page is None: + try: + per_page = int(request.args.get("per_page", 20)) + except (TypeError, ValueError): + if error_out: + abort(404) + + per_page = 20 + else: + if page is None: + page = 1 + + if per_page is None: + per_page = 20 + + if max_per_page is not None: + per_page = min(per_page, max_per_page) + + if page < 1: + if error_out: + abort(404) + else: + page = 1 + + if per_page < 1: + if error_out: + abort(404) + else: + per_page = 20 + + return page, per_page + + @property + def _query_offset(self) -> int: + """The index of the first item to query, passed to ``offset()``. + + :meta private: + + .. versionadded:: 3.0 + """ + return (self.page - 1) * self.per_page + + def _query_items(self) -> list[t.Any]: + """Execute the query to get the items on the current page. 
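A quick illustration of the `per_page` clamping performed above (same hypothetical `app`/`User`):

```python
with app.test_request_context("/items?page=1&per_page=9999"):
    db.create_all()
    pg = db.paginate(db.select(User), max_per_page=100)
    assert pg.per_page == 100  # the user-supplied 9999 is clamped
```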
+ + Uses init arguments stored in :attr:`_query_args`. + + :meta private: + + .. versionadded:: 3.0 + """ + raise NotImplementedError + + def _query_count(self) -> int: + """Execute the query to get the total number of items. + + Uses init arguments stored in :attr:`_query_args`. + + :meta private: + + .. versionadded:: 3.0 + """ + raise NotImplementedError + + @property + def first(self) -> int: + """The number of the first item on the page, starting from 1, or 0 if there are + no items. + + .. versionadded:: 3.0 + """ + if len(self.items) == 0: + return 0 + + return (self.page - 1) * self.per_page + 1 + + @property + def last(self) -> int: + """The number of the last item on the page, starting from 1, inclusive, or 0 if + there are no items. + + .. versionadded:: 3.0 + """ + first = self.first + return max(first, first + len(self.items) - 1) + + @property + def pages(self) -> int: + """The total number of pages.""" + if self.total == 0 or self.total is None: + return 0 + + return ceil(self.total / self.per_page) + + @property + def has_prev(self) -> bool: + """``True`` if this is not the first page.""" + return self.page > 1 + + @property + def prev_num(self) -> int | None: + """The previous page number, or ``None`` if this is the first page.""" + if not self.has_prev: + return None + + return self.page - 1 + + def prev(self, *, error_out: bool = False) -> Pagination: + """Query the :class:`Pagination` object for the previous page. + + :param error_out: Abort with a ``404 Not Found`` error if no items are returned + and ``page`` is not 1, or if ``page`` or ``per_page`` is less than 1, or if + either are not ints. + """ + p = type(self)( + page=self.page - 1, + per_page=self.per_page, + error_out=error_out, + count=False, + **self._query_args, + ) + p.total = self.total + return p + + @property + def has_next(self) -> bool: + """``True`` if this is not the last page.""" + return self.page < self.pages + + @property + def next_num(self) -> int | None: + """The next page number, or ``None`` if this is the last page.""" + if not self.has_next: + return None + + return self.page + 1 + + def next(self, *, error_out: bool = False) -> Pagination: + """Query the :class:`Pagination` object for the next page. + + :param error_out: Abort with a ``404 Not Found`` error if no items are returned + and ``page`` is not 1, or if ``page`` or ``per_page`` is less than 1, or if + either are not ints. + """ + p = type(self)( + page=self.page + 1, + per_page=self.per_page, + max_per_page=self.max_per_page, + error_out=error_out, + count=False, + **self._query_args, + ) + p.total = self.total + return p + + def iter_pages( + self, + *, + left_edge: int = 2, + left_current: int = 2, + right_current: int = 4, + right_edge: int = 2, + ) -> t.Iterator[int | None]: + """Yield page numbers for a pagination widget. Skipped pages between the edges + and middle are represented by a ``None``. + + For example, if there are 20 pages and the current page is 7, the following + values are yielded. + + .. code-block:: python + + 1, 2, None, 5, 6, 7, 8, 9, 10, 11, None, 19, 20 + + :param left_edge: How many pages to show from the first page. + :param left_current: How many pages to show left of the current page. + :param right_current: How many pages to show right of the current page. + :param right_edge: How many pages to show from the last page. + + .. versionchanged:: 3.0 + Improved efficiency of calculating what to yield. + + .. versionchanged:: 3.0 + ``right_current`` boundary is inclusive. + + .. 
versionchanged:: 3.0 + All parameters are keyword-only. + """ + pages_end = self.pages + 1 + + if pages_end == 1: + return + + left_end = min(1 + left_edge, pages_end) + yield from range(1, left_end) + + if left_end == pages_end: + return + + mid_start = max(left_end, self.page - left_current) + mid_end = min(self.page + right_current + 1, pages_end) + + if mid_start - left_end > 0: + yield None + + yield from range(mid_start, mid_end) + + if mid_end == pages_end: + return + + right_start = max(mid_end, pages_end - right_edge) + + if right_start - mid_end > 0: + yield None + + yield from range(right_start, pages_end) + + def __iter__(self) -> t.Iterator[t.Any]: + yield from self.items + + +class SelectPagination(Pagination): + """Returned by :meth:`.SQLAlchemy.paginate`. Takes ``select`` and ``session`` + arguments in addition to the :class:`Pagination` arguments. + + .. versionadded:: 3.0 + """ + + def _query_items(self) -> list[t.Any]: + select = self._query_args["select"] + select = select.limit(self.per_page).offset(self._query_offset) + session = self._query_args["session"] + return list(session.execute(select).unique().scalars()) + + def _query_count(self) -> int: + select = self._query_args["select"] + sub = select.options(sa_orm.lazyload("*")).order_by(None).subquery() + session = self._query_args["session"] + out = session.execute(sa.select(sa.func.count()).select_from(sub)).scalar() + return out # type: ignore[no-any-return] + + +class QueryPagination(Pagination): + """Returned by :meth:`.Query.paginate`. Takes a ``query`` argument in addition to + the :class:`Pagination` arguments. + + .. versionadded:: 3.0 + """ + + def _query_items(self) -> list[t.Any]: + query = self._query_args["query"] + out = query.limit(self.per_page).offset(self._query_offset).all() + return out # type: ignore[no-any-return] + + def _query_count(self) -> int: + # Query.count automatically disables eager loads + out = self._query_args["query"].order_by(None).count() + return out # type: ignore[no-any-return] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/py.typed b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/py.typed new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/query.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/query.py new file mode 100644 index 000000000..35f927d28 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/query.py @@ -0,0 +1,105 @@ +from __future__ import annotations + +import typing as t + +import sqlalchemy.exc as sa_exc +import sqlalchemy.orm as sa_orm +from flask import abort + +from .pagination import Pagination +from .pagination import QueryPagination + + +class Query(sa_orm.Query): # type: ignore[type-arg] + """SQLAlchemy :class:`~sqlalchemy.orm.query.Query` subclass with some extra methods + useful for querying in a web application. + + This is the default query class for :attr:`.Model.query`. + + .. versionchanged:: 3.0 + Renamed to ``Query`` from ``BaseQuery``. + """ + + def get_or_404(self, ident: t.Any, description: str | None = None) -> t.Any: + """Like :meth:`~sqlalchemy.orm.Query.get` but aborts with a ``404 Not Found`` + error instead of returning ``None``. + + :param ident: The primary key to query. + :param description: A custom message to show on the error page. 
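A sketch of these legacy helpers via `Model.query` (hypothetical `User` model as before):

```python
with app.app_context():
    db.create_all()
    db.session.add(User(name="ada"))
    db.session.commit()

    user = User.query.get_or_404(1)  # legacy Query helper
    page = User.query.order_by(User.id).paginate(page=1, per_page=10)
    assert page.total == 1 and page.items[0] is user
```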
+ """ + rv = self.get(ident) + + if rv is None: + abort(404, description=description) + + return rv + + def first_or_404(self, description: str | None = None) -> t.Any: + """Like :meth:`~sqlalchemy.orm.Query.first` but aborts with a ``404 Not Found`` + error instead of returning ``None``. + + :param description: A custom message to show on the error page. + """ + rv = self.first() + + if rv is None: + abort(404, description=description) + + return rv + + def one_or_404(self, description: str | None = None) -> t.Any: + """Like :meth:`~sqlalchemy.orm.Query.one` but aborts with a ``404 Not Found`` + error instead of raising ``NoResultFound`` or ``MultipleResultsFound``. + + :param description: A custom message to show on the error page. + + .. versionadded:: 3.0 + """ + try: + return self.one() + except (sa_exc.NoResultFound, sa_exc.MultipleResultsFound): + abort(404, description=description) + + def paginate( + self, + *, + page: int | None = None, + per_page: int | None = None, + max_per_page: int | None = None, + error_out: bool = True, + count: bool = True, + ) -> Pagination: + """Apply an offset and limit to the query based on the current page and number + of items per page, returning a :class:`.Pagination` object. + + :param page: The current page, used to calculate the offset. Defaults to the + ``page`` query arg during a request, or 1 otherwise. + :param per_page: The maximum number of items on a page, used to calculate the + offset and limit. Defaults to the ``per_page`` query arg during a request, + or 20 otherwise. + :param max_per_page: The maximum allowed value for ``per_page``, to limit a + user-provided value. Use ``None`` for no limit. Defaults to 100. + :param error_out: Abort with a ``404 Not Found`` error if no items are returned + and ``page`` is not 1, or if ``page`` or ``per_page`` is less than 1, or if + either are not ints. + :param count: Calculate the total number of values by issuing an extra count + query. For very complex queries this may be inaccurate or slow, so it can be + disabled and set manually if necessary. + + .. versionchanged:: 3.0 + All parameters are keyword-only. + + .. versionchanged:: 3.0 + The ``count`` query is more efficient. + + .. versionchanged:: 3.0 + ``max_per_page`` defaults to 100. + """ + return QueryPagination( + query=self, + page=page, + per_page=per_page, + max_per_page=max_per_page, + error_out=error_out, + count=count, + ) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/record_queries.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/record_queries.py new file mode 100644 index 000000000..e8273be95 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/record_queries.py @@ -0,0 +1,117 @@ +from __future__ import annotations + +import dataclasses +import inspect +import typing as t +from time import perf_counter + +import sqlalchemy as sa +import sqlalchemy.event as sa_event +from flask import current_app +from flask import g +from flask import has_app_context + + +def get_recorded_queries() -> list[_QueryInfo]: + """Get the list of recorded query information for the current session. Queries are + recorded if the config :data:`.SQLALCHEMY_RECORD_QUERIES` is enabled. + + Each query info object has the following attributes: + + ``statement`` + The string of SQL generated by SQLAlchemy with parameter placeholders. + ``parameters`` + The parameters sent with the SQL statement. 
+    ``start_time`` / ``end_time``
+        Timing info about when the query started execution and when the results were
+        returned. Accuracy and value depend on the operating system.
+    ``duration``
+        The time the query took in seconds.
+    ``location``
+        A string description of where in your application code the query was executed.
+        This may not be possible to calculate, and the format is not stable.
+
+    .. versionchanged:: 3.0
+        Renamed from ``get_debug_queries``.
+
+    .. versionchanged:: 3.0
+        The info object is a dataclass instead of a tuple.
+
+    .. versionchanged:: 3.0
+        The info object attribute ``context`` is renamed to ``location``.
+
+    .. versionchanged:: 3.0
+        Not enabled automatically in debug or testing mode.
+    """
+    return g.get("_sqlalchemy_queries", [])  # type: ignore[no-any-return]
+
+
+@dataclasses.dataclass
+class _QueryInfo:
+    """Information about an executed query. Returned by :func:`get_recorded_queries`.
+
+    .. versionchanged:: 3.0
+        Renamed from ``_DebugQueryTuple``.
+
+    .. versionchanged:: 3.0
+        Changed to a dataclass instead of a tuple.
+
+    .. versionchanged:: 3.0
+        ``context`` is renamed to ``location``.
+    """
+
+    statement: str | None
+    parameters: t.Any
+    start_time: float
+    end_time: float
+    location: str
+
+    @property
+    def duration(self) -> float:
+        return self.end_time - self.start_time
+
+
+def _listen(engine: sa.engine.Engine) -> None:
+    sa_event.listen(engine, "before_cursor_execute", _record_start, named=True)
+    sa_event.listen(engine, "after_cursor_execute", _record_end, named=True)
+
+
+def _record_start(context: sa.engine.ExecutionContext, **kwargs: t.Any) -> None:
+    if not has_app_context():
+        return
+
+    context._fsa_start_time = perf_counter()  # type: ignore[attr-defined]
+
+
+def _record_end(context: sa.engine.ExecutionContext, **kwargs: t.Any) -> None:
+    if not has_app_context():
+        return
+
+    if "_sqlalchemy_queries" not in g:
+        g._sqlalchemy_queries = []
+
+    import_top = current_app.import_name.partition(".")[0]
+    import_dot = f"{import_top}."
+    frame = inspect.currentframe()
+
+    while frame:
+        name = frame.f_globals.get("__name__")
+
+        if name and (name == import_top or name.startswith(import_dot)):
+            code = frame.f_code
+            location = f"{code.co_filename}:{frame.f_lineno} ({code.co_name})"
+            break
+
+        frame = frame.f_back
+    else:
+        location = "<unknown>"
+
+    g._sqlalchemy_queries.append(
+        _QueryInfo(
+            statement=context.statement,
+            parameters=context.parameters,
+            start_time=context._fsa_start_time,  # type: ignore[attr-defined]
+            end_time=perf_counter(),
+            location=location,
+        )
+    )
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/session.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/session.py
new file mode 100644
index 000000000..631fffa82
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/session.py
@@ -0,0 +1,111 @@
+from __future__ import annotations
+
+import typing as t
+
+import sqlalchemy as sa
+import sqlalchemy.exc as sa_exc
+import sqlalchemy.orm as sa_orm
+from flask.globals import app_ctx
+
+if t.TYPE_CHECKING:
+    from .extension import SQLAlchemy
+
+
+class Session(sa_orm.Session):
+    """A SQLAlchemy :class:`~sqlalchemy.orm.Session` class that chooses which engine
+    to use based on the bind key of the metadata associated with the model or table
+    being queried.
+
+    To customize ``db.session``, subclass this and pass it as the ``class_`` key in the
+    ``session_options`` to :class:`.SQLAlchemy`.
+
+    ..
versionchanged:: 3.0 + Renamed from ``SignallingSession``. + """ + + def __init__(self, db: SQLAlchemy, **kwargs: t.Any) -> None: + super().__init__(**kwargs) + self._db = db + self._model_changes: dict[object, tuple[t.Any, str]] = {} + + def get_bind( + self, + mapper: t.Any | None = None, + clause: t.Any | None = None, + bind: sa.engine.Engine | sa.engine.Connection | None = None, + **kwargs: t.Any, + ) -> sa.engine.Engine | sa.engine.Connection: + """Select an engine based on the ``bind_key`` of the metadata associated with + the model or table being queried. If no bind key is set, uses the default bind. + + .. versionchanged:: 3.0.3 + Fix finding the bind for a joined inheritance model. + + .. versionchanged:: 3.0 + The implementation more closely matches the base SQLAlchemy implementation. + + .. versionchanged:: 2.1 + Support joining an external transaction. + """ + if bind is not None: + return bind + + engines = self._db.engines + + if mapper is not None: + try: + mapper = sa.inspect(mapper) + except sa_exc.NoInspectionAvailable as e: + if isinstance(mapper, type): + raise sa_orm.exc.UnmappedClassError(mapper) from e + + raise + + engine = _clause_to_engine(mapper.local_table, engines) + + if engine is not None: + return engine + + if clause is not None: + engine = _clause_to_engine(clause, engines) + + if engine is not None: + return engine + + if None in engines: + return engines[None] + + return super().get_bind(mapper=mapper, clause=clause, bind=bind, **kwargs) + + +def _clause_to_engine( + clause: sa.ClauseElement | None, + engines: t.Mapping[str | None, sa.engine.Engine], +) -> sa.engine.Engine | None: + """If the clause is a table, return the engine associated with the table's + metadata's bind key. + """ + table = None + + if clause is not None: + if isinstance(clause, sa.Table): + table = clause + elif isinstance(clause, sa.UpdateBase) and isinstance(clause.table, sa.Table): + table = clause.table + + if table is not None and "bind_key" in table.metadata.info: + key = table.metadata.info["bind_key"] + + if key not in engines: + raise sa_exc.UnboundExecutionError( + f"Bind key '{key}' is not in 'SQLALCHEMY_BINDS' config." + ) + + return engines[key] + + return None + + +def _app_ctx_id() -> int: + """Get the id of the current Flask application context for the session scope.""" + return id(app_ctx._get_current_object()) # type: ignore[attr-defined] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/table.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/table.py new file mode 100644 index 000000000..ab08a692a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/table.py @@ -0,0 +1,39 @@ +from __future__ import annotations + +import typing as t + +import sqlalchemy as sa +import sqlalchemy.sql.schema as sa_sql_schema + + +class _Table(sa.Table): + @t.overload + def __init__( + self, + name: str, + *args: sa_sql_schema.SchemaItem, + bind_key: str | None = None, + **kwargs: t.Any, + ) -> None: + ... + + @t.overload + def __init__( + self, + name: str, + metadata: sa.MetaData, + *args: sa_sql_schema.SchemaItem, + **kwargs: t.Any, + ) -> None: + ... + + @t.overload + def __init__( + self, name: str, *args: sa_sql_schema.SchemaItem, **kwargs: t.Any + ) -> None: + ... 
+ + def __init__( + self, name: str, *args: sa_sql_schema.SchemaItem, **kwargs: t.Any + ) -> None: + super().__init__(name, *args, **kwargs) # type: ignore[arg-type] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/track_modifications.py b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/track_modifications.py new file mode 100644 index 000000000..7028b6538 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/flask_sqlalchemy/track_modifications.py @@ -0,0 +1,88 @@ +from __future__ import annotations + +import typing as t + +import sqlalchemy as sa +import sqlalchemy.event as sa_event +import sqlalchemy.orm as sa_orm +from flask import current_app +from flask import has_app_context +from flask.signals import Namespace # type: ignore[attr-defined] + +if t.TYPE_CHECKING: + from .session import Session + +_signals = Namespace() + +models_committed = _signals.signal("models-committed") +"""This Blinker signal is sent after the session is committed if there were changed +models in the session. + +The sender is the application that emitted the changes. The receiver is passed the +``changes`` argument with a list of tuples in the form ``(instance, operation)``. +The operations are ``"insert"``, ``"update"``, and ``"delete"``. +""" + +before_models_committed = _signals.signal("before-models-committed") +"""This signal works exactly like :data:`models_committed` but is emitted before the +commit takes place. +""" + + +def _listen(session: sa_orm.scoped_session[Session]) -> None: + sa_event.listen(session, "before_flush", _record_ops, named=True) + sa_event.listen(session, "before_commit", _record_ops, named=True) + sa_event.listen(session, "before_commit", _before_commit) + sa_event.listen(session, "after_commit", _after_commit) + sa_event.listen(session, "after_rollback", _after_rollback) + + +def _record_ops(session: Session, **kwargs: t.Any) -> None: + if not has_app_context(): + return + + if not current_app.config["SQLALCHEMY_TRACK_MODIFICATIONS"]: + return + + for targets, operation in ( + (session.new, "insert"), + (session.dirty, "update"), + (session.deleted, "delete"), + ): + for target in targets: + state = sa.inspect(target) + key = state.identity_key if state.has_identity else id(target) + session._model_changes[key] = (target, operation) + + +def _before_commit(session: Session) -> None: + if not has_app_context(): + return + + app = current_app._get_current_object() # type: ignore[attr-defined] + + if not app.config["SQLALCHEMY_TRACK_MODIFICATIONS"]: + return + + if session._model_changes: + changes = list(session._model_changes.values()) + before_models_committed.send(app, changes=changes) + + +def _after_commit(session: Session) -> None: + if not has_app_context(): + return + + app = current_app._get_current_object() # type: ignore[attr-defined] + + if not app.config["SQLALCHEMY_TRACK_MODIFICATIONS"]: + return + + if session._model_changes: + changes = list(session._model_changes.values()) + models_committed.send(app, changes=changes) + session._model_changes.clear() + + +def _after_rollback(session: Session) -> None: + session._model_changes.clear() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git 
a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/METADATA new file mode 100644 index 000000000..5818532f3 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/METADATA @@ -0,0 +1,116 @@ +Metadata-Version: 2.4 +Name: greenlet +Version: 3.2.3 +Summary: Lightweight in-process concurrent programming +Home-page: https://greenlet.readthedocs.io/ +Author: Alexey Borzenkov +Author-email: snaury@gmail.com +Maintainer: Jason Madden +Maintainer-email: jason@seecoresoftware.com +License: MIT AND Python-2.0 +Project-URL: Bug Tracker, https://github.com/python-greenlet/greenlet/issues +Project-URL: Source Code, https://github.com/python-greenlet/greenlet/ +Project-URL: Documentation, https://greenlet.readthedocs.io/ +Project-URL: Changes, https://greenlet.readthedocs.io/en/latest/changes.html +Keywords: greenlet coroutine concurrency threads cooperative +Platform: any +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: Natural Language :: English +Classifier: Programming Language :: C +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Classifier: Programming Language :: Python :: 3.13 +Classifier: Operating System :: OS Independent +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Requires-Python: >=3.9 +Description-Content-Type: text/x-rst +License-File: LICENSE +License-File: LICENSE.PSF +Provides-Extra: docs +Requires-Dist: Sphinx; extra == "docs" +Requires-Dist: furo; extra == "docs" +Provides-Extra: test +Requires-Dist: objgraph; extra == "test" +Requires-Dist: psutil; extra == "test" +Dynamic: author +Dynamic: author-email +Dynamic: classifier +Dynamic: description +Dynamic: description-content-type +Dynamic: home-page +Dynamic: keywords +Dynamic: license +Dynamic: license-file +Dynamic: maintainer +Dynamic: maintainer-email +Dynamic: platform +Dynamic: project-url +Dynamic: provides-extra +Dynamic: requires-python +Dynamic: summary + +.. This file is included into docs/history.rst + + +Greenlets are lightweight coroutines for in-process concurrent +programming. + +The "greenlet" package is a spin-off of `Stackless`_, a version of +CPython that supports micro-threads called "tasklets". Tasklets run +pseudo-concurrently (typically in a single or a few OS-level threads) +and are synchronized with data exchanges on "channels". + +A "greenlet", on the other hand, is a still more primitive notion of +micro-thread with no implicit scheduling; coroutines, in other words. +This is useful when you want to control exactly when your code runs. +You can build custom scheduled micro-threads on top of greenlet; +however, it seems that greenlets are useful on their own as a way to +make advanced control flow structures. For example, we can recreate +generators; the difference with Python's own generators is that our +generators can call nested functions and the nested functions can +yield values too. (Additionally, you don't need a "yield" keyword. See +the example in `test_generator.py +`_). 
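The switching behaviour described above is easiest to see in a tiny example (freely adapted; assumes only the `greenlet` package):

```python
from greenlet import greenlet

def test1():
    print(12)
    gr2.switch()   # jump to test2, suspending test1 here
    print(34)

def test2():
    print(56)
    gr1.switch()   # jump back into test1 where it left off
    print(78)      # never runs: test1 finishing returns to the main greenlet

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()       # prints 12, 56, 34
```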
+ +Greenlets are provided as a C extension module for the regular unmodified +interpreter. + +.. _`Stackless`: http://www.stackless.com + + +Who is using Greenlet? +====================== + +There are several libraries that use Greenlet as a more flexible +alternative to Python's built in coroutine support: + + - `Concurrence`_ + - `Eventlet`_ + - `Gevent`_ + +.. _Concurrence: http://opensource.hyves.org/concurrence/ +.. _Eventlet: http://eventlet.net/ +.. _Gevent: http://www.gevent.org/ + +Getting Greenlet +================ + +The easiest way to get Greenlet is to install it with pip:: + + pip install greenlet + + +Source code archives and binary distributions are available on the +python package index at https://pypi.org/project/greenlet + +The source code repository is hosted on github: +https://github.com/python-greenlet/greenlet + +Documentation is available on readthedocs.org: +https://greenlet.readthedocs.io diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/RECORD new file mode 100644 index 000000000..d21a3c56c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/RECORD @@ -0,0 +1,121 @@ +../../../include/site/python3.10/greenlet/greenlet.h,sha256=sz5pYRSQqedgOt2AMgxLZdTjO-qcr_JMvgiEJR9IAJ8,4755 +greenlet-3.2.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +greenlet-3.2.3.dist-info/METADATA,sha256=4LPtpV-ZBD7xpjTg5BKO5apQWLITbBym_tFpzsdSRak,4077 +greenlet-3.2.3.dist-info/RECORD,, +greenlet-3.2.3.dist-info/WHEEL,sha256=HeTPKmtWRHAcGMCPCdnU4FtSZuvZBNNr3hB4MNGY2JY,152 +greenlet-3.2.3.dist-info/licenses/LICENSE,sha256=dpgx1uXfrywggC-sz_H6-0wgJd2PYlPfpH_K1Z1NCXk,1434 +greenlet-3.2.3.dist-info/licenses/LICENSE.PSF,sha256=5f88I8EQ5JTNfXNsEP2W1GJFe6_soxCEDbZScpjH1Gs,2424 +greenlet-3.2.3.dist-info/top_level.txt,sha256=YSnRsCRoO61JGlP57o8iKL6rdLWDWuiyKD8ekpWUsDc,9 +greenlet/CObjects.cpp,sha256=OPej1bWBgc4sRrTRQ2aFFML9pzDYKlKhlJSjsI0X_eU,3508 +greenlet/PyGreenlet.cpp,sha256=ogWsQ5VhSdItWRLLpWOgSuqYuM3QwQ4cVCxOQIgHx6E,23441 +greenlet/PyGreenlet.hpp,sha256=2ZQlOxYNoy7QwD7mppFoOXe_At56NIsJ0eNsE_hoSsw,1463 +greenlet/PyGreenletUnswitchable.cpp,sha256=PQE0fSZa_IOyUM44IESHkJoD2KtGW3dkhkmZSYY3WHs,4375 +greenlet/PyModule.cpp,sha256=J2TH06dGcNEarioS6NbWXkdME8hJY05XVbdqLrfO5w4,8587 +greenlet/TBrokenGreenlet.cpp,sha256=smN26uC7ahAbNYiS10rtWPjCeTG4jevM8siA2sjJiXg,1021 +greenlet/TExceptionState.cpp,sha256=U7Ctw9fBdNraS0d174MoQW7bN-ae209Ta0JuiKpcpVI,1359 +greenlet/TGreenlet.cpp,sha256=HGYGKpmKYqQ842tASW-QaaV8wua4a5XV_quYKPDsV_Y,25731 +greenlet/TGreenlet.hpp,sha256=zMakwTJ30onkLiAq5h1WR7ewEkiJXKM0N9AR6BnXQMI,28141 +greenlet/TGreenletGlobals.cpp,sha256=YyEmDjKf1g32bsL-unIUScFLnnA1fzLWf2gOMd-D0Zw,3264 +greenlet/TMainGreenlet.cpp,sha256=fvgb8HHB-FVTPEKjR1s_ifCZSpp5D5YQByik0CnIABg,3276 +greenlet/TPythonState.cpp,sha256=vBMJT9qScTSIqhnOTVJqsGug3WbKv9dDt0cOqyhUk8w,15779 +greenlet/TStackState.cpp,sha256=V444I8Jj9DhQz-9leVW_9dtiSRjaE1NMlgDG02Xxq-Y,7381 +greenlet/TThreadState.hpp,sha256=2Jgg7DtGggMYR_x3CLAvAFf1mIdIDtQvSSItcdmX4ZQ,19131 +greenlet/TThreadStateCreator.hpp,sha256=uYTexDWooXSSgUc5uh-Mhm5BQi3-kR6CqpizvNynBFQ,2610 +greenlet/TThreadStateDestroy.cpp,sha256=36yBCAMq3beXTZd-XnFA7DwaHVSOx2vc28-nf0spysU,8169 +greenlet/TUserGreenlet.cpp,sha256=uemg0lwKXtYB0yzmvyYdIIAsKnNkifXM1OJ2OlrFP1A,23553 +greenlet/__init__.py,sha256=1zAfqzVVnBnQhxOL6UZ0t9jrNJUNJRgPEFvzW1qss9k,1723 +greenlet/__pycache__/__init__.cpython-310.pyc,, 
+greenlet/_greenlet.cpython-310-x86_64-linux-gnu.so,sha256=JcsWCGVCb-4g62P9Ae61ccRiENojILPQxT7Y96h01v0,1360968 +greenlet/greenlet.cpp,sha256=WdItb1yWL9WNsTqJNf0Iw8ZwDHD49pkDP0rIRGBg2pw,10996 +greenlet/greenlet.h,sha256=sz5pYRSQqedgOt2AMgxLZdTjO-qcr_JMvgiEJR9IAJ8,4755 +greenlet/greenlet_allocator.hpp,sha256=kxyWW4Qdwlrc7ufgdb5vd6Y7jhauQ699Kod0mqiO1iM,1582 +greenlet/greenlet_compiler_compat.hpp,sha256=nRxpLN9iNbnLVyFDeVmOwyeeNm6scQrOed1l7JQYMCM,4346 +greenlet/greenlet_cpython_compat.hpp,sha256=XrsoFv8nKavrdxly5_-q9lqGeE8d3wKt1YtldsAHAT8,4068 +greenlet/greenlet_exceptions.hpp,sha256=06Bx81DtVaJTa6RtiMcV141b-XHv4ppEgVItkblcLWY,4503 +greenlet/greenlet_internal.hpp,sha256=Ajc-_09W4xWzm9XfyXHAeQAFUgKGKsnJwYsTCoNy3ns,2709 +greenlet/greenlet_msvc_compat.hpp,sha256=0MyaiyoCE_A6UROXZlMQRxRS17gfyh0d7NUppU3EVFc,2978 +greenlet/greenlet_refs.hpp,sha256=OnbA91yZf3QHH6-eJccvoNDAaN-pQBMMrclFU1Ot3J4,34436 +greenlet/greenlet_slp_switch.hpp,sha256=kM1QHA2iV-gH4cFyN6lfIagHQxvJZjWOVJdIxRE3TlQ,3198 +greenlet/greenlet_thread_support.hpp,sha256=XUJ6ljWjf9OYyuOILiz8e_yHvT3fbaUiHdhiPNQUV4s,867 +greenlet/platform/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +greenlet/platform/__pycache__/__init__.cpython-310.pyc,, +greenlet/platform/setup_switch_x64_masm.cmd,sha256=ZpClUJeU0ujEPSTWNSepP0W2f9XiYQKA8QKSoVou8EU,143 +greenlet/platform/switch_aarch64_gcc.h,sha256=GKC0yWNXnbK2X--X6aguRCMj2Tg7hDU1Zkl3RljDvC8,4307 +greenlet/platform/switch_alpha_unix.h,sha256=Z-SvF8JQV3oxWT8JRbL9RFu4gRFxPdJ7cviM8YayMmw,671 +greenlet/platform/switch_amd64_unix.h,sha256=EcSFCBlodEBhqhKjcJqY_5Dn_jn7pKpkJlOvp7gFXLI,2748 +greenlet/platform/switch_arm32_gcc.h,sha256=Z3KkHszdgq6uU4YN3BxvKMG2AdDnovwCCNrqGWZ1Lyo,2479 +greenlet/platform/switch_arm32_ios.h,sha256=mm5_R9aXB92hyxzFRwB71M60H6AlvHjrpTrc72Pz3l8,1892 +greenlet/platform/switch_arm64_masm.asm,sha256=4kpTtfy7rfcr8j1CpJLAK21EtZpGDAJXWRU68HEy5A8,1245 +greenlet/platform/switch_arm64_masm.obj,sha256=DmLnIB_icoEHAz1naue_pJPTZgR9ElM7-Nmztr-o9_U,746 +greenlet/platform/switch_arm64_msvc.h,sha256=RqK5MHLmXI3Q-FQ7tm32KWnbDNZKnkJdq8CR89cz640,398 +greenlet/platform/switch_csky_gcc.h,sha256=kDikyiPpewP71KoBZQO_MukDTXTXBiC7x-hF0_2DL0w,1331 +greenlet/platform/switch_loongarch64_linux.h,sha256=7M-Dhc4Q8tRbJCJhalDLwU6S9Mx8MjmN1RbTDgIvQTM,779 +greenlet/platform/switch_m68k_gcc.h,sha256=VSa6NpZhvyyvF-Q58CTIWSpEDo4FKygOyTz00whctlw,928 +greenlet/platform/switch_mips_unix.h,sha256=E0tYsqc5anDY1BhenU1l8DW-nVHC_BElzLgJw3TGtPk,1426 +greenlet/platform/switch_ppc64_aix.h,sha256=_BL0iyRr3ZA5iPlr3uk9SJ5sNRWGYLrXcZ5z-CE9anE,3860 +greenlet/platform/switch_ppc64_linux.h,sha256=0rriT5XyxPb0GqsSSn_bP9iQsnjsPbBmu0yqo5goSyQ,3815 +greenlet/platform/switch_ppc_aix.h,sha256=pHA4slEjUFP3J3SYm1TAlNPhgb2G_PAtax5cO8BEe1A,2941 +greenlet/platform/switch_ppc_linux.h,sha256=YwrlKUzxlXuiKMQqr6MFAV1bPzWnmvk6X1AqJZEpOWU,2759 +greenlet/platform/switch_ppc_macosx.h,sha256=Z6KN_ud0n6nC3ltJrNz2qtvER6vnRAVRNH9mdIDpMxY,2624 +greenlet/platform/switch_ppc_unix.h,sha256=-ZG7MSSPEA5N4qO9PQChtyEJ-Fm6qInhyZm_ZBHTtMg,2652 +greenlet/platform/switch_riscv_unix.h,sha256=606V6ACDf79Fz_WGItnkgbjIJ0pGg_sHmPyDxQYKK58,949 +greenlet/platform/switch_s390_unix.h,sha256=RRlGu957ybmq95qNNY4Qw1mcaoT3eBnW5KbVwu48KX8,2763 +greenlet/platform/switch_sh_gcc.h,sha256=mcRJBTu-2UBf4kZtX601qofwuDuy-Y-hnxJtrcaB7do,901 +greenlet/platform/switch_sparc_sun_gcc.h,sha256=xZish9GsMHBienUbUMsX1-ZZ-as7hs36sVhYIE3ew8Y,2797 +greenlet/platform/switch_x32_unix.h,sha256=nM98PKtzTWc1lcM7TRMUZJzskVdR1C69U1UqZRWX0GE,1509 
+greenlet/platform/switch_x64_masm.asm,sha256=nu6n2sWyXuXfpPx40d9YmLfHXUc1sHgeTvX1kUzuvEM,1841 +greenlet/platform/switch_x64_masm.obj,sha256=GNtTNxYdo7idFUYsQv-mrXWgyT5EJ93-9q90lN6svtQ,1078 +greenlet/platform/switch_x64_msvc.h,sha256=LIeasyKo_vHzspdMzMHbosRhrBfKI4BkQOh4qcTHyJw,1805 +greenlet/platform/switch_x86_msvc.h,sha256=TtGOwinbFfnn6clxMNkCz8i6OmgB6kVRrShoF5iT9to,12838 +greenlet/platform/switch_x86_unix.h,sha256=VplW9H0FF0cZHw1DhJdIUs5q6YLS4cwb2nYwjF83R1s,3059 +greenlet/slp_platformselect.h,sha256=hTb3GFdcPUYJTuu1MY93js7MZEax1_e5E-gflpi0RzI,3959 +greenlet/tests/__init__.py,sha256=sqxm7-dZuGBwmNI0n6xrcQJGoHHjoXUGyUTnvHidcYM,9361 +greenlet/tests/__pycache__/__init__.cpython-310.pyc,, +greenlet/tests/__pycache__/fail_clearing_run_switches.cpython-310.pyc,, +greenlet/tests/__pycache__/fail_cpp_exception.cpython-310.pyc,, +greenlet/tests/__pycache__/fail_initialstub_already_started.cpython-310.pyc,, +greenlet/tests/__pycache__/fail_slp_switch.cpython-310.pyc,, +greenlet/tests/__pycache__/fail_switch_three_greenlets.cpython-310.pyc,, +greenlet/tests/__pycache__/fail_switch_three_greenlets2.cpython-310.pyc,, +greenlet/tests/__pycache__/fail_switch_two_greenlets.cpython-310.pyc,, +greenlet/tests/__pycache__/leakcheck.cpython-310.pyc,, +greenlet/tests/__pycache__/test_contextvars.cpython-310.pyc,, +greenlet/tests/__pycache__/test_cpp.cpython-310.pyc,, +greenlet/tests/__pycache__/test_extension_interface.cpython-310.pyc,, +greenlet/tests/__pycache__/test_gc.cpython-310.pyc,, +greenlet/tests/__pycache__/test_generator.cpython-310.pyc,, +greenlet/tests/__pycache__/test_generator_nested.cpython-310.pyc,, +greenlet/tests/__pycache__/test_greenlet.cpython-310.pyc,, +greenlet/tests/__pycache__/test_greenlet_trash.cpython-310.pyc,, +greenlet/tests/__pycache__/test_leaks.cpython-310.pyc,, +greenlet/tests/__pycache__/test_stack_saved.cpython-310.pyc,, +greenlet/tests/__pycache__/test_throw.cpython-310.pyc,, +greenlet/tests/__pycache__/test_tracing.cpython-310.pyc,, +greenlet/tests/__pycache__/test_version.cpython-310.pyc,, +greenlet/tests/__pycache__/test_weakref.cpython-310.pyc,, +greenlet/tests/_test_extension.c,sha256=vkeGA-6oeJcGILsD7oIrT1qZop2GaTOHXiNT7mcSl-0,5773 +greenlet/tests/_test_extension.cpython-310-x86_64-linux-gnu.so,sha256=p118NJ4hObhSNcvKLduspwQExvXHPDAbWVVMU6o3dqs,17256 +greenlet/tests/_test_extension_cpp.cpp,sha256=e0kVnaB8CCaEhE9yHtNyfqTjevsPDKKx-zgxk7PPK48,6565 +greenlet/tests/_test_extension_cpp.cpython-310-x86_64-linux-gnu.so,sha256=jjtWz-wUMS5imeCnjV_p4tKeRexjRgSz8XYkfztHmw0,57880 +greenlet/tests/fail_clearing_run_switches.py,sha256=o433oA_nUCtOPaMEGc8VEhZIKa71imVHXFw7TsXaP8M,1263 +greenlet/tests/fail_cpp_exception.py,sha256=o_ZbipWikok8Bjc-vjiQvcb5FHh2nVW-McGKMLcMzh0,985 +greenlet/tests/fail_initialstub_already_started.py,sha256=txENn5IyzGx2p-XR1XB7qXmC8JX_4mKDEA8kYBXUQKc,1961 +greenlet/tests/fail_slp_switch.py,sha256=rJBZcZfTWR3e2ERQtPAud6YKShiDsP84PmwOJbp4ey0,524 +greenlet/tests/fail_switch_three_greenlets.py,sha256=zSitV7rkNnaoHYVzAGGLnxz-yPtohXJJzaE8ehFDQ0M,956 +greenlet/tests/fail_switch_three_greenlets2.py,sha256=FPJensn2EJxoropl03JSTVP3kgP33k04h6aDWWozrOk,1285 +greenlet/tests/fail_switch_two_greenlets.py,sha256=1CaI8s3504VbbF1vj1uBYuy-zxBHVzHPIAd1LIc8ONg,817 +greenlet/tests/leakcheck.py,sha256=inbfM7_oVzd8jIKGxCgo4JqpFZaDAnWPkSULJ8vIE1s,11964 +greenlet/tests/test_contextvars.py,sha256=xutO-qZgKTwKsA9lAqTjIcTBEiQV4RpNKM-vO2_YCVU,10541 +greenlet/tests/test_cpp.py,sha256=hpxhFAdKJTpAVZP8CBGs1ZcrKdscI9BaDZk4btkI5d4,2736 
+greenlet/tests/test_extension_interface.py,sha256=eJ3cwLacdK2WbsrC-4DgeyHdwLRcG4zx7rrkRtqSzC4,3829 +greenlet/tests/test_gc.py,sha256=PCOaRpIyjNnNlDogGL3FZU_lrdXuM-pv1rxeE5TP5mc,2923 +greenlet/tests/test_generator.py,sha256=tONXiTf98VGm347o1b-810daPiwdla5cbpFg6QI1R1g,1240 +greenlet/tests/test_generator_nested.py,sha256=7v4HOYrf1XZP39dk5IUMubdZ8yc3ynwZcqj9GUJyMSA,3718 +greenlet/tests/test_greenlet.py,sha256=rYWDvMx7ZpMlQju9KRxsBR61ela7HSJCg98JtR7RPOQ,46251 +greenlet/tests/test_greenlet_trash.py,sha256=n2dBlQfOoEO1ODatFi8QdhboH3fB86YtqzcYMYOXxbw,7947 +greenlet/tests/test_leaks.py,sha256=Qeso_qH9MCWJOkk2I3VcTh7UhaNvWxrzAmNBta-fUyY,17714 +greenlet/tests/test_stack_saved.py,sha256=eyzqNY2VCGuGlxhT_In6TvZ6Okb0AXFZVyBEnK1jDwA,446 +greenlet/tests/test_throw.py,sha256=u2TQ_WvvCd6N6JdXWIxVEcXkKu5fepDlz9dktYdmtng,3712 +greenlet/tests/test_tracing.py,sha256=VlwzMU0C1noospZhuUMyB7MHw200emIvGCN_6G2p2ZU,8250 +greenlet/tests/test_version.py,sha256=O9DpAITsOFgiRcjd4odQ7ejmwx_N9Q1zQENVcbtFHIc,1339 +greenlet/tests/test_weakref.py,sha256=F8M23btEF87bIbpptLNBORosbQqNZGiYeKMqYjWrsak,883 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/WHEEL new file mode 100644 index 000000000..6c1fa982a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: setuptools (80.9.0) +Root-Is-Purelib: false +Tag: cp310-cp310-manylinux_2_24_x86_64 +Tag: cp310-cp310-manylinux_2_28_x86_64 + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/licenses/LICENSE b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/licenses/LICENSE new file mode 100644 index 000000000..b73a4a10c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/licenses/LICENSE @@ -0,0 +1,30 @@ +The following files are derived from Stackless Python and are subject to the +same license as Stackless Python: + + src/greenlet/slp_platformselect.h + files in src/greenlet/platform/ directory + +See LICENSE.PSF and http://www.stackless.com/ for details. + +Unless otherwise noted, the files in greenlet have been released under the +following MIT license: + +Copyright (c) Armin Rigo, Christian Tismer and contributors + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. 
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/licenses/LICENSE.PSF b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/licenses/LICENSE.PSF new file mode 100644 index 000000000..d3b509a2b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/licenses/LICENSE.PSF @@ -0,0 +1,47 @@ +PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 +-------------------------------------------- + +1. This LICENSE AGREEMENT is between the Python Software Foundation +("PSF"), and the Individual or Organization ("Licensee") accessing and +otherwise using this software ("Python") in source or binary form and +its associated documentation. + +2. Subject to the terms and conditions of this License Agreement, PSF hereby +grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, +analyze, test, perform and/or display publicly, prepare derivative works, +distribute, and otherwise use Python alone or in any derivative version, +provided, however, that PSF's License Agreement and PSF's notice of copyright, +i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, +2011 Python Software Foundation; All Rights Reserved" are retained in Python +alone or in any derivative version prepared by Licensee. + +3. In the event Licensee prepares a derivative work that is based on +or incorporates Python or any part thereof, and wants to make +the derivative work available to others as provided herein, then +Licensee hereby agrees to include in any such work a brief summary of +the changes made to Python. + +4. PSF is making Python available to Licensee on an "AS IS" +basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR +IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND +DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS +FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT +INFRINGE ANY THIRD PARTY RIGHTS. + +5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON +FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS +A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, +OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +6. This License Agreement will automatically terminate upon a material +breach of its terms and conditions. + +7. Nothing in this License Agreement shall be deemed to create any +relationship of agency, partnership, or joint venture between PSF and +Licensee. This License Agreement does not grant permission to use PSF +trademarks or trade name in a trademark sense to endorse or promote +products or services of Licensee, or any third party. + +8. By copying, installing or otherwise using Python, Licensee +agrees to be bound by the terms and conditions of this License +Agreement. 
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/top_level.txt b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/top_level.txt new file mode 100644 index 000000000..46725be4f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet-3.2.3.dist-info/top_level.txt @@ -0,0 +1 @@ +greenlet diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/CObjects.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/CObjects.cpp new file mode 100644 index 000000000..c135995b6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/CObjects.cpp @@ -0,0 +1,157 @@ +#ifndef COBJECTS_CPP +#define COBJECTS_CPP +/***************************************************************************** + * C interface + * + * These are exported using the CObject API + */ +#ifdef __clang__ +# pragma clang diagnostic push +# pragma clang diagnostic ignored "-Wunused-function" +#endif + +#include "greenlet_exceptions.hpp" + +#include "greenlet_internal.hpp" +#include "greenlet_refs.hpp" + + +#include "TThreadStateDestroy.cpp" + +#include "PyGreenlet.hpp" + +using greenlet::PyErrOccurred; +using greenlet::Require; + + + +extern "C" { +static PyGreenlet* +PyGreenlet_GetCurrent(void) +{ + return GET_THREAD_STATE().state().get_current().relinquish_ownership(); +} + +static int +PyGreenlet_SetParent(PyGreenlet* g, PyGreenlet* nparent) +{ + return green_setparent((PyGreenlet*)g, (PyObject*)nparent, NULL); +} + +static PyGreenlet* +PyGreenlet_New(PyObject* run, PyGreenlet* parent) +{ + using greenlet::refs::NewDictReference; + // In the past, we didn't use green_new and green_init, but that + // was a maintenance issue because we duplicated code. This way is + // much safer, but slightly slower. If that's a problem, we could + // refactor green_init to separate argument parsing from initialization. 
+ OwnedGreenlet g = OwnedGreenlet::consuming(green_new(&PyGreenlet_Type, nullptr, nullptr)); + if (!g) { + return NULL; + } + + try { + NewDictReference kwargs; + if (run) { + kwargs.SetItem(mod_globs->str_run, run); + } + if (parent) { + kwargs.SetItem("parent", (PyObject*)parent); + } + + Require(green_init(g.borrow(), mod_globs->empty_tuple, kwargs.borrow())); + } + catch (const PyErrOccurred&) { + return nullptr; + } + + return g.relinquish_ownership(); +} + +static PyObject* +PyGreenlet_Switch(PyGreenlet* self, PyObject* args, PyObject* kwargs) +{ + if (!PyGreenlet_Check(self)) { + PyErr_BadArgument(); + return NULL; + } + + if (args == NULL) { + args = mod_globs->empty_tuple; + } + + if (kwargs == NULL || !PyDict_Check(kwargs)) { + kwargs = NULL; + } + + return green_switch(self, args, kwargs); +} + +static PyObject* +PyGreenlet_Throw(PyGreenlet* self, PyObject* typ, PyObject* val, PyObject* tb) +{ + if (!PyGreenlet_Check(self)) { + PyErr_BadArgument(); + return nullptr; + } + try { + PyErrPieces err_pieces(typ, val, tb); + return internal_green_throw(self, err_pieces).relinquish_ownership(); + } + catch (const PyErrOccurred&) { + return nullptr; + } +} + + + +static int +Extern_PyGreenlet_MAIN(PyGreenlet* self) +{ + if (!PyGreenlet_Check(self)) { + PyErr_BadArgument(); + return -1; + } + return self->pimpl->main(); +} + +static int +Extern_PyGreenlet_ACTIVE(PyGreenlet* self) +{ + if (!PyGreenlet_Check(self)) { + PyErr_BadArgument(); + return -1; + } + return self->pimpl->active(); +} + +static int +Extern_PyGreenlet_STARTED(PyGreenlet* self) +{ + if (!PyGreenlet_Check(self)) { + PyErr_BadArgument(); + return -1; + } + return self->pimpl->started(); +} + +static PyGreenlet* +Extern_PyGreenlet_GET_PARENT(PyGreenlet* self) +{ + if (!PyGreenlet_Check(self)) { + PyErr_BadArgument(); + return NULL; + } + // This can return NULL even if there is no exception + return self->pimpl->parent().acquire(); +} +} // extern C. + +/** End C API ****************************************************************/ +#ifdef __clang__ +# pragma clang diagnostic pop +#endif + + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyGreenlet.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyGreenlet.cpp new file mode 100644 index 000000000..29c0bba0b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyGreenlet.cpp @@ -0,0 +1,738 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ +#ifndef PYGREENLET_CPP +#define PYGREENLET_CPP +/***************** +The Python slot functions for TGreenlet. 
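The CObjects.cpp block above exports the C-level counterparts of greenlet's public Python API (PyGreenlet_New, PyGreenlet_Switch, PyGreenlet_Throw, and the state accessors) for use by other extensions. For orientation, here is a minimal sketch of the same operations performed from Python, using only the documented public API:

```python
from greenlet import greenlet

def add(a, b):
    # Runs inside the child greenlet; the return value is delivered
    # to the parent's pending switch() call when run() finishes.
    return a + b

child = greenlet(add)        # the Python-level analogue of PyGreenlet_New
result = child.switch(2, 3)  # the analogue of PyGreenlet_Switch
assert result == 5
assert child.dead            # run() returned, so the greenlet is now dead
```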
+ */ + + +#define PY_SSIZE_T_CLEAN +#include <Python.h> +#include "structmember.h" // PyMemberDef + +#include "greenlet_internal.hpp" +#include "TThreadStateDestroy.cpp" +#include "TGreenlet.hpp" +// #include "TUserGreenlet.cpp" +// #include "TMainGreenlet.cpp" +// #include "TBrokenGreenlet.cpp" + + +#include "greenlet_refs.hpp" +#include "greenlet_slp_switch.hpp" + +#include "greenlet_thread_support.hpp" +#include "TGreenlet.hpp" + +#include "TGreenletGlobals.cpp" +#include "TThreadStateDestroy.cpp" +#include "PyGreenlet.hpp" +// #include "TGreenlet.cpp" + +// #include "TExceptionState.cpp" +// #include "TPythonState.cpp" +// #include "TStackState.cpp" + +using greenlet::LockGuard; +using greenlet::LockInitError; +using greenlet::PyErrOccurred; +using greenlet::Require; + +using greenlet::g_handle_exit; +using greenlet::single_result; + +using greenlet::Greenlet; +using greenlet::UserGreenlet; +using greenlet::MainGreenlet; +using greenlet::BrokenGreenlet; +using greenlet::ThreadState; +using greenlet::PythonState; + + + +static PyGreenlet* +green_new(PyTypeObject* type, PyObject* UNUSED(args), PyObject* UNUSED(kwds)) +{ + PyGreenlet* o = + (PyGreenlet*)PyBaseObject_Type.tp_new(type, mod_globs->empty_tuple, mod_globs->empty_dict); + if (o) { + new UserGreenlet(o, GET_THREAD_STATE().state().borrow_current()); + assert(Py_REFCNT(o) == 1); + } + return o; +} + + +// green_init is used in the tp_init slot. So it's important that +// it can be called directly from CPython. Thus, we don't use +// BorrowedGreenlet and BorrowedObject --- although in theory +// these should be binary layout compatible, that may not be +// guaranteed to be the case (32-bit linux ppc possibly). +static int +green_init(PyGreenlet* self, PyObject* args, PyObject* kwargs) +{ + PyArgParseParam run; + PyArgParseParam nparent; + static const char* kwlist[] = { + "run", + "parent", + NULL + }; + + // recall: The O specifier does NOT increase the reference count. + if (!PyArg_ParseTupleAndKeywords( + args, kwargs, "|OO:green", (char**)kwlist, &run, &nparent)) { + return -1; + } + + if (run) { + if (green_setrun(self, run, NULL)) { + return -1; + } + } + if (nparent && !nparent.is_None()) { + return green_setparent(self, nparent, NULL); + } + return 0; +} + + + +static int +green_traverse(PyGreenlet* self, visitproc visit, void* arg) +{ + // We must only visit referenced objects, i.e. only objects + // Py_INCREF'ed by this greenlet (directly or indirectly): + // + // - stack_prev is not visited: holds previous stack pointer, but it's not + // referenced + // - frames are not visited as we don't strongly reference them; + // alive greenlets are not garbage collected + // anyway. This can be a problem, however, if this greenlet is + // never allowed to finish, and is referenced from the frame: we + // have an uncollectible cycle in that case. Note that the + // frame object itself is also frequently not even tracked by the GC + // starting with Python 3.7 (frames are allocated by the + // interpreter untracked, and only become tracked when their + // evaluation is finished if they have a refcount > 1). All of + // this is to say that we should probably strongly reference + // the frame object. Doing so, while always allowing GC on a + // greenlet, solves several leaks for us. + + Py_VISIT(self->dict); + if (!self->pimpl) { + // Hmm. I have seen this at interpreter shutdown time, + // I think. That's very odd because this doesn't go away until + // we're ``green_dealloc()``, at which point we shouldn't be + // traversed anymore.
+ return 0; + } + + return self->pimpl->tp_traverse(visit, arg); +} + +static int +green_is_gc(PyObject* _self) +{ + BorrowedGreenlet self(_self); + int result = 0; + /* Main greenlet can be garbage collected since it can only + become unreachable if the underlying thread exited. + Active greenlets --- including those that are suspended --- + cannot be garbage collected, however. + */ + if (self->main() || !self->active()) { + result = 1; + } + // The main greenlet pointer will eventually go away after the thread dies. + if (self->was_running_in_dead_thread()) { + // Our thread is dead! We can never run again. Might as well + // GC us. Note that if a tuple containing only us and other + // immutable objects had been scanned before this, when we + // would have returned 0, the tuple will take itself out of GC + // tracking and never be investigated again. So that could + // result in both us and the tuple leaking due to an + // unreachable/uncollectible reference. The same goes for + // dictionaries. + // + // It's not a great idea to be changing our GC state on the + // fly. + result = 1; + } + return result; +} + + +static int +green_clear(PyGreenlet* self) +{ + /* Greenlet is only cleared if it is about to be collected. + Since active greenlets are not garbage collectable, we can + be sure that, even if they are deallocated during clear, + nothing they reference is in unreachable or finalizers, + so even if it switches we are relatively safe. */ + // XXX: Are we responsible for clearing weakrefs here? + Py_CLEAR(self->dict); + return self->pimpl->tp_clear(); +} + +/** + * Returns 0 on failure (the object was resurrected) or 1 on success. + **/ +static int +_green_dealloc_kill_started_non_main_greenlet(BorrowedGreenlet self) +{ + /* Hacks hacks hacks copied from instance_dealloc() */ + /* Temporarily resurrect the greenlet. */ + assert(self.REFCNT() == 0); + Py_SET_REFCNT(self.borrow(), 1); + /* Save the current exception, if any. */ + PyErrPieces saved_err; + try { + // BY THE TIME WE GET HERE, the state may actually be going + // away + // if we're shutting down the interpreter and freeing thread + // entries, + // this could result in freeing greenlets that were leaked. So + // we can't try to read the state. + self->deallocing_greenlet_in_thread( + self->thread_state() + ? static_cast<ThreadState*>(GET_THREAD_STATE()) + : nullptr); + } + catch (const PyErrOccurred&) { + PyErr_WriteUnraisable(self.borrow_o()); + /* XXX what else should we do? */ + } + /* Check for no resurrection must be done while we keep + * our internal reference, otherwise PyFile_WriteObject + * causes recursion if using Py_INCREF/Py_DECREF + */ + if (self.REFCNT() == 1 && self->active()) { + /* Not resurrected, but still not dead! + XXX what else should we do? we complain. */ + PyObject* f = PySys_GetObject("stderr"); + Py_INCREF(self.borrow_o()); /* leak! */ + if (f != NULL) { + PyFile_WriteString("GreenletExit did not kill ", f); + PyFile_WriteObject(self.borrow_o(), f, 0); + PyFile_WriteString("\n", f); + } + } + /* Restore the saved exception. */ + saved_err.PyErrRestore(); + /* Undo the temporary resurrection; can't use DECREF here, + * it would cause a recursive call. + */ + assert(self.REFCNT() > 0); + + Py_ssize_t refcnt = self.REFCNT() - 1; + Py_SET_REFCNT(self.borrow_o(), refcnt); + if (refcnt != 0) { + /* Resurrected!
*/ + _Py_NewReference(self.borrow_o()); + Py_SET_REFCNT(self.borrow_o(), refcnt); + /* Better to use tp_finalizer slot (PEP 442) + * and call ``PyObject_CallFinalizerFromDealloc``, + * but that's only supported in Python 3.4+; see + * Modules/_io/iobase.c for an example. + * + * The following approach is copied from iobase.c in CPython 2.7. + * (along with much of this function in general). Here's their + * comment: + * + * When called from a heap type's dealloc, the type will be + * decref'ed on return (see e.g. subtype_dealloc in typeobject.c). */ + if (PyType_HasFeature(self.TYPE(), Py_TPFLAGS_HEAPTYPE)) { + Py_INCREF(self.TYPE()); + } + + PyObject_GC_Track((PyObject*)self); + + _Py_DEC_REFTOTAL; +#ifdef COUNT_ALLOCS + --Py_TYPE(self)->tp_frees; + --Py_TYPE(self)->tp_allocs; +#endif /* COUNT_ALLOCS */ + return 0; + } + return 1; +} + + +static void +green_dealloc(PyGreenlet* self) +{ + PyObject_GC_UnTrack(self); + BorrowedGreenlet me(self); + if (me->active() + && me->started() + && !me->main()) { + if (!_green_dealloc_kill_started_non_main_greenlet(me)) { + return; + } + } + + if (self->weakreflist != NULL) { + PyObject_ClearWeakRefs((PyObject*)self); + } + Py_CLEAR(self->dict); + + if (self->pimpl) { + // In case deleting this, which frees some memory, + // somehow winds up calling back into us. That's usually a + //bug in our code. + Greenlet* p = self->pimpl; + self->pimpl = nullptr; + delete p; + } + // and finally we're done. self is now invalid. + Py_TYPE(self)->tp_free((PyObject*)self); +} + + + +static OwnedObject +internal_green_throw(BorrowedGreenlet self, PyErrPieces& err_pieces) +{ + PyObject* result = nullptr; + err_pieces.PyErrRestore(); + assert(PyErr_Occurred()); + if (self->started() && !self->active()) { + /* dead greenlet: turn GreenletExit into a regular return */ + result = g_handle_exit(OwnedObject()).relinquish_ownership(); + } + self->args() <<= result; + + return single_result(self->g_switch()); +} + + + +PyDoc_STRVAR( + green_switch_doc, + "switch(*args, **kwargs)\n" + "\n" + "Switch execution to this greenlet.\n" + "\n" + "If this greenlet has never been run, then this greenlet\n" + "will be switched to using the body of ``self.run(*args, **kwargs)``.\n" + "\n" + "If the greenlet is active (has been run, but was switch()'ed\n" + "out before leaving its run function), then this greenlet will\n" + "be resumed and the return value to its switch call will be\n" + "None if no arguments are given, the given argument if one\n" + "argument is given, or the args tuple and keyword args dict if\n" + "multiple arguments are given.\n" + "\n" + "If the greenlet is dead, or is the current greenlet then this\n" + "function will simply return the arguments using the same rules as\n" + "above.\n"); + +static PyObject* +green_switch(PyGreenlet* self, PyObject* args, PyObject* kwargs) +{ + using greenlet::SwitchingArgs; + SwitchingArgs switch_args(OwnedObject::owning(args), OwnedObject::owning(kwargs)); + self->pimpl->may_switch_away(); + self->pimpl->args() <<= switch_args; + + // If we're switching out of a greenlet, and that switch is the + // last thing the greenlet does, the greenlet ought to be able to + // go ahead and die at that point. Currently, someone else must + // manually switch back to the greenlet so that we "fall off the + // end" and can perform cleanup. You'd think we'd be able to + // figure out that this is happening using the frame's ``f_lasti`` + // member, which is supposed to be an index into + // ``frame->f_code->co_code``, the bytecode string. 
However, in + // recent interpreters, ``f_lasti`` tends not to be updated thanks + // to things like the PREDICT() macros in ceval.c. So it doesn't + // really work to do that in many cases. For example, the Python + // code: + // def run(): + // greenlet.getcurrent().parent.switch() + // produces bytecode of len 16, with the actual call to switch() + // being at index 10 (in Python 3.10). However, the reported + // ``f_lasti`` we actually see is...5! (Which happens to be the + // second byte of the CALL_METHOD op for ``getcurrent()``). + + try { + //OwnedObject result = single_result(self->pimpl->g_switch()); + OwnedObject result(single_result(self->pimpl->g_switch())); +#ifndef NDEBUG + // Note that the current greenlet isn't necessarily self. If self + // finished, we went to one of its parents. + assert(!self->pimpl->args()); + + const BorrowedGreenlet& current = GET_THREAD_STATE().state().borrow_current(); + // It's possible it's never been switched to. + assert(!current->args()); +#endif + PyObject* p = result.relinquish_ownership(); + + if (!p && !PyErr_Occurred()) { + // This shouldn't be happening anymore, so the asserts + // are there for debug builds. Non-debug builds + // crash "gracefully" in this case, although there is an + // argument to be made for killing the process in all + // cases --- for this to be the case, our switches + // probably nested in an incorrect way, so the state is + // suspicious. Nothing should be corrupt though, just + // confused at the Python level. Letting this propagate is + // probably good enough. + assert(p || PyErr_Occurred()); + throw PyErrOccurred( + mod_globs->PyExc_GreenletError, + "Greenlet.switch() returned NULL without an exception set." + ); + } + return p; + } + catch(const PyErrOccurred&) { + return nullptr; + } +} + +PyDoc_STRVAR( + green_throw_doc, + "Switches execution to this greenlet, but immediately raises the\n" + "given exception in this greenlet. If no argument is provided, the " + "exception\n" + "defaults to `greenlet.GreenletExit`. The normal exception\n" + "propagation rules apply, as described for `switch`. Note that calling " + "this\n" + "method is almost equivalent to the following::\n" + "\n" + " def raiser():\n" + " raise typ, val, tb\n" + " g_raiser = greenlet(raiser, parent=g)\n" + " g_raiser.switch()\n" + "\n" + "except that this trick does not work for the\n" + "`greenlet.GreenletExit` exception, which would not propagate\n" + "from ``g_raiser`` to ``g``.\n"); + +static PyObject* +green_throw(PyGreenlet* self, PyObject* args) +{ + PyArgParseParam typ(mod_globs->PyExc_GreenletExit); + PyArgParseParam val; + PyArgParseParam tb; + + if (!PyArg_ParseTuple(args, "|OOO:throw", &typ, &val, &tb)) { + return nullptr; + } + + assert(typ.borrow() || val.borrow()); + + self->pimpl->may_switch_away(); + try { + // Both normalizing the error and the actual throw_greenlet + // could throw PyErrOccurred. + PyErrPieces err_pieces(typ.borrow(), val.borrow(), tb.borrow()); + + return internal_green_throw(self, err_pieces).relinquish_ownership(); + } + catch (const PyErrOccurred&) { + return nullptr; + } +} + +static int +green_bool(PyGreenlet* self) +{ + return self->pimpl->active(); +} + +/** + * CAUTION: Allocates memory, may run GC and arbitrary Python code. 
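The docstring for green_switch above spells out how the resumed greenlet sees the arguments of the switch() call that woke it. A minimal sketch of those packaging rules, matching the documented behavior (the `echo` helper is illustrative only):

```python
from greenlet import greenlet, getcurrent

def echo():
    # Park in parent.switch(); each later g.switch(...) resumes here,
    # and `value` receives the arguments per green_switch's docstring.
    while True:
        value = getcurrent().parent.switch()
        print("got:", value)

g = greenlet(echo)
g.switch()        # start the greenlet; it parks immediately
g.switch()        # got: None                (no arguments)
g.switch(1)       # got: 1                   (single argument)
g.switch(1, 2)    # got: (1, 2)              (args tuple)
g.switch(a=1)     # got: {'a': 1}            (kwargs dict)
g.switch(1, a=2)  # got: ((1,), {'a': 2})    (args and kwargs)
```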
+ */ +static PyObject* +green_getdict(PyGreenlet* self, void* UNUSED(context)) +{ + if (self->dict == NULL) { + self->dict = PyDict_New(); + if (self->dict == NULL) { + return NULL; + } + } + Py_INCREF(self->dict); + return self->dict; +} + +static int +green_setdict(PyGreenlet* self, PyObject* val, void* UNUSED(context)) +{ + PyObject* tmp; + + if (val == NULL) { + PyErr_SetString(PyExc_TypeError, "__dict__ may not be deleted"); + return -1; + } + if (!PyDict_Check(val)) { + PyErr_SetString(PyExc_TypeError, "__dict__ must be a dictionary"); + return -1; + } + tmp = self->dict; + Py_INCREF(val); + self->dict = val; + Py_XDECREF(tmp); + return 0; +} + +static bool +_green_not_dead(BorrowedGreenlet self) +{ + // XXX: Where else should we do this? + // Probably on entry to most Python-facing functions? + if (self->was_running_in_dead_thread()) { + self->deactivate_and_free(); + return false; + } + return self->active() || !self->started(); +} + + +static PyObject* +green_getdead(PyGreenlet* self, void* UNUSED(context)) +{ + if (_green_not_dead(self)) { + Py_RETURN_FALSE; + } + else { + Py_RETURN_TRUE; + } +} + +static PyObject* +green_get_stack_saved(PyGreenlet* self, void* UNUSED(context)) +{ + return PyLong_FromSsize_t(self->pimpl->stack_saved()); +} + + +static PyObject* +green_getrun(PyGreenlet* self, void* UNUSED(context)) +{ + try { + OwnedObject result(BorrowedGreenlet(self)->run()); + return result.relinquish_ownership(); + } + catch(const PyErrOccurred&) { + return nullptr; + } +} + + +static int +green_setrun(PyGreenlet* self, PyObject* nrun, void* UNUSED(context)) +{ + try { + BorrowedGreenlet(self)->run(nrun); + return 0; + } + catch(const PyErrOccurred&) { + return -1; + } +} + +static PyObject* +green_getparent(PyGreenlet* self, void* UNUSED(context)) +{ + return BorrowedGreenlet(self)->parent().acquire_or_None(); +} + + +static int +green_setparent(PyGreenlet* self, PyObject* nparent, void* UNUSED(context)) +{ + try { + BorrowedGreenlet(self)->parent(nparent); + } + catch(const PyErrOccurred&) { + return -1; + } + return 0; +} + + +static PyObject* +green_getcontext(const PyGreenlet* self, void* UNUSED(context)) +{ + const Greenlet *const g = self->pimpl; + try { + OwnedObject result(g->context()); + return result.relinquish_ownership(); + } + catch(const PyErrOccurred&) { + return nullptr; + } +} + +static int +green_setcontext(PyGreenlet* self, PyObject* nctx, void* UNUSED(context)) +{ + try { + BorrowedGreenlet(self)->context(nctx); + return 0; + } + catch(const PyErrOccurred&) { + return -1; + } +} + + +static PyObject* +green_getframe(PyGreenlet* self, void* UNUSED(context)) +{ + const PythonState::OwnedFrame& top_frame = BorrowedGreenlet(self)->top_frame(); + return top_frame.acquire_or_None(); +} + + +static PyObject* +green_getstate(PyGreenlet* self) +{ + PyErr_Format(PyExc_TypeError, + "cannot serialize '%s' object", + Py_TYPE(self)->tp_name); + return nullptr; +} + +static PyObject* +green_repr(PyGreenlet* _self) +{ + BorrowedGreenlet self(_self); + /* + Return a string like + + + The handling of greenlets across threads is not super good. + We mostly use the internal definitions of these terms, but they + generally should make sense to users as well. + */ + PyObject* result; + int never_started = !self->started() && !self->active(); + + const char* const tp_name = Py_TYPE(self)->tp_name; + + if (_green_not_dead(self)) { + /* XXX: The otid= is almost useless because you can't correlate it to + any thread identifier exposed to Python. 
We could use + PyThreadState_GET()->thread_id, but we'd need to save that in the + greenlet, or save the whole PyThreadState object itself. + + As it stands, it's only useful for identifying greenlets from the same thread. + */ + const char* state_in_thread; + if (self->was_running_in_dead_thread()) { + // The thread it was running in is dead! + // This can happen, especially at interpreter shut down. + // It complicates debugging output because it may be + // impossible to access the current thread state at that + // time. Thus, don't access the current thread state. + state_in_thread = " (thread exited)"; + } + else { + state_in_thread = GET_THREAD_STATE().state().is_current(self) + ? " current" + : (self->started() ? " suspended" : ""); + } + result = PyUnicode_FromFormat( + "<%s object at %p (otid=%p)%s%s%s%s>", + tp_name, + self.borrow_o(), + self->thread_state(), + state_in_thread, + self->active() ? " active" : "", + never_started ? " pending" : " started", + self->main() ? " main" : "" + ); + } + else { + result = PyUnicode_FromFormat( + "<%s object at %p (otid=%p) %sdead>", + tp_name, + self.borrow_o(), + self->thread_state(), + self->was_running_in_dead_thread() + ? "(thread exited) " + : "" + ); + } + + return result; +} + + +static PyMethodDef green_methods[] = { + { + .ml_name="switch", + .ml_meth=reinterpret_cast<PyCFunction>(green_switch), + .ml_flags=METH_VARARGS | METH_KEYWORDS, + .ml_doc=green_switch_doc + }, + {.ml_name="throw", .ml_meth=(PyCFunction)green_throw, .ml_flags=METH_VARARGS, .ml_doc=green_throw_doc}, + {.ml_name="__getstate__", .ml_meth=(PyCFunction)green_getstate, .ml_flags=METH_NOARGS, .ml_doc=NULL}, + {.ml_name=NULL, .ml_meth=NULL} /* sentinel */ +}; + +static PyGetSetDef green_getsets[] = { + /* name, getter, setter, doc, context pointer */ + {.name="__dict__", .get=(getter)green_getdict, .set=(setter)green_setdict}, + {.name="run", .get=(getter)green_getrun, .set=(setter)green_setrun}, + {.name="parent", .get=(getter)green_getparent, .set=(setter)green_setparent}, + {.name="gr_frame", .get=(getter)green_getframe }, + { + .name="gr_context", + .get=(getter)green_getcontext, + .set=(setter)green_setcontext + }, + {.name="dead", .get=(getter)green_getdead}, + {.name="_stack_saved", .get=(getter)green_get_stack_saved}, + {.name=NULL} +}; + +static PyMemberDef green_members[] = { + {.name=NULL} +}; + +static PyNumberMethods green_as_number = { + .nb_bool=(inquiry)green_bool, +}; + + +PyTypeObject PyGreenlet_Type = { + .ob_base=PyVarObject_HEAD_INIT(NULL, 0) + .tp_name="greenlet.greenlet", /* tp_name */ + .tp_basicsize=sizeof(PyGreenlet), /* tp_basicsize */ + /* methods */ + .tp_dealloc=(destructor)green_dealloc, /* tp_dealloc */ + .tp_repr=(reprfunc)green_repr, /* tp_repr */ + .tp_as_number=&green_as_number, /* tp_as_number */ + .tp_flags=G_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */ + .tp_doc="greenlet(run=None, parent=None) -> greenlet\n\n" + "Creates a new greenlet object (without running it).\n\n" + " - *run* -- The callable to invoke.\n" + " - *parent* -- The parent greenlet.
The default is the current " + "greenlet.", /* tp_doc */ + .tp_traverse=(traverseproc)green_traverse, /* tp_traverse */ + .tp_clear=(inquiry)green_clear, /* tp_clear */ + .tp_weaklistoffset=offsetof(PyGreenlet, weakreflist), /* tp_weaklistoffset */ + + .tp_methods=green_methods, /* tp_methods */ + .tp_members=green_members, /* tp_members */ + .tp_getset=green_getsets, /* tp_getset */ + .tp_dictoffset=offsetof(PyGreenlet, dict), /* tp_dictoffset */ + .tp_init=(initproc)green_init, /* tp_init */ + .tp_alloc=PyType_GenericAlloc, /* tp_alloc */ + .tp_new=(newfunc)green_new, /* tp_new */ + .tp_free=PyObject_GC_Del, /* tp_free */ + .tp_is_gc=(inquiry)green_is_gc, /* tp_is_gc */ +}; + +#endif + +// Local Variables: +// flycheck-clang-include-path: ("/opt/local/Library/Frameworks/Python.framework/Versions/3.8/include/python3.8") +// End: diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyGreenlet.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyGreenlet.hpp new file mode 100644 index 000000000..df6cd805b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyGreenlet.hpp @@ -0,0 +1,35 @@ +#ifndef PYGREENLET_HPP +#define PYGREENLET_HPP + + +#include "greenlet.h" +#include "greenlet_compiler_compat.hpp" +#include "greenlet_refs.hpp" + + +using greenlet::refs::OwnedGreenlet; +using greenlet::refs::BorrowedGreenlet; +using greenlet::refs::BorrowedObject;; +using greenlet::refs::OwnedObject; +using greenlet::refs::PyErrPieces; + + +// XXX: These doesn't really belong here, it's not a Python slot. +static OwnedObject internal_green_throw(BorrowedGreenlet self, PyErrPieces& err_pieces); + +static PyGreenlet* green_new(PyTypeObject* type, PyObject* UNUSED(args), PyObject* UNUSED(kwds)); +static int green_clear(PyGreenlet* self); +static int green_init(PyGreenlet* self, PyObject* args, PyObject* kwargs); +static int green_setparent(PyGreenlet* self, PyObject* nparent, void* UNUSED(context)); +static int green_setrun(PyGreenlet* self, PyObject* nrun, void* UNUSED(context)); +static int green_traverse(PyGreenlet* self, visitproc visit, void* arg); +static void green_dealloc(PyGreenlet* self); +static PyObject* green_getparent(PyGreenlet* self, void* UNUSED(context)); + +static int green_is_gc(PyObject* self); +static PyObject* green_getdead(PyGreenlet* self, void* UNUSED(context)); +static PyObject* green_getrun(PyGreenlet* self, void* UNUSED(context)); +static int green_setcontext(PyGreenlet* self, PyObject* nctx, void* UNUSED(context)); +static PyObject* green_getframe(PyGreenlet* self, void* UNUSED(context)); +static PyObject* green_repr(PyGreenlet* self); +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyGreenletUnswitchable.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyGreenletUnswitchable.cpp new file mode 100644 index 000000000..1b768ee34 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyGreenletUnswitchable.cpp @@ -0,0 +1,147 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ +/** + Implementation of the Python slots for PyGreenletUnswitchable_Type +*/ +#ifndef PY_GREENLET_UNSWITCHABLE_CPP +#define PY_GREENLET_UNSWITCHABLE_CPP + + + +#define PY_SSIZE_T_CLEAN +#include +#include "structmember.h" // PyMemberDef + +#include "greenlet_internal.hpp" +// Code after this point can assume access to things declared in stdint.h, +// including the fixed-width types. This goes for the platform-specific switch functions +// as well. 
+#include "greenlet_refs.hpp" +#include "greenlet_slp_switch.hpp" + +#include "greenlet_thread_support.hpp" +#include "TGreenlet.hpp" + +#include "TGreenlet.cpp" +#include "TGreenletGlobals.cpp" +#include "TThreadStateDestroy.cpp" + + +using greenlet::LockGuard; +using greenlet::LockInitError; +using greenlet::PyErrOccurred; +using greenlet::Require; + +using greenlet::g_handle_exit; +using greenlet::single_result; + +using greenlet::Greenlet; +using greenlet::UserGreenlet; +using greenlet::MainGreenlet; +using greenlet::BrokenGreenlet; +using greenlet::ThreadState; +using greenlet::PythonState; + + +#include "PyGreenlet.hpp" + +static PyGreenlet* +green_unswitchable_new(PyTypeObject* type, PyObject* UNUSED(args), PyObject* UNUSED(kwds)) +{ + PyGreenlet* o = + (PyGreenlet*)PyBaseObject_Type.tp_new(type, mod_globs->empty_tuple, mod_globs->empty_dict); + if (o) { + new BrokenGreenlet(o, GET_THREAD_STATE().state().borrow_current()); + assert(Py_REFCNT(o) == 1); + } + return o; +} + +static PyObject* +green_unswitchable_getforce(PyGreenlet* self, void* UNUSED(context)) +{ + BrokenGreenlet* broken = dynamic_cast(self->pimpl); + return PyBool_FromLong(broken->_force_switch_error); +} + +static int +green_unswitchable_setforce(PyGreenlet* self, PyObject* nforce, void* UNUSED(context)) +{ + if (!nforce) { + PyErr_SetString( + PyExc_AttributeError, + "Cannot delete force_switch_error" + ); + return -1; + } + BrokenGreenlet* broken = dynamic_cast(self->pimpl); + int is_true = PyObject_IsTrue(nforce); + if (is_true == -1) { + return -1; + } + broken->_force_switch_error = is_true; + return 0; +} + +static PyObject* +green_unswitchable_getforceslp(PyGreenlet* self, void* UNUSED(context)) +{ + BrokenGreenlet* broken = dynamic_cast(self->pimpl); + return PyBool_FromLong(broken->_force_slp_switch_error); +} + +static int +green_unswitchable_setforceslp(PyGreenlet* self, PyObject* nforce, void* UNUSED(context)) +{ + if (!nforce) { + PyErr_SetString( + PyExc_AttributeError, + "Cannot delete force_slp_switch_error" + ); + return -1; + } + BrokenGreenlet* broken = dynamic_cast(self->pimpl); + int is_true = PyObject_IsTrue(nforce); + if (is_true == -1) { + return -1; + } + broken->_force_slp_switch_error = is_true; + return 0; +} + +static PyGetSetDef green_unswitchable_getsets[] = { + /* name, getter, setter, doc, closure (context pointer) */ + { + .name="force_switch_error", + .get=(getter)green_unswitchable_getforce, + .set=(setter)green_unswitchable_setforce, + .doc=NULL + }, + { + .name="force_slp_switch_error", + .get=(getter)green_unswitchable_getforceslp, + .set=(setter)green_unswitchable_setforceslp, + .doc=nullptr + }, + {.name=nullptr} +}; + +PyTypeObject PyGreenletUnswitchable_Type = { + .ob_base=PyVarObject_HEAD_INIT(NULL, 0) + .tp_name="greenlet._greenlet.UnswitchableGreenlet", + .tp_dealloc= (destructor)green_dealloc, /* tp_dealloc */ + .tp_flags=G_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */ + .tp_doc="Undocumented internal class", /* tp_doc */ + .tp_traverse=(traverseproc)green_traverse, /* tp_traverse */ + .tp_clear=(inquiry)green_clear, /* tp_clear */ + + .tp_getset=green_unswitchable_getsets, /* tp_getset */ + .tp_base=&PyGreenlet_Type, /* tp_base */ + .tp_init=(initproc)green_init, /* tp_init */ + .tp_alloc=PyType_GenericAlloc, /* tp_alloc */ + .tp_new=(newfunc)green_unswitchable_new, /* tp_new */ + .tp_free=PyObject_GC_Del, /* tp_free */ + .tp_is_gc=(inquiry)green_is_gc, /* tp_is_gc */ +}; + + +#endif diff --git 
a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyModule.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyModule.cpp new file mode 100644 index 000000000..6adcb5c3d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/PyModule.cpp @@ -0,0 +1,292 @@ +#ifndef PY_MODULE_CPP +#define PY_MODULE_CPP + +#include "greenlet_internal.hpp" + + +#include "TGreenletGlobals.cpp" +#include "TMainGreenlet.cpp" +#include "TThreadStateDestroy.cpp" + +using greenlet::LockGuard; +using greenlet::ThreadState; + +#ifdef __clang__ +# pragma clang diagnostic push +# pragma clang diagnostic ignored "-Wunused-function" +# pragma clang diagnostic ignored "-Wunused-variable" +#endif + +PyDoc_STRVAR(mod_getcurrent_doc, + "getcurrent() -> greenlet\n" + "\n" + "Returns the current greenlet (i.e. the one which called this " + "function).\n"); + +static PyObject* +mod_getcurrent(PyObject* UNUSED(module)) +{ + return GET_THREAD_STATE().state().get_current().relinquish_ownership_o(); +} + +PyDoc_STRVAR(mod_settrace_doc, + "settrace(callback) -> object\n" + "\n" + "Sets a new tracing function and returns the previous one.\n"); +static PyObject* +mod_settrace(PyObject* UNUSED(module), PyObject* args) +{ + PyArgParseParam tracefunc; + if (!PyArg_ParseTuple(args, "O", &tracefunc)) { + return NULL; + } + ThreadState& state = GET_THREAD_STATE(); + OwnedObject previous = state.get_tracefunc(); + if (!previous) { + previous = Py_None; + } + + state.set_tracefunc(tracefunc); + + return previous.relinquish_ownership(); +} + +PyDoc_STRVAR(mod_gettrace_doc, + "gettrace() -> object\n" + "\n" + "Returns the currently set tracing function, or None.\n"); + +static PyObject* +mod_gettrace(PyObject* UNUSED(module)) +{ + OwnedObject tracefunc = GET_THREAD_STATE().state().get_tracefunc(); + if (!tracefunc) { + tracefunc = Py_None; + } + return tracefunc.relinquish_ownership(); +} + + + +PyDoc_STRVAR(mod_set_thread_local_doc, + "set_thread_local(key, value) -> None\n" + "\n" + "Set a value in the current thread-local dictionary. Debugging only.\n"); + +static PyObject* +mod_set_thread_local(PyObject* UNUSED(module), PyObject* args) +{ + PyArgParseParam key; + PyArgParseParam value; + PyObject* result = NULL; + + if (PyArg_UnpackTuple(args, "set_thread_local", 2, 2, &key, &value)) { + if(PyDict_SetItem( + PyThreadState_GetDict(), // borrow + key, + value) == 0 ) { + // success + Py_INCREF(Py_None); + result = Py_None; + } + } + return result; +} + +PyDoc_STRVAR(mod_get_pending_cleanup_count_doc, + "get_pending_cleanup_count() -> Integer\n" + "\n" + "Get the number of greenlet cleanup operations pending. Testing only.\n"); + + +static PyObject* +mod_get_pending_cleanup_count(PyObject* UNUSED(module)) +{ + LockGuard cleanup_lock(*mod_globs->thread_states_to_destroy_lock); + return PyLong_FromSize_t(mod_globs->thread_states_to_destroy.size()); +} + +PyDoc_STRVAR(mod_get_total_main_greenlets_doc, + "get_total_main_greenlets() -> Integer\n" + "\n" + "Quickly return the number of main greenlets that exist. 
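The settrace/gettrace pair defined above installs a per-thread callback that fires on every switch or throw. A hedged sketch of typical usage, matching greenlet's documented tracing API (the callback receives the event name and an (origin, target) pair):

```python
import greenlet

def tracer(event, args):
    # event is "switch" or "throw"; args is the (origin, target) pair.
    origin, target = args
    print(f"{event}: {origin!r} -> {target!r}")

previous = greenlet.settrace(tracer)     # returns the old tracefunc (or None)
assert greenlet.gettrace() is tracer

child = greenlet.greenlet(lambda: None)
child.switch()                           # each hop prints a trace line
greenlet.settrace(previous)              # restore whatever was there before
```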
Testing only.\n"); + +static PyObject* +mod_get_total_main_greenlets(PyObject* UNUSED(module)) +{ + return PyLong_FromSize_t(G_TOTAL_MAIN_GREENLETS); +} + + + +PyDoc_STRVAR(mod_get_clocks_used_doing_optional_cleanup_doc, + "get_clocks_used_doing_optional_cleanup() -> Integer\n" + "\n" + "Get the number of clock ticks the program has used doing optional " + "greenlet cleanup.\n" + "Beginning in greenlet 2.0, greenlet tries to find and dispose of greenlets\n" + "that leaked after a thread exited. This requires invoking Python's garbage collector,\n" + "which may have a performance cost proportional to the number of live objects.\n" + "This function returns the amount of processor time\n" + "greenlet has used to do this. In programs that run with very large amounts of live\n" + "objects, this metric can be used to decide whether the cost of doing this cleanup\n" + "is worth the memory leak being corrected. If not, you can disable the cleanup\n" + "using ``enable_optional_cleanup(False)``.\n" + "The units are arbitrary and can only be compared to themselves (similarly to ``time.clock()``);\n" + "for example, to see how it scales with your heap. You can attempt to convert them into seconds\n" + "by dividing by the value of CLOCKS_PER_SEC." + "If cleanup has been disabled, returns None." + "\n" + "This is an implementation specific, provisional API. It may be changed or removed\n" + "in the future.\n" + ".. versionadded:: 2.0" + ); +static PyObject* +mod_get_clocks_used_doing_optional_cleanup(PyObject* UNUSED(module)) +{ + std::clock_t& clocks = ThreadState::clocks_used_doing_gc(); + + if (clocks == std::clock_t(-1)) { + Py_RETURN_NONE; + } + // This might not actually work on some implementations; clock_t + // is an opaque type. + return PyLong_FromSsize_t(clocks); +} + +PyDoc_STRVAR(mod_enable_optional_cleanup_doc, + "mod_enable_optional_cleanup(bool) -> None\n" + "\n" + "Enable or disable optional cleanup operations.\n" + "See ``get_clocks_used_doing_optional_cleanup()`` for details.\n" + ); +static PyObject* +mod_enable_optional_cleanup(PyObject* UNUSED(module), PyObject* flag) +{ + int is_true = PyObject_IsTrue(flag); + if (is_true == -1) { + return nullptr; + } + + std::clock_t& clocks = ThreadState::clocks_used_doing_gc(); + if (is_true) { + // If we already have a value, we don't want to lose it. + if (clocks == std::clock_t(-1)) { + clocks = 0; + } + } + else { + clocks = std::clock_t(-1); + } + Py_RETURN_NONE; +} + + + + +#if !GREENLET_PY313 +PyDoc_STRVAR(mod_get_tstate_trash_delete_nesting_doc, + "get_tstate_trash_delete_nesting() -> Integer\n" + "\n" + "Return the 'trash can' nesting level. 
Testing only.\n"); +static PyObject* +mod_get_tstate_trash_delete_nesting(PyObject* UNUSED(module)) +{ + PyThreadState* tstate = PyThreadState_GET(); + +#if GREENLET_PY312 + return PyLong_FromLong(tstate->trash.delete_nesting); +#else + return PyLong_FromLong(tstate->trash_delete_nesting); +#endif +} +#endif + + + + +static PyMethodDef GreenMethods[] = { + { + .ml_name="getcurrent", + .ml_meth=(PyCFunction)mod_getcurrent, + .ml_flags=METH_NOARGS, + .ml_doc=mod_getcurrent_doc + }, + { + .ml_name="settrace", + .ml_meth=(PyCFunction)mod_settrace, + .ml_flags=METH_VARARGS, + .ml_doc=mod_settrace_doc + }, + { + .ml_name="gettrace", + .ml_meth=(PyCFunction)mod_gettrace, + .ml_flags=METH_NOARGS, + .ml_doc=mod_gettrace_doc + }, + { + .ml_name="set_thread_local", + .ml_meth=(PyCFunction)mod_set_thread_local, + .ml_flags=METH_VARARGS, + .ml_doc=mod_set_thread_local_doc + }, + { + .ml_name="get_pending_cleanup_count", + .ml_meth=(PyCFunction)mod_get_pending_cleanup_count, + .ml_flags=METH_NOARGS, + .ml_doc=mod_get_pending_cleanup_count_doc + }, + { + .ml_name="get_total_main_greenlets", + .ml_meth=(PyCFunction)mod_get_total_main_greenlets, + .ml_flags=METH_NOARGS, + .ml_doc=mod_get_total_main_greenlets_doc + }, + { + .ml_name="get_clocks_used_doing_optional_cleanup", + .ml_meth=(PyCFunction)mod_get_clocks_used_doing_optional_cleanup, + .ml_flags=METH_NOARGS, + .ml_doc=mod_get_clocks_used_doing_optional_cleanup_doc + }, + { + .ml_name="enable_optional_cleanup", + .ml_meth=(PyCFunction)mod_enable_optional_cleanup, + .ml_flags=METH_O, + .ml_doc=mod_enable_optional_cleanup_doc + }, +#if !GREENLET_PY313 + { + .ml_name="get_tstate_trash_delete_nesting", + .ml_meth=(PyCFunction)mod_get_tstate_trash_delete_nesting, + .ml_flags=METH_NOARGS, + .ml_doc=mod_get_tstate_trash_delete_nesting_doc + }, +#endif + {.ml_name=NULL, .ml_meth=NULL} /* Sentinel */ +}; + +static const char* const copy_on_greentype[] = { + "getcurrent", + "error", + "GreenletExit", + "settrace", + "gettrace", + NULL +}; + +static struct PyModuleDef greenlet_module_def = { + .m_base=PyModuleDef_HEAD_INIT, + .m_name="greenlet._greenlet", + .m_doc=NULL, + .m_size=-1, + .m_methods=GreenMethods, +}; + + +#endif + +#ifdef __clang__ +# pragma clang diagnostic pop +#elif defined(__GNUC__) +# pragma GCC diagnostic pop +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TBrokenGreenlet.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TBrokenGreenlet.cpp new file mode 100644 index 000000000..7e9ab5be9 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TBrokenGreenlet.cpp @@ -0,0 +1,45 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ +/** + * Implementation of greenlet::UserGreenlet. 
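The method table above is compiled into the greenlet._greenlet extension module, so the tuning and testing hooks it lists are callable directly on that module. A minimal sketch using the optional-cleanup knobs whose docstrings appear above:

```python
from greenlet import _greenlet

# None means optional cleanup is disabled; otherwise an opaque clock count.
print("cleanup cost so far:", _greenlet.get_clocks_used_doing_optional_cleanup())

_greenlet.enable_optional_cleanup(False)   # trade leak cleanup for speed
assert _greenlet.get_clocks_used_doing_optional_cleanup() is None

_greenlet.enable_optional_cleanup(True)    # re-enable; the counter restarts at 0
```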
+ * + * Format with: + * clang-format -i --style=file src/greenlet/greenlet.c + * + * + * Fix missing braces with: + * clang-tidy src/greenlet/greenlet.c -fix -checks="readability-braces-around-statements" +*/ + +#include "TGreenlet.hpp" + +namespace greenlet { + +void* BrokenGreenlet::operator new(size_t UNUSED(count)) +{ + return allocator.allocate(1); +} + + +void BrokenGreenlet::operator delete(void* ptr) +{ + return allocator.deallocate(static_cast<BrokenGreenlet*>(ptr), + 1); +} + +greenlet::PythonAllocator<greenlet::BrokenGreenlet> greenlet::BrokenGreenlet::allocator; + +bool +BrokenGreenlet::force_slp_switch_error() const noexcept +{ + return this->_force_slp_switch_error; +} + +UserGreenlet::switchstack_result_t BrokenGreenlet::g_switchstack(void) +{ + if (this->_force_switch_error) { + return switchstack_result_t(-1); + } + return UserGreenlet::g_switchstack(); +} + +}; //namespace greenlet diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TExceptionState.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TExceptionState.cpp new file mode 100644 index 000000000..08a94ae83 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TExceptionState.cpp @@ -0,0 +1,62 @@ +#ifndef GREENLET_EXCEPTION_STATE_CPP +#define GREENLET_EXCEPTION_STATE_CPP + +#include <Python.h> +#include "TGreenlet.hpp" + +namespace greenlet { + + +ExceptionState::ExceptionState() +{ + this->clear(); +} + +void ExceptionState::operator<<(const PyThreadState *const tstate) noexcept +{ + this->exc_info = tstate->exc_info; + this->exc_state = tstate->exc_state; +} + +void ExceptionState::operator>>(PyThreadState *const tstate) noexcept +{ + tstate->exc_state = this->exc_state; + tstate->exc_info = + this->exc_info ? this->exc_info : &tstate->exc_state; + this->clear(); +} + +void ExceptionState::clear() noexcept +{ + this->exc_info = nullptr; + this->exc_state.exc_value = nullptr; +#if !GREENLET_PY311 + this->exc_state.exc_type = nullptr; + this->exc_state.exc_traceback = nullptr; +#endif + this->exc_state.previous_item = nullptr; +} + +int ExceptionState::tp_traverse(visitproc visit, void* arg) noexcept +{ + Py_VISIT(this->exc_state.exc_value); +#if !GREENLET_PY311 + Py_VISIT(this->exc_state.exc_type); + Py_VISIT(this->exc_state.exc_traceback); +#endif + return 0; +} + +void ExceptionState::tp_clear() noexcept +{ + Py_CLEAR(this->exc_state.exc_value); +#if !GREENLET_PY311 + Py_CLEAR(this->exc_state.exc_type); + Py_CLEAR(this->exc_state.exc_traceback); +#endif +} + + +}; // namespace greenlet + +#endif // GREENLET_EXCEPTION_STATE_CPP diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TGreenlet.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TGreenlet.cpp new file mode 100644 index 000000000..4698a1784 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TGreenlet.cpp @@ -0,0 +1,718 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ +/** + * Implementation of greenlet::Greenlet.
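TExceptionState above is what gives each greenlet its own sys.exc_info(): a greenlet's in-flight exception is saved when it switches out and is invisible to other greenlets. A small Python demonstration of that isolation:

```python
import sys
from greenlet import greenlet

def child_run():
    # The ValueError being handled in main was switched out along with
    # main's exception state, so this greenlet starts with a clean slate.
    assert sys.exc_info() == (None, None, None)

try:
    raise ValueError("handled in main")
except ValueError:
    greenlet(child_run).switch()             # child sees no active exception
    assert sys.exc_info()[0] is ValueError   # restored when we switch back
```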
+ * + * Format with: + * clang-format -i --style=file src/greenlet/greenlet.c + * + * + * Fix missing braces with: + * clang-tidy src/greenlet/greenlet.c -fix -checks="readability-braces-around-statements" +*/ +#ifndef TGREENLET_CPP +#define TGREENLET_CPP +#include "greenlet_internal.hpp" +#include "TGreenlet.hpp" + + +#include "TGreenletGlobals.cpp" +#include "TThreadStateDestroy.cpp" + +namespace greenlet { + +Greenlet::Greenlet(PyGreenlet* p) + : Greenlet(p, StackState()) +{ +} + +Greenlet::Greenlet(PyGreenlet* p, const StackState& initial_stack) + : _self(p), stack_state(initial_stack) +{ + assert(p->pimpl == nullptr); + p->pimpl = this; +} + +Greenlet::~Greenlet() +{ + // XXX: Can't do this. tp_clear is a virtual function, and by the + // time we're here, we've sliced off our child classes. + //this->tp_clear(); + this->_self->pimpl = nullptr; +} + +bool +Greenlet::force_slp_switch_error() const noexcept +{ + return false; +} + +void +Greenlet::release_args() +{ + this->switch_args.CLEAR(); +} + +/** + * CAUTION: This will allocate memory and may trigger garbage + * collection and arbitrary Python code. + */ +OwnedObject +Greenlet::throw_GreenletExit_during_dealloc(const ThreadState& UNUSED(current_thread_state)) +{ + // If we're killed because we lost all references in the + // middle of a switch, that's ok. Don't reset the args/kwargs, + // we still want to pass them to the parent. + PyErr_SetString(mod_globs->PyExc_GreenletExit, + "Killing the greenlet because all references have vanished."); + // To get here it had to have run before + return this->g_switch(); +} + +inline void +Greenlet::slp_restore_state() noexcept +{ +#ifdef SLP_BEFORE_RESTORE_STATE + SLP_BEFORE_RESTORE_STATE(); +#endif + this->stack_state.copy_heap_to_stack( + this->thread_state()->borrow_current()->stack_state); +} + + +inline int +Greenlet::slp_save_state(char *const stackref) noexcept +{ + // XXX: This used to happen in the middle, before saving, but + // after finding the next owner. Does that matter? This is + // only defined for Sparc/GCC where it flushes register + // windows to the stack (I think) +#ifdef SLP_BEFORE_SAVE_STATE + SLP_BEFORE_SAVE_STATE(); +#endif + return this->stack_state.copy_stack_to_heap(stackref, + this->thread_state()->borrow_current()->stack_state); +} + +/** + * CAUTION: This will allocate memory and may trigger garbage + * collection and arbitrary Python code. + */ +OwnedObject +Greenlet::on_switchstack_or_initialstub_failure( + Greenlet* target, + const Greenlet::switchstack_result_t& err, + const bool target_was_me, + const bool was_initial_stub) +{ + // If we get here, either g_initialstub() + // failed, or g_switchstack() failed. Either one of those + // cases SHOULD leave us in the original greenlet with a valid stack. + if (!PyErr_Occurred()) { + PyErr_SetString( + PyExc_SystemError, + was_initial_stub + ? "Failed to switch stacks into a greenlet for the first time." + : "Failed to switch stacks into a running greenlet."); + } + this->release_args(); + + if (target && !target_was_me) { + target->murder_in_place(); + } + + assert(!err.the_new_current_greenlet); + assert(!err.origin_greenlet); + return OwnedObject(); + +} + +OwnedGreenlet +Greenlet::g_switchstack_success() noexcept +{ + PyThreadState* tstate = PyThreadState_GET(); + // restore the saved state + this->python_state >> tstate; + this->exception_state >> tstate; + + // The thread state hasn't been changed yet. 
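throw_GreenletExit_during_dealloc above is the path taken when the last reference to a suspended greenlet disappears: the greenlet is resumed one final time with GreenletExit raised in it. From Python, that looks roughly like this sketch:

```python
from greenlet import greenlet, getcurrent, GreenletExit

def run():
    try:
        getcurrent().parent.switch()   # suspend indefinitely
    except GreenletExit:
        print("killed: all references vanished")
        # Falling off the end here counts as a normal exit.

g = greenlet(run)
g.switch()   # start g; it parks in parent.switch()
del g        # last reference gone -> GreenletExit is raised inside run()
```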
+ ThreadState* thread_state = this->thread_state(); + OwnedGreenlet result(thread_state->get_current()); + thread_state->set_current(this->self()); + //assert(thread_state->borrow_current().borrow() == this->_self); + return result; +} + +Greenlet::switchstack_result_t +Greenlet::g_switchstack(void) +{ + // if any of these assertions fail, it's likely because we + // switched away and tried to switch back to us. Early stages of + // switching are not reentrant because we re-use ``this->args()``. + // Switching away would happen if we trigger a garbage collection + // (by just using some Python APIs that happen to allocate Python + // objects) and some garbage had weakref callbacks or __del__ that + // switches (people don't write code like that by hand, but with + // gevent it's possible without realizing it) + assert(this->args() || PyErr_Occurred()); + { /* save state */ + if (this->thread_state()->is_current(this->self())) { + // Hmm, nothing to do. + // TODO: Does this bypass trace events that are + // important? + return switchstack_result_t(0, + this, this->thread_state()->borrow_current()); + } + BorrowedGreenlet current = this->thread_state()->borrow_current(); + PyThreadState* tstate = PyThreadState_GET(); + + current->python_state << tstate; + current->exception_state << tstate; + this->python_state.will_switch_from(tstate); + switching_thread_state = this; + current->expose_frames(); + } + assert(this->args() || PyErr_Occurred()); + // If this is the first switch into a greenlet, this will + // return twice, once with 1 in the new greenlet, once with 0 + // in the origin. + int err; + if (this->force_slp_switch_error()) { + err = -1; + } + else { + err = slp_switch(); + } + + if (err < 0) { /* error */ + // Tested by + // test_greenlet.TestBrokenGreenlets.test_failed_to_slp_switch_into_running + // + // It's not clear if it's worth trying to clean up and + // continue here. Failing to switch stacks is a big deal which + // may not be recoverable (who knows what state the stack is in). + // Also, we've stolen references in preparation for calling + // ``g_switchstack_success()`` and we don't have a clean + // mechanism for backing that all out. + Py_FatalError("greenlet: Failed low-level slp_switch(). The stack is probably corrupt."); + } + + // No stack-based variables are valid anymore. + + // But the global is volatile so we can reload it without the + // compiler caching it from earlier. + Greenlet* greenlet_that_switched_in = switching_thread_state; // aka this + switching_thread_state = nullptr; + // except that no stack variables are valid, we would: + // assert(this == greenlet_that_switched_in); + + // switchstack success is where we restore the exception state, + // etc. It returns the origin greenlet because its convenient. + + OwnedGreenlet origin = greenlet_that_switched_in->g_switchstack_success(); + assert(greenlet_that_switched_in->args() || PyErr_Occurred()); + return switchstack_result_t(err, greenlet_that_switched_in, origin); +} + + +inline void +Greenlet::check_switch_allowed() const +{ + // TODO: Make this take a parameter of the current greenlet, + // or current main greenlet, to make the check for + // cross-thread switching cheaper. Surely somewhere up the + // call stack we've already accessed the thread local variable. + + // We expect to always have a main greenlet now; accessing the thread state + // created it. 
However, if we get here and cleanup has already + // begun because we're a greenlet that was running in a + // (now dead) thread, these invariants will not hold true. In + // fact, accessing `this->thread_state` may not even be possible. + + // If the thread this greenlet was running in is dead, + // we'll still have a reference to a main greenlet, but the + // thread state pointer we have is bogus. + // TODO: Give the objects an API to determine if they belong + // to a dead thread. + + const BorrowedMainGreenlet main_greenlet = this->find_main_greenlet_in_lineage(); + + if (!main_greenlet) { + throw PyErrOccurred(mod_globs->PyExc_GreenletError, + "cannot switch to a garbage collected greenlet"); + } + + if (!main_greenlet->thread_state()) { + throw PyErrOccurred(mod_globs->PyExc_GreenletError, + "cannot switch to a different thread (which happens to have exited)"); + } + + // The main greenlet we found was from the .parent lineage. + // That may or may not have any relationship to the main + // greenlet of the running thread. We can't actually access + // our this->thread_state members to try to check that, + // because it could be in the process of getting destroyed, + // but setting the main_greenlet->thread_state member to NULL + // may not be visible yet. So we need to check against the + // current thread state (once the cheaper checks are out of + // the way) + const BorrowedMainGreenlet current_main_greenlet = GET_THREAD_STATE().state().borrow_main_greenlet(); + if ( + // lineage main greenlet is not this thread's greenlet + current_main_greenlet != main_greenlet + || ( + // atteched to some thread + this->main_greenlet() + // XXX: Same condition as above. Was this supposed to be + // this->main_greenlet()? + && current_main_greenlet != main_greenlet) + // switching into a known dead thread (XXX: which, if we get here, + // is bad, because we just accessed the thread state, which is + // gone!) + || (!current_main_greenlet->thread_state())) { + // CAUTION: This may trigger memory allocations, gc, and + // arbitrary Python code. + throw PyErrOccurred( + mod_globs->PyExc_GreenletError, + "Cannot switch to a different thread\n\tCurrent: %R\n\tExpected: %R", + current_main_greenlet, main_greenlet); + } +} + +const OwnedObject +Greenlet::context() const +{ + using greenlet::PythonStateContext; + OwnedObject result; + + if (this->is_currently_running_in_some_thread()) { + /* Currently running greenlet: context is stored in the thread state, + not the greenlet object. */ + if (GET_THREAD_STATE().state().is_current(this->self())) { + result = PythonStateContext::context(PyThreadState_GET()); + } + else { + throw ValueError( + "cannot get context of a " + "greenlet that is running in a different thread"); + } + } + else { + /* Greenlet is not running: just return context. */ + result = this->python_state.context(); + } + if (!result) { + result = OwnedObject::None(); + } + return result; +} + + +void +Greenlet::context(BorrowedObject given) +{ + using greenlet::PythonStateContext; + if (!given) { + throw AttributeError("can't delete context attribute"); + } + if (given.is_None()) { + /* "Empty context" is stored as NULL, not None. 
*/ + given = nullptr; + } + + //checks type, incrs refcnt + greenlet::refs::OwnedContext context(given); + PyThreadState* tstate = PyThreadState_GET(); + + if (this->is_currently_running_in_some_thread()) { + if (!GET_THREAD_STATE().state().is_current(this->self())) { + throw ValueError("cannot set context of a greenlet" + " that is running in a different thread"); + } + + /* Currently running greenlet: context is stored in the thread state, + not the greenlet object. */ + OwnedObject octx = OwnedObject::consuming(PythonStateContext::context(tstate)); + PythonStateContext::context(tstate, context.relinquish_ownership()); + } + else { + /* Greenlet is not running: just set context. Note that the + greenlet may be dead.*/ + this->python_state.context() = context; + } +} + +/** + * CAUTION: May invoke arbitrary Python code. + * + * Figure out what the result of ``greenlet.switch(arg, kwargs)`` + * should be and transfers ownership of it to the left-hand-side. + * + * If switch() was just passed an arg tuple, then we'll just return that. + * If only keyword arguments were passed, then we'll pass the keyword + * argument dict. Otherwise, we'll create a tuple of (args, kwargs) and + * return both. + * + * CAUTION: This may allocate a new tuple object, which may + * cause the Python garbage collector to run, which in turn may + * run arbitrary Python code that switches. + */ +OwnedObject& operator<<=(OwnedObject& lhs, greenlet::SwitchingArgs& rhs) noexcept +{ + // Because this may invoke arbitrary Python code, which could + // result in switching back to us, we need to get the + // arguments locally on the stack. + assert(rhs); + OwnedObject args = rhs.args(); + OwnedObject kwargs = rhs.kwargs(); + rhs.CLEAR(); + // We shouldn't be called twice for the same switch. + assert(args || kwargs); + assert(!rhs); + + if (!kwargs) { + lhs = args; + } + else if (!PyDict_Size(kwargs.borrow())) { + lhs = args; + } + else if (!PySequence_Length(args.borrow())) { + lhs = kwargs; + } + else { + // PyTuple_Pack allocates memory, may GC, may run arbitrary + // Python code. + lhs = OwnedObject::consuming(PyTuple_Pack(2, args.borrow(), kwargs.borrow())); + } + return lhs; +} + +static OwnedObject +g_handle_exit(const OwnedObject& greenlet_result) +{ + if (!greenlet_result && mod_globs->PyExc_GreenletExit.PyExceptionMatches()) { + /* catch and ignore GreenletExit */ + PyErrFetchParam val; + PyErr_Fetch(PyErrFetchParam(), val, PyErrFetchParam()); + if (!val) { + return OwnedObject::None(); + } + return OwnedObject(val); + } + + if (greenlet_result) { + // package the result into a 1-tuple + // PyTuple_Pack increments the reference of its arguments, + // so we always need to decref the greenlet result; + // the owner will do that. + return OwnedObject::consuming(PyTuple_Pack(1, greenlet_result.borrow())); + } + + return OwnedObject(); +} + + + +/** + * May run arbitrary Python code. + */ +OwnedObject +Greenlet::g_switch_finish(const switchstack_result_t& err) +{ + assert(err.the_new_current_greenlet == this); + + ThreadState& state = *this->thread_state(); + // Because calling the trace function could do arbitrary things, + // including switching away from this greenlet and then maybe + // switching back, we need to capture the arguments now so that + // they don't change. 
+ OwnedObject result; + if (this->args()) { + result <<= this->args(); + } + else { + assert(PyErr_Occurred()); + } + assert(!this->args()); + try { + // Our only caller handles the bad error case + assert(err.status >= 0); + assert(state.borrow_current() == this->self()); + if (OwnedObject tracefunc = state.get_tracefunc()) { + assert(result || PyErr_Occurred()); + g_calltrace(tracefunc, + result ? mod_globs->event_switch : mod_globs->event_throw, + err.origin_greenlet, + this->self()); + } + // The above could have invoked arbitrary Python code, but + // it couldn't switch back to this object and *also* + // throw an exception, so the args won't have changed. + + if (PyErr_Occurred()) { + // We get here if we fell of the end of the run() function + // raising an exception. The switch itself was + // successful, but the function raised. + // valgrind reports that memory allocated here can still + // be reached after a test run. + throw PyErrOccurred::from_current(); + } + return result; + } + catch (const PyErrOccurred&) { + /* Turn switch errors into switch throws */ + /* Turn trace errors into switch throws */ + this->release_args(); + throw; + } +} + +void +Greenlet::g_calltrace(const OwnedObject& tracefunc, + const greenlet::refs::ImmortalEventName& event, + const BorrowedGreenlet& origin, + const BorrowedGreenlet& target) +{ + PyErrPieces saved_exc; + try { + TracingGuard tracing_guard; + // TODO: We have saved the active exception (if any) that's + // about to be raised. In the 'throw' case, we could provide + // the exception to the tracefunction, which seems very helpful. + tracing_guard.CallTraceFunction(tracefunc, event, origin, target); + } + catch (const PyErrOccurred&) { + // In case of exceptions trace function is removed, + // and any existing exception is replaced with the tracing + // exception. + GET_THREAD_STATE().state().set_tracefunc(Py_None); + throw; + } + + saved_exc.PyErrRestore(); + assert( + (event == mod_globs->event_throw && PyErr_Occurred()) + || (event == mod_globs->event_switch && !PyErr_Occurred()) + ); +} + +void +Greenlet::murder_in_place() +{ + if (this->active()) { + assert(!this->is_currently_running_in_some_thread()); + this->deactivate_and_free(); + } +} + +inline void +Greenlet::deactivate_and_free() +{ + if (!this->active()) { + return; + } + // Throw away any saved stack. + this->stack_state = StackState(); + assert(!this->stack_state.active()); + // Throw away any Python references. + // We're holding a borrowed reference to the last + // frame we executed. Since we borrowed it, the + // normal traversal, clear, and dealloc functions + // ignore it, meaning it leaks. (The thread state + // object can't find it to clear it when that's + // deallocated either, because by definition if we + // got an object on this list, it wasn't + // running and the thread state doesn't have + // this frame.) + // So here, we *do* clear it. + this->python_state.tp_clear(true); +} + +bool +Greenlet::belongs_to_thread(const ThreadState* thread_state) const +{ + if (!this->thread_state() // not running anywhere, or thread + // exited + || !thread_state) { // same, or there is no thread state. + return false; + } + return true; +} + + +void +Greenlet::deallocing_greenlet_in_thread(const ThreadState* current_thread_state) +{ + /* Cannot raise an exception to kill the greenlet if + it is not running in the same thread! 
*/ + if (this->belongs_to_thread(current_thread_state)) { + assert(current_thread_state); + // To get here it had to have run before + /* Send the greenlet a GreenletExit exception. */ + + // We don't care about the return value, only whether an + // exception happened. + this->throw_GreenletExit_during_dealloc(*current_thread_state); + return; + } + + // Not the same thread! Temporarily save the greenlet + // into its thread's deleteme list, *if* it exists. + // If that thread has already exited, and processed its pending + // cleanup, we'll never be able to clean everything up: we won't + // be able to raise an exception. + // That's mostly OK! Since we can't add it to a list, our refcount + // won't increase, and we'll go ahead with the DECREFs later. + ThreadState *const thread_state = this->thread_state(); + if (thread_state) { + thread_state->delete_when_thread_running(this->self()); + } + else { + // The thread is dead, we can't raise an exception. + // We need to make it look non-active, though, so that dealloc + // finishes killing it. + this->deactivate_and_free(); + } + return; +} + + +int +Greenlet::tp_traverse(visitproc visit, void* arg) +{ + + int result; + if ((result = this->exception_state.tp_traverse(visit, arg)) != 0) { + return result; + } + //XXX: This is ugly. But so is handling everything having to do + //with the top frame. + bool visit_top_frame = this->was_running_in_dead_thread(); + // When true, the thread is dead. Our implicit weak reference to the + // frame is now all that's left; we consider ourselves to + // strongly own it now. + if ((result = this->python_state.tp_traverse(visit, arg, visit_top_frame)) != 0) { + return result; + } + return 0; +} + +int +Greenlet::tp_clear() +{ + bool own_top_frame = this->was_running_in_dead_thread(); + this->exception_state.tp_clear(); + this->python_state.tp_clear(own_top_frame); + return 0; +} + +bool Greenlet::is_currently_running_in_some_thread() const +{ + return this->stack_state.active() && !this->python_state.top_frame(); +} + +#if GREENLET_PY312 +void GREENLET_NOINLINE(Greenlet::expose_frames)() +{ + if (!this->python_state.top_frame()) { + return; + } + + _PyInterpreterFrame* last_complete_iframe = nullptr; + _PyInterpreterFrame* iframe = this->python_state.top_frame()->f_frame; + while (iframe) { + // We must make a copy before looking at the iframe contents, + // since iframe might point to a portion of the greenlet's C stack + // that was spilled when switching greenlets. + _PyInterpreterFrame iframe_copy; + this->stack_state.copy_from_stack(&iframe_copy, iframe, sizeof(*iframe)); + if (!_PyFrame_IsIncomplete(&iframe_copy)) { + // If the iframe were OWNED_BY_CSTACK then it would always be + // incomplete. Since it's not incomplete, it's not on the C stack + // and we can access it through the original `iframe` pointer + // directly. This is important since GetFrameObject might + // lazily _create_ the frame object and we don't want the + // interpreter to lose track of it. + assert(iframe_copy.owner != FRAME_OWNED_BY_CSTACK); + + // We really want to just write: + // PyFrameObject* frame = _PyFrame_GetFrameObject(iframe); + // but _PyFrame_GetFrameObject calls _PyFrame_MakeAndSetFrameObject + // which is not a visible symbol in libpython. 
The easiest + // way to get a public function to call it is using + // PyFrame_GetBack, which is defined as follows: + // assert(frame != NULL); + // assert(!_PyFrame_IsIncomplete(frame->f_frame)); + // PyFrameObject *back = frame->f_back; + // if (back == NULL) { + // _PyInterpreterFrame *prev = frame->f_frame->previous; + // prev = _PyFrame_GetFirstComplete(prev); + // if (prev) { + // back = _PyFrame_GetFrameObject(prev); + // } + // } + // return (PyFrameObject*)Py_XNewRef(back); + if (!iframe->frame_obj) { + PyFrameObject dummy_frame; + _PyInterpreterFrame dummy_iframe; + dummy_frame.f_back = nullptr; + dummy_frame.f_frame = &dummy_iframe; + // force the iframe to be considered complete without + // needing to check its code object: + dummy_iframe.owner = FRAME_OWNED_BY_GENERATOR; + dummy_iframe.previous = iframe; + assert(!_PyFrame_IsIncomplete(&dummy_iframe)); + // Drop the returned reference immediately; the iframe + // continues to hold a strong reference + Py_XDECREF(PyFrame_GetBack(&dummy_frame)); + assert(iframe->frame_obj); + } + + // This is a complete frame, so make the last one of those we saw + // point at it, bypassing any incomplete frames (which may have + // been on the C stack) in between the two. We're overwriting + // last_complete_iframe->previous and need that to be reversible, + // so we store the original previous ptr in the frame object + // (which we must have created on a previous iteration through + // this loop). The frame object has a bunch of storage that is + // only used when its iframe is OWNED_BY_FRAME_OBJECT, which only + // occurs when the frame object outlives the frame's execution, + // which can't have happened yet because the frame is currently + // executing as far as the interpreter is concerned. So, we can + // reuse it for our own purposes. + assert(iframe->owner == FRAME_OWNED_BY_THREAD + || iframe->owner == FRAME_OWNED_BY_GENERATOR); + if (last_complete_iframe) { + assert(last_complete_iframe->frame_obj); + memcpy(&last_complete_iframe->frame_obj->_f_frame_data[0], + &last_complete_iframe->previous, sizeof(void *)); + last_complete_iframe->previous = iframe; + } + last_complete_iframe = iframe; + } + // Frames that are OWNED_BY_FRAME_OBJECT are linked via the + // frame's f_back while all others are linked via the iframe's + // previous ptr. Since all the frames we traverse are running + // as far as the interpreter is concerned, we don't have to + // worry about the OWNED_BY_FRAME_OBJECT case. + iframe = iframe_copy.previous; + } + + // Give the outermost complete iframe a null previous pointer to + // account for any potential incomplete/C-stack iframes between it + // and the actual top-of-stack + if (last_complete_iframe) { + assert(last_complete_iframe->frame_obj); + memcpy(&last_complete_iframe->frame_obj->_f_frame_data[0], + &last_complete_iframe->previous, sizeof(void *)); + last_complete_iframe->previous = nullptr; + } +} +#else +void Greenlet::expose_frames() +{ + +} +#endif + +}; // namespace greenlet +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TGreenlet.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TGreenlet.hpp new file mode 100644 index 000000000..aa6d6d5ad --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TGreenlet.hpp @@ -0,0 +1,824 @@ +#ifndef GREENLET_GREENLET_HPP +#define GREENLET_GREENLET_HPP +/* + * Declarations of the core data structures. 
+*/
+
+#define PY_SSIZE_T_CLEAN
+#include <Python.h>
+
+#include "greenlet_compiler_compat.hpp"
+#include "greenlet_refs.hpp"
+#include "greenlet_cpython_compat.hpp"
+#include "greenlet_allocator.hpp"
+
+using greenlet::refs::OwnedObject;
+using greenlet::refs::OwnedGreenlet;
+using greenlet::refs::OwnedMainGreenlet;
+using greenlet::refs::BorrowedGreenlet;
+
+#if PY_VERSION_HEX < 0x30B00A6
+# define _PyCFrame CFrame
+# define _PyInterpreterFrame _interpreter_frame
+#endif
+
+#if GREENLET_PY312
+# define Py_BUILD_CORE
+# include "internal/pycore_frame.h"
+#endif
+
+#if GREENLET_PY314
+# include "internal/pycore_interpframe_structs.h"
+#if defined(_MSC_VER) || defined(__MINGW64__)
+# include "greenlet_msvc_compat.hpp"
+#else
+# include "internal/pycore_interpframe.h"
+#endif
+#endif
+
+// XXX: TODO: Work to remove all virtual functions
+// for speed of calling and size of objects (no vtable).
+// One pattern is the Curiously Recurring Template
+namespace greenlet
+{
+    class ExceptionState
+    {
+    private:
+        G_NO_COPIES_OF_CLS(ExceptionState);
+
+        // Even though these are borrowed objects, we actually own
+        // them, when they're not null.
+        // XXX: Express that in the API.
+    private:
+        _PyErr_StackItem* exc_info;
+        _PyErr_StackItem exc_state;
+    public:
+        ExceptionState();
+        void operator<<(const PyThreadState *const tstate) noexcept;
+        void operator>>(PyThreadState* tstate) noexcept;
+        void clear() noexcept;
+
+        int tp_traverse(visitproc visit, void* arg) noexcept;
+        void tp_clear() noexcept;
+    };
+
+    template<typename T>
+    void operator<<(const PyThreadState *const tstate, T& exc);
+
+    class PythonStateContext
+    {
+    protected:
+        greenlet::refs::OwnedContext _context;
+    public:
+        inline const greenlet::refs::OwnedContext& context() const
+        {
+            return this->_context;
+        }
+        inline greenlet::refs::OwnedContext& context()
+        {
+            return this->_context;
+        }
+
+        inline void tp_clear()
+        {
+            this->_context.CLEAR();
+        }
+
+        template<typename T>
+        inline static PyObject* context(T* tstate)
+        {
+            return tstate->context;
+        }
+
+        template<typename T>
+        inline static void context(T* tstate, PyObject* new_context)
+        {
+            tstate->context = new_context;
+            tstate->context_ver++;
+        }
+    };
+    class SwitchingArgs;
+    class PythonState : public PythonStateContext
+    {
+    public:
+        typedef greenlet::refs::OwnedReference<PyFrameObject> OwnedFrame;
+    private:
+        G_NO_COPIES_OF_CLS(PythonState);
+        // We own this if we're suspended (although currently we don't
+        // tp_traverse into it; that's a TODO). If we're running, it's
+        // empty. If we get deallocated and *still* have a frame, it
+        // won't be reachable from the place that normally decref's
+        // it, so we need to do it (hence owning it).
+        OwnedFrame _top_frame;
+#if GREENLET_USE_CFRAME
+        _PyCFrame* cframe;
+        int use_tracing;
+#endif
+#if GREENLET_PY314
+        int py_recursion_depth;
+#elif GREENLET_PY312
+        int py_recursion_depth;
+        int c_recursion_depth;
+#else
+        int recursion_depth;
+#endif
+#if GREENLET_PY313
+        PyObject *delete_later;
+#else
+        int trash_delete_nesting;
+#endif
+#if GREENLET_PY311
+        _PyInterpreterFrame* current_frame;
+        _PyStackChunk* datastack_chunk;
+        PyObject** datastack_top;
+        PyObject** datastack_limit;
+#endif
+        // The PyInterpreterFrame list on 3.12+ contains some entries that are
+        // on the C stack, which can't be directly accessed while a greenlet is
+        // suspended.
In order to keep greenlet gr_frame introspection working, + // we adjust stack switching to rewrite the interpreter frame list + // to skip these C-stack frames; we call this "exposing" the greenlet's + // frames because it makes them valid to work with in Python. Then when + // the greenlet is resumed we need to remember to reverse the operation + // we did. The C-stack frames are "entry frames" which are a low-level + // interpreter detail; they're not needed for introspection, but do + // need to be present for the eval loop to work. + void unexpose_frames(); + + public: + + PythonState(); + // You can use this for testing whether we have a frame + // or not. It returns const so they can't modify it. + const OwnedFrame& top_frame() const noexcept; + + inline void operator<<(const PyThreadState *const tstate) noexcept; + inline void operator>>(PyThreadState* tstate) noexcept; + void clear() noexcept; + + int tp_traverse(visitproc visit, void* arg, bool visit_top_frame) noexcept; + void tp_clear(bool own_top_frame) noexcept; + void set_initial_state(const PyThreadState* const tstate) noexcept; +#if GREENLET_USE_CFRAME + void set_new_cframe(_PyCFrame& frame) noexcept; +#endif + + void may_switch_away() noexcept; + inline void will_switch_from(PyThreadState *const origin_tstate) noexcept; + void did_finish(PyThreadState* tstate) noexcept; + }; + + class StackState + { + // By having only plain C (POD) members, no virtual functions + // or bases, we get a trivial assignment operator generated + // for us. However, that's not safe since we do manage memory. + // So we declare an assignment operator that only works if we + // don't have any memory allocated. (We don't use + // std::shared_ptr for reference counting just to keep this + // object small) + private: + char* _stack_start; + char* stack_stop; + char* stack_copy; + intptr_t _stack_saved; + StackState* stack_prev; + inline int copy_stack_to_heap_up_to(const char* const stop) noexcept; + inline void free_stack_copy() noexcept; + + public: + /** + * Creates a started, but inactive, state, using *current* + * as the previous. + */ + StackState(void* mark, StackState& current); + /** + * Creates an inactive, unstarted, state. + */ + StackState(); + ~StackState(); + StackState(const StackState& other); + StackState& operator=(const StackState& other); + inline void copy_heap_to_stack(const StackState& current) noexcept; + inline int copy_stack_to_heap(char* const stackref, const StackState& current) noexcept; + inline bool started() const noexcept; + inline bool main() const noexcept; + inline bool active() const noexcept; + inline void set_active() noexcept; + inline void set_inactive() noexcept; + inline intptr_t stack_saved() const noexcept; + inline char* stack_start() const noexcept; + static inline StackState make_main() noexcept; +#ifdef GREENLET_USE_STDIO + friend std::ostream& operator<<(std::ostream& os, const StackState& s); +#endif + + // Fill in [dest, dest + n) with the values that would be at + // [src, src + n) while this greenlet is running. This is like memcpy + // except that if the greenlet is suspended it accounts for the portion + // of the greenlet's stack that was spilled to the heap. `src` may + // be on this greenlet's stack, or on the heap, but not on a different + // greenlet's stack. 
+ void copy_from_stack(void* dest, const void* src, size_t n) const; + }; +#ifdef GREENLET_USE_STDIO + std::ostream& operator<<(std::ostream& os, const StackState& s); +#endif + + class SwitchingArgs + { + private: + G_NO_ASSIGNMENT_OF_CLS(SwitchingArgs); + // If args and kwargs are both false (NULL), this is a *throw*, not a + // switch. PyErr_... must have been called already. + OwnedObject _args; + OwnedObject _kwargs; + public: + + SwitchingArgs() + {} + + SwitchingArgs(const OwnedObject& args, const OwnedObject& kwargs) + : _args(args), + _kwargs(kwargs) + {} + + SwitchingArgs(const SwitchingArgs& other) + : _args(other._args), + _kwargs(other._kwargs) + {} + + const OwnedObject& args() + { + return this->_args; + } + + const OwnedObject& kwargs() + { + return this->_kwargs; + } + + /** + * Moves ownership from the argument to this object. + */ + SwitchingArgs& operator<<=(SwitchingArgs& other) + { + if (this != &other) { + this->_args = other._args; + this->_kwargs = other._kwargs; + other.CLEAR(); + } + return *this; + } + + /** + * Acquires ownership of the argument (consumes the reference). + */ + SwitchingArgs& operator<<=(PyObject* args) + { + this->_args = OwnedObject::consuming(args); + this->_kwargs.CLEAR(); + return *this; + } + + /** + * Acquires ownership of the argument. + * + * Sets the args to be the given value; clears the kwargs. + */ + SwitchingArgs& operator<<=(OwnedObject& args) + { + assert(&args != &this->_args); + this->_args = args; + this->_kwargs.CLEAR(); + args.CLEAR(); + + return *this; + } + + explicit operator bool() const noexcept + { + return this->_args || this->_kwargs; + } + + inline void CLEAR() + { + this->_args.CLEAR(); + this->_kwargs.CLEAR(); + } + + const std::string as_str() const noexcept + { + return PyUnicode_AsUTF8( + OwnedObject::consuming( + PyUnicode_FromFormat( + "SwitchingArgs(args=%R, kwargs=%R)", + this->_args.borrow(), + this->_kwargs.borrow() + ) + ).borrow() + ); + } + }; + + class ThreadState; + + class UserGreenlet; + class MainGreenlet; + + class Greenlet + { + private: + G_NO_COPIES_OF_CLS(Greenlet); + PyGreenlet* const _self; + private: + // XXX: Work to remove these. + friend class ThreadState; + friend class UserGreenlet; + friend class MainGreenlet; + protected: + ExceptionState exception_state; + SwitchingArgs switch_args; + StackState stack_state; + PythonState python_state; + Greenlet(PyGreenlet* p, const StackState& initial_state); + public: + // This constructor takes ownership of the PyGreenlet, by + // setting ``p->pimpl = this;``. + Greenlet(PyGreenlet* p); + virtual ~Greenlet(); + + const OwnedObject context() const; + + // You MUST call this _very_ early in the switching process to + // prepare anything that may need prepared. This might perform + // garbage collections or otherwise run arbitrary Python code. + // + // One specific use of it is for Python 3.11+, preventing + // running arbitrary code at unsafe times. See + // PythonState::may_switch_away(). + inline void may_switch_away() + { + this->python_state.may_switch_away(); + } + + inline void context(refs::BorrowedObject new_context); + + inline SwitchingArgs& args() + { + return this->switch_args; + } + + virtual const refs::BorrowedMainGreenlet main_greenlet() const = 0; + + inline intptr_t stack_saved() const noexcept + { + return this->stack_state.stack_saved(); + } + + // This is used by the macro SLP_SAVE_STATE to compute the + // difference in stack sizes. 
It might be nice to handle the + // computation ourself, but the type of the result + // varies by platform, so doing it in the macro is the + // simplest way. + inline const char* stack_start() const noexcept + { + return this->stack_state.stack_start(); + } + + virtual OwnedObject throw_GreenletExit_during_dealloc(const ThreadState& current_thread_state); + virtual OwnedObject g_switch() = 0; + /** + * Force the greenlet to appear dead. Used when it's not + * possible to throw an exception into a greenlet anymore. + * + * This losses access to the thread state and the main greenlet. + */ + virtual void murder_in_place(); + + /** + * Called when somebody notices we were running in a dead + * thread to allow cleaning up resources (because we can't + * raise GreenletExit into it anymore). + * This is very similar to ``murder_in_place()``, except that + * it DOES NOT lose the main greenlet or thread state. + */ + inline void deactivate_and_free(); + + + // Called when some thread wants to deallocate a greenlet + // object. + // The thread may or may not be the same thread the greenlet + // was running in. + // The thread state will be null if the thread the greenlet + // was running in was known to have exited. + void deallocing_greenlet_in_thread(const ThreadState* current_state); + + // Must be called on 3.12+ before exposing a suspended greenlet's + // frames to user code. This rewrites the linked list of interpreter + // frames to skip the ones that are being stored on the C stack (which + // can't be safely accessed while the greenlet is suspended because + // that stack space might be hosting a different greenlet), and + // sets PythonState::frames_were_exposed so we remember to restore + // the original list before resuming the greenlet. The C-stack frames + // are a low-level interpreter implementation detail; while they're + // important to the bytecode eval loop, they're superfluous for + // introspection purposes. + void expose_frames(); + + + // TODO: Figure out how to make these non-public. + inline void slp_restore_state() noexcept; + inline int slp_save_state(char *const stackref) noexcept; + + inline bool is_currently_running_in_some_thread() const; + virtual bool belongs_to_thread(const ThreadState* state) const; + + inline bool started() const + { + return this->stack_state.started(); + } + inline bool active() const + { + return this->stack_state.active(); + } + inline bool main() const + { + return this->stack_state.main(); + } + virtual refs::BorrowedMainGreenlet find_main_greenlet_in_lineage() const = 0; + + virtual const OwnedGreenlet parent() const = 0; + virtual void parent(const refs::BorrowedObject new_parent) = 0; + + inline const PythonState::OwnedFrame& top_frame() + { + return this->python_state.top_frame(); + } + + virtual const OwnedObject& run() const = 0; + virtual void run(const refs::BorrowedObject nrun) = 0; + + + virtual int tp_traverse(visitproc visit, void* arg); + virtual int tp_clear(); + + + // Return the thread state that the greenlet is running in, or + // null if the greenlet is not running or the thread is known + // to have exited. + virtual ThreadState* thread_state() const noexcept = 0; + + // Return true if the greenlet is known to have been running + // (active) in a thread that has now exited. + virtual bool was_running_in_dead_thread() const noexcept = 0; + + // Return a borrowed greenlet that is the Python object + // this object represents. 
+ inline BorrowedGreenlet self() const noexcept + { + return BorrowedGreenlet(this->_self); + } + + // For testing. If this returns true, we should pretend that + // slp_switch() failed. + virtual bool force_slp_switch_error() const noexcept; + + protected: + inline void release_args(); + + // The functions that must not be inlined are declared virtual. + // We also mark them as protected, not private, so that the + // compiler is forced to call them through a function pointer. + // (A sufficiently smart compiler could directly call a private + // virtual function since it can never be overridden in a + // subclass). + + // Also TODO: Switch away from integer error codes and to enums, + // or throw exceptions when possible. + struct switchstack_result_t + { + int status; + Greenlet* the_new_current_greenlet; + OwnedGreenlet origin_greenlet; + + switchstack_result_t() + : status(0), + the_new_current_greenlet(nullptr) + {} + + switchstack_result_t(int err) + : status(err), + the_new_current_greenlet(nullptr) + {} + + switchstack_result_t(int err, Greenlet* state, OwnedGreenlet& origin) + : status(err), + the_new_current_greenlet(state), + origin_greenlet(origin) + { + } + + switchstack_result_t(int err, Greenlet* state, const BorrowedGreenlet& origin) + : status(err), + the_new_current_greenlet(state), + origin_greenlet(origin) + { + } + + switchstack_result_t(const switchstack_result_t& other) + : status(other.status), + the_new_current_greenlet(other.the_new_current_greenlet), + origin_greenlet(other.origin_greenlet) + {} + + switchstack_result_t& operator=(const switchstack_result_t& other) + { + this->status = other.status; + this->the_new_current_greenlet = other.the_new_current_greenlet; + this->origin_greenlet = other.origin_greenlet; + return *this; + } + }; + + OwnedObject on_switchstack_or_initialstub_failure( + Greenlet* target, + const switchstack_result_t& err, + const bool target_was_me=false, + const bool was_initial_stub=false); + + // Returns the previous greenlet we just switched away from. + virtual OwnedGreenlet g_switchstack_success() noexcept; + + + // Check the preconditions for switching to this greenlet; if they + // aren't met, throws PyErrOccurred. Most callers will want to + // catch this and clear the arguments + inline void check_switch_allowed() const; + class GreenletStartedWhileInPython : public std::runtime_error + { + public: + GreenletStartedWhileInPython() : std::runtime_error("") + {} + }; + + protected: + + + /** + Perform a stack switch into this greenlet. + + This temporarily sets the global variable + ``switching_thread_state`` to this greenlet; as soon as the + call to ``slp_switch`` completes, this is reset to NULL. + Consequently, this depends on the GIL. + + TODO: Adopt the stackman model and pass ``slp_switch`` a + callback function and context pointer; this eliminates the + need for global variables altogether. + + Because the stack switch happens in this function, this + function can't use its own stack (local) variables, set + before the switch, and then accessed after the switch. + + Further, you con't even access ``g_thread_state_global`` + before and after the switch from the global variable. + Because it is thread local some compilers cache it in a + register/on the stack, notably new versions of MSVC; this + breaks with strange crashes sometime later, because writing + to anything in ``g_thread_state_global`` after the switch + is actually writing to random memory. 
For this reason, we + call a non-inlined function to finish the operation. (XXX: + The ``/GT`` MSVC compiler argument probably fixes that.) + + It is very important that stack switch is 'atomic', i.e. no + calls into other Python code allowed (except very few that + are safe), because global variables are very fragile. (This + should no longer be the case with thread-local variables.) + + */ + // Made virtual to facilitate subclassing UserGreenlet for testing. + virtual switchstack_result_t g_switchstack(void); + +class TracingGuard +{ +private: + PyThreadState* tstate; +public: + TracingGuard() + : tstate(PyThreadState_GET()) + { + PyThreadState_EnterTracing(this->tstate); + } + + ~TracingGuard() + { + PyThreadState_LeaveTracing(this->tstate); + this->tstate = nullptr; + } + + inline void CallTraceFunction(const OwnedObject& tracefunc, + const greenlet::refs::ImmortalEventName& event, + const BorrowedGreenlet& origin, + const BorrowedGreenlet& target) + { + // TODO: This calls tracefunc(event, (origin, target)). Add a shortcut + // function for that that's specialized to avoid the Py_BuildValue + // string parsing, or start with just using "ON" format with PyTuple_Pack(2, + // origin, target). That seems like what the N format is meant + // for. + // XXX: Why does event not automatically cast back to a PyObject? + // It tries to call the "deleted constructor ImmortalEventName + // const" instead. + assert(tracefunc); + assert(event); + assert(origin); + assert(target); + greenlet::refs::NewReference retval( + PyObject_CallFunction( + tracefunc.borrow(), + "O(OO)", + event.borrow(), + origin.borrow(), + target.borrow() + )); + if (!retval) { + throw PyErrOccurred::from_current(); + } + } +}; + + static void + g_calltrace(const OwnedObject& tracefunc, + const greenlet::refs::ImmortalEventName& event, + const greenlet::refs::BorrowedGreenlet& origin, + const BorrowedGreenlet& target); + private: + OwnedObject g_switch_finish(const switchstack_result_t& err); + + }; + + class UserGreenlet : public Greenlet + { + private: + static greenlet::PythonAllocator allocator; + OwnedMainGreenlet _main_greenlet; + OwnedObject _run_callable; + OwnedGreenlet _parent; + public: + static void* operator new(size_t UNUSED(count)); + static void operator delete(void* ptr); + + UserGreenlet(PyGreenlet* p, BorrowedGreenlet the_parent); + virtual ~UserGreenlet(); + + virtual refs::BorrowedMainGreenlet find_main_greenlet_in_lineage() const; + virtual bool was_running_in_dead_thread() const noexcept; + virtual ThreadState* thread_state() const noexcept; + virtual OwnedObject g_switch(); + virtual const OwnedObject& run() const + { + if (this->started() || !this->_run_callable) { + throw AttributeError("run"); + } + return this->_run_callable; + } + virtual void run(const refs::BorrowedObject nrun); + + virtual const OwnedGreenlet parent() const; + virtual void parent(const refs::BorrowedObject new_parent); + + virtual const refs::BorrowedMainGreenlet main_greenlet() const; + + virtual void murder_in_place(); + virtual bool belongs_to_thread(const ThreadState* state) const; + virtual int tp_traverse(visitproc visit, void* arg); + virtual int tp_clear(); + class ParentIsCurrentGuard + { + private: + OwnedGreenlet oldparent; + UserGreenlet* greenlet; + G_NO_COPIES_OF_CLS(ParentIsCurrentGuard); + public: + ParentIsCurrentGuard(UserGreenlet* p, const ThreadState& thread_state); + ~ParentIsCurrentGuard(); + }; + virtual OwnedObject throw_GreenletExit_during_dealloc(const ThreadState& current_thread_state); + protected: + 
virtual switchstack_result_t g_initialstub(void* mark); + private: + // This function isn't meant to return. + // This accepts raw pointers and the ownership of them at the + // same time. The caller should use ``inner_bootstrap(origin.relinquish_ownership())``. + void inner_bootstrap(PyGreenlet* origin_greenlet, PyObject* run); + }; + + class BrokenGreenlet : public UserGreenlet + { + private: + static greenlet::PythonAllocator allocator; + public: + bool _force_switch_error = false; + bool _force_slp_switch_error = false; + + static void* operator new(size_t UNUSED(count)); + static void operator delete(void* ptr); + BrokenGreenlet(PyGreenlet* p, BorrowedGreenlet the_parent) + : UserGreenlet(p, the_parent) + {} + virtual ~BrokenGreenlet() + {} + + virtual switchstack_result_t g_switchstack(void); + virtual bool force_slp_switch_error() const noexcept; + + }; + + class MainGreenlet : public Greenlet + { + private: + static greenlet::PythonAllocator allocator; + refs::BorrowedMainGreenlet _self; + ThreadState* _thread_state; + G_NO_COPIES_OF_CLS(MainGreenlet); + public: + static void* operator new(size_t UNUSED(count)); + static void operator delete(void* ptr); + + MainGreenlet(refs::BorrowedMainGreenlet::PyType*, ThreadState*); + virtual ~MainGreenlet(); + + + virtual const OwnedObject& run() const; + virtual void run(const refs::BorrowedObject nrun); + + virtual const OwnedGreenlet parent() const; + virtual void parent(const refs::BorrowedObject new_parent); + + virtual const refs::BorrowedMainGreenlet main_greenlet() const; + + virtual refs::BorrowedMainGreenlet find_main_greenlet_in_lineage() const; + virtual bool was_running_in_dead_thread() const noexcept; + virtual ThreadState* thread_state() const noexcept; + void thread_state(ThreadState*) noexcept; + virtual OwnedObject g_switch(); + virtual int tp_traverse(visitproc visit, void* arg); + }; + + // Instantiate one on the stack to save the GC state, + // and then disable GC. When it goes out of scope, GC will be + // restored to its original state. Sadly, these APIs are only + // available on 3.10+; luckily, we only need them on 3.11+. +#if GREENLET_PY310 + class GCDisabledGuard + { + private: + int was_enabled = 0; + public: + GCDisabledGuard() + : was_enabled(PyGC_IsEnabled()) + { + PyGC_Disable(); + } + + ~GCDisabledGuard() + { + if (this->was_enabled) { + PyGC_Enable(); + } + } + }; +#endif + + OwnedObject& operator<<=(OwnedObject& lhs, greenlet::SwitchingArgs& rhs) noexcept; + + //TODO: Greenlet::g_switch() should call this automatically on its + //return value. As it is, the module code is calling it. 
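That TODO concerns ``single_result``, defined immediately below: it unwraps the 1-tuple that the switch machinery and ``g_handle_exit`` produce, so a single switched value arrives bare. A sketch of the visible effect, again assuming the public ``greenlet`` API:

```python
# single_result unwraps 1-tuples; larger tuples pass through unchanged.
from greenlet import greenlet, getcurrent

def child():
    main = getcurrent().parent
    print(main.switch())   # 7: the packed (7,) is unwrapped to a bare 7
    print(main.switch())   # (7, 8): a 2-tuple is returned as-is

g = greenlet(child)
g.switch()       # start the child
g.switch(7)
g.switch(7, 8)
```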
+ static inline OwnedObject + single_result(const OwnedObject& results) + { + if (results + && PyTuple_Check(results.borrow()) + && PyTuple_GET_SIZE(results.borrow()) == 1) { + PyObject* result = PyTuple_GET_ITEM(results.borrow(), 0); + assert(result); + return OwnedObject::owning(result); + } + return results; + } + + + static OwnedObject + g_handle_exit(const OwnedObject& greenlet_result); + + + template + void operator<<(const PyThreadState *const lhs, T& rhs) + { + rhs.operator<<(lhs); + } + +} // namespace greenlet ; + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TGreenletGlobals.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TGreenletGlobals.cpp new file mode 100644 index 000000000..0087d2ff3 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TGreenletGlobals.cpp @@ -0,0 +1,94 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ +/** + * Implementation of GreenletGlobals. + * + * Format with: + * clang-format -i --style=file src/greenlet/greenlet.c + * + * + * Fix missing braces with: + * clang-tidy src/greenlet/greenlet.c -fix -checks="readability-braces-around-statements" +*/ +#ifndef T_GREENLET_GLOBALS +#define T_GREENLET_GLOBALS + +#include "greenlet_refs.hpp" +#include "greenlet_exceptions.hpp" +#include "greenlet_thread_support.hpp" +#include "greenlet_internal.hpp" + +namespace greenlet { + +// This encapsulates what were previously module global "constants" +// established at init time. +// This is a step towards Python3 style module state that allows +// reloading. +// +// In an earlier iteration of this code, we used placement new to be +// able to allocate this object statically still, so that references +// to its members don't incur an extra pointer indirection. +// But under some scenarios, that could result in crashes at +// shutdown because apparently the destructor was getting run twice? +class GreenletGlobals +{ + +public: + const greenlet::refs::ImmortalEventName event_switch; + const greenlet::refs::ImmortalEventName event_throw; + const greenlet::refs::ImmortalException PyExc_GreenletError; + const greenlet::refs::ImmortalException PyExc_GreenletExit; + const greenlet::refs::ImmortalObject empty_tuple; + const greenlet::refs::ImmortalObject empty_dict; + const greenlet::refs::ImmortalString str_run; + Mutex* const thread_states_to_destroy_lock; + greenlet::cleanup_queue_t thread_states_to_destroy; + + GreenletGlobals() : + event_switch("switch"), + event_throw("throw"), + PyExc_GreenletError("greenlet.error"), + PyExc_GreenletExit("greenlet.GreenletExit", PyExc_BaseException), + empty_tuple(Require(PyTuple_New(0))), + empty_dict(Require(PyDict_New())), + str_run("run"), + thread_states_to_destroy_lock(new Mutex()) + {} + + ~GreenletGlobals() + { + // This object is (currently) effectively immortal, and not + // just because of those placement new tricks; if we try to + // deallocate the static object we allocated, and overwrote, + // we would be doing so at C++ teardown time, which is after + // the final Python GIL is released, and we can't use the API + // then. + // (The members will still be destructed, but they also don't + // do any deallocation.) + } + + void queue_to_destroy(ThreadState* ts) const + { + // we're currently accessed through a static const object, + // implicitly marking our members as const, so code can't just + // call push_back (or pop_back) without casting away the + // const. + // + // Do that for callers. 
+        greenlet::cleanup_queue_t& q = const_cast<greenlet::cleanup_queue_t&>(this->thread_states_to_destroy);
+        q.push_back(ts);
+    }
+
+    ThreadState* take_next_to_destroy() const
+    {
+        greenlet::cleanup_queue_t& q = const_cast<greenlet::cleanup_queue_t&>(this->thread_states_to_destroy);
+        ThreadState* result = q.back();
+        q.pop_back();
+        return result;
+    }
+};
+
+}; // namespace greenlet
+
+static const greenlet::GreenletGlobals* mod_globs;
+
+#endif // T_GREENLET_GLOBALS
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TMainGreenlet.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TMainGreenlet.cpp
new file mode 100644
index 000000000..a2a9cfe4b
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TMainGreenlet.cpp
@@ -0,0 +1,153 @@
+/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */
+/**
+ * Implementation of greenlet::MainGreenlet.
+ *
+ * Format with:
+ *  clang-format -i --style=file src/greenlet/greenlet.c
+ *
+ *
+ * Fix missing braces with:
+ * clang-tidy src/greenlet/greenlet.c -fix -checks="readability-braces-around-statements"
+*/
+#ifndef T_MAIN_GREENLET_CPP
+#define T_MAIN_GREENLET_CPP
+
+#include "TGreenlet.hpp"
+
+
+
+// Protected by the GIL. Incremented when we create a main greenlet,
+// in a new thread, decremented when it is destroyed.
+static Py_ssize_t G_TOTAL_MAIN_GREENLETS;
+
+namespace greenlet {
+greenlet::PythonAllocator<MainGreenlet> MainGreenlet::allocator;
+
+void* MainGreenlet::operator new(size_t UNUSED(count))
+{
+    return allocator.allocate(1);
+}
+
+
+void MainGreenlet::operator delete(void* ptr)
+{
+    return allocator.deallocate(static_cast<MainGreenlet*>(ptr),
+                                1);
+}
+
+
+MainGreenlet::MainGreenlet(PyGreenlet* p, ThreadState* state)
+    : Greenlet(p, StackState::make_main()),
+      _self(p),
+      _thread_state(state)
+{
+    G_TOTAL_MAIN_GREENLETS++;
+}
+
+MainGreenlet::~MainGreenlet()
+{
+    G_TOTAL_MAIN_GREENLETS--;
+    this->tp_clear();
+}
+
+ThreadState*
+MainGreenlet::thread_state() const noexcept
+{
+    return this->_thread_state;
+}
+
+void
+MainGreenlet::thread_state(ThreadState* t) noexcept
+{
+    assert(!t);
+    this->_thread_state = t;
+}
+
+
+const BorrowedMainGreenlet
+MainGreenlet::main_greenlet() const
+{
+    return this->_self;
+}
+
+BorrowedMainGreenlet
+MainGreenlet::find_main_greenlet_in_lineage() const
+{
+    return BorrowedMainGreenlet(this->_self);
+}
+
+bool
+MainGreenlet::was_running_in_dead_thread() const noexcept
+{
+    return !this->_thread_state;
+}
+
+OwnedObject
+MainGreenlet::g_switch()
+{
+    try {
+        this->check_switch_allowed();
+    }
+    catch (const PyErrOccurred&) {
+        this->release_args();
+        throw;
+    }
+
+    switchstack_result_t err = this->g_switchstack();
+    if (err.status < 0) {
+        // XXX: This code path is untested, but it is shared
+        // with the UserGreenlet path that is tested.
+        return this->on_switchstack_or_initialstub_failure(
+            this,
+            err,
+            true, // target was me
+            false // was initial stub
+        );
+    }
+
+    return err.the_new_current_greenlet->g_switch_finish(err);
+}
+
+int
+MainGreenlet::tp_traverse(visitproc visit, void* arg)
+{
+    if (this->_thread_state) {
+        // we've already traversed main, (self), don't do it again.
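``PyExc_GreenletExit``, created by ``GreenletGlobals`` above, backs the public ``greenlet.GreenletExit`` class; ``g_handle_exit`` earlier in this diff converts it from a raised exception into an ordinary return value. A short demonstration, assuming the public API:

```python
# GreenletExit kills a greenlet quietly: it is caught, not propagated,
# and the exception instance becomes the switch's return value.
from greenlet import greenlet, getcurrent, GreenletExit

def child():
    getcurrent().parent.switch()   # suspend; we will be killed, not resumed

g = greenlet(child)
g.switch()                          # child is now suspended
result = g.throw(GreenletExit)      # raise GreenletExit inside the child
print(isinstance(result, GreenletExit))  # True: returned, not re-raised
print(g.dead)                            # True: the greenlet is finished
```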
+        int result = this->_thread_state->tp_traverse(visit, arg, false);
+        if (result) {
+            return result;
+        }
+    }
+    return Greenlet::tp_traverse(visit, arg);
+}
+
+const OwnedObject&
+MainGreenlet::run() const
+{
+    throw AttributeError("Main greenlets do not have a run attribute.");
+}
+
+void
+MainGreenlet::run(const BorrowedObject UNUSED(nrun))
+{
+    throw AttributeError("Main greenlets do not have a run attribute.");
+}
+
+void
+MainGreenlet::parent(const BorrowedObject raw_new_parent)
+{
+    if (!raw_new_parent) {
+        throw AttributeError("can't delete attribute");
+    }
+    throw AttributeError("cannot set the parent of a main greenlet");
+}
+
+const OwnedGreenlet
+MainGreenlet::parent() const
+{
+    return OwnedGreenlet(); // null becomes None
+}
+
+}; // namespace greenlet
+
+#endif
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TPythonState.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TPythonState.cpp
new file mode 100644
index 000000000..a7f743cfe
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TPythonState.cpp
@@ -0,0 +1,402 @@
+#ifndef GREENLET_PYTHON_STATE_CPP
+#define GREENLET_PYTHON_STATE_CPP
+
+#include <Python.h>
+#include "TGreenlet.hpp"
+
+namespace greenlet {
+
+PythonState::PythonState()
+    : _top_frame()
+#if GREENLET_USE_CFRAME
+    ,cframe(nullptr)
+    ,use_tracing(0)
+#endif
+#if GREENLET_PY314
+    ,py_recursion_depth(0)
+#elif GREENLET_PY312
+    ,py_recursion_depth(0)
+    ,c_recursion_depth(0)
+#else
+    ,recursion_depth(0)
+#endif
+#if GREENLET_PY313
+    ,delete_later(nullptr)
+#else
+    ,trash_delete_nesting(0)
+#endif
+#if GREENLET_PY311
+    ,current_frame(nullptr)
+    ,datastack_chunk(nullptr)
+    ,datastack_top(nullptr)
+    ,datastack_limit(nullptr)
+#endif
+{
+#if GREENLET_USE_CFRAME
+    /*
+      The PyThreadState->cframe pointer usually points to memory on
+      the stack, allocated in a call into PyEval_EvalFrameDefault.
+
+      Initially, before any evaluation begins, it points to the
+      initial PyThreadState object's ``root_cframe`` object, which is
+      statically allocated for the lifetime of the thread.
+
+      A greenlet can last for longer than a call to
+      PyEval_EvalFrameDefault, so we can't set its ``cframe`` pointer
+      to be the current ``PyThreadState->cframe``; nor could we use
+      one from the greenlet parent for the same reason. Yet a further
+      no: we can't allocate one scoped to the greenlet and then
+      destroy it when the greenlet is deallocated, because inside the
+      interpreter the _PyCFrame objects form a linked list, and that too
+      can result in accessing memory beyond its dynamic lifetime (if
+      the greenlet doesn't actually finish before it dies, its entry
+      could still be in the list).
+
+      Using the ``root_cframe`` is problematic, though, because its
+      members are never modified by the interpreter and are set to 0,
+      meaning that its ``use_tracing`` flag is never updated. We don't
+      want to modify that value in the ``root_cframe`` ourself: it
+      *shouldn't* matter much because we should probably never get
+      back to the point where that's the only cframe on the stack;
+      even if it did matter, the major consequence of an incorrect
+      value for ``use_tracing`` is that if it's true the interpreter
+      does some extra work --- however, it's just good code hygiene.
+
+      Our solution: before a greenlet runs, after its initial
+      creation, it uses the ``root_cframe`` just to have something to
+      put there.
However, once the greenlet is actually switched to + for the first time, ``g_initialstub`` (which doesn't actually + "return" while the greenlet is running) stores a new _PyCFrame on + its local stack, and copies the appropriate values from the + currently running _PyCFrame; this is then made the _PyCFrame for the + newly-minted greenlet. ``g_initialstub`` then proceeds to call + ``glet.run()``, which results in ``PyEval_...`` adding the + _PyCFrame to the list. Switches continue as normal. Finally, when + the greenlet finishes, the call to ``glet.run()`` returns and + the _PyCFrame is taken out of the linked list and the stack value + is now unused and free to expire. + + XXX: I think we can do better. If we're deallocing in the same + thread, can't we traverse the list and unlink our frame? + Can we just keep a reference to the thread state in case we + dealloc in another thread? (Is that even possible if we're still + running and haven't returned from g_initialstub?) + */ + this->cframe = &PyThreadState_GET()->root_cframe; +#endif +} + + +inline void PythonState::may_switch_away() noexcept +{ +#if GREENLET_PY311 + // PyThreadState_GetFrame is probably going to have to allocate a + // new frame object. That may trigger garbage collection. Because + // we call this during the early phases of a switch (it doesn't + // matter to which greenlet, as this has a global effect), if a GC + // triggers a switch away, two things can happen, both bad: + // - We might not get switched back to, halting forward progress. + // this is pathological, but possible. + // - We might get switched back to with a different set of + // arguments or a throw instead of a switch. That would corrupt + // our state (specifically, PyErr_Occurred() and this->args() + // would no longer agree). + // + // Thus, when we call this API, we need to have GC disabled. + // This method serves as a bottleneck we call when maybe beginning + // a switch. In this way, it is always safe -- no risk of GC -- to + // use ``_GetFrame()`` whenever we need to, just as it was in + // <=3.10 (because subsequent calls will be cached and not + // allocate memory). + + GCDisabledGuard no_gc; + Py_XDECREF(PyThreadState_GetFrame(PyThreadState_GET())); +#endif +} + +void PythonState::operator<<(const PyThreadState *const tstate) noexcept +{ + this->_context.steal(tstate->context); +#if GREENLET_USE_CFRAME + /* + IMPORTANT: ``cframe`` is a pointer into the STACK. Thus, because + the call to ``slp_switch()`` changes the contents of the stack, + you cannot read from ``ts_current->cframe`` after that call and + necessarily get the same values you get from reading it here. + Anything you need to restore from now to then must be saved in a + global/threadlocal variable (because we can't use stack + variables here either). For things that need to persist across + the switch, use `will_switch_from`. 
+ */ + this->cframe = tstate->cframe; + #if !GREENLET_PY312 + this->use_tracing = tstate->cframe->use_tracing; + #endif +#endif // GREENLET_USE_CFRAME +#if GREENLET_PY311 + #if GREENLET_PY314 + this->py_recursion_depth = tstate->py_recursion_limit - tstate->py_recursion_remaining; + #elif GREENLET_PY312 + this->py_recursion_depth = tstate->py_recursion_limit - tstate->py_recursion_remaining; + this->c_recursion_depth = Py_C_RECURSION_LIMIT - tstate->c_recursion_remaining; + #else // not 312 + this->recursion_depth = tstate->recursion_limit - tstate->recursion_remaining; + #endif // GREENLET_PY312 + #if GREENLET_PY313 + this->current_frame = tstate->current_frame; + #elif GREENLET_USE_CFRAME + this->current_frame = tstate->cframe->current_frame; + #endif + this->datastack_chunk = tstate->datastack_chunk; + this->datastack_top = tstate->datastack_top; + this->datastack_limit = tstate->datastack_limit; + + PyFrameObject *frame = PyThreadState_GetFrame((PyThreadState *)tstate); + Py_XDECREF(frame); // PyThreadState_GetFrame gives us a new + // reference. + this->_top_frame.steal(frame); + #if GREENLET_PY313 + this->delete_later = Py_XNewRef(tstate->delete_later); + #elif GREENLET_PY312 + this->trash_delete_nesting = tstate->trash.delete_nesting; + #else // not 312 + this->trash_delete_nesting = tstate->trash_delete_nesting; + #endif // GREENLET_PY312 +#else // Not 311 + this->recursion_depth = tstate->recursion_depth; + this->_top_frame.steal(tstate->frame); + this->trash_delete_nesting = tstate->trash_delete_nesting; +#endif // GREENLET_PY311 +} + +#if GREENLET_PY312 +void GREENLET_NOINLINE(PythonState::unexpose_frames)() +{ + if (!this->top_frame()) { + return; + } + + // See GreenletState::expose_frames() and the comment on frames_were_exposed + // for more information about this logic. + _PyInterpreterFrame *iframe = this->_top_frame->f_frame; + while (iframe != nullptr) { + _PyInterpreterFrame *prev_exposed = iframe->previous; + assert(iframe->frame_obj); + memcpy(&iframe->previous, &iframe->frame_obj->_f_frame_data[0], + sizeof(void *)); + iframe = prev_exposed; + } +} +#else +void PythonState::unexpose_frames() +{} +#endif + +void PythonState::operator>>(PyThreadState *const tstate) noexcept +{ + tstate->context = this->_context.relinquish_ownership(); + /* Incrementing this value invalidates the contextvars cache, + which would otherwise remain valid across switches */ + tstate->context_ver++; +#if GREENLET_USE_CFRAME + tstate->cframe = this->cframe; + /* + If we were tracing, we need to keep tracing. + There should never be the possibility of hitting the + root_cframe here. See note above about why we can't + just copy this from ``origin->cframe->use_tracing``. 
+ */ + #if !GREENLET_PY312 + tstate->cframe->use_tracing = this->use_tracing; + #endif +#endif // GREENLET_USE_CFRAME +#if GREENLET_PY311 + #if GREENLET_PY314 + tstate->py_recursion_remaining = tstate->py_recursion_limit - this->py_recursion_depth; + this->unexpose_frames(); + #elif GREENLET_PY312 + tstate->py_recursion_remaining = tstate->py_recursion_limit - this->py_recursion_depth; + tstate->c_recursion_remaining = Py_C_RECURSION_LIMIT - this->c_recursion_depth; + this->unexpose_frames(); + #else // \/ 3.11 + tstate->recursion_remaining = tstate->recursion_limit - this->recursion_depth; + #endif // GREENLET_PY312 + #if GREENLET_PY313 + tstate->current_frame = this->current_frame; + #elif GREENLET_USE_CFRAME + tstate->cframe->current_frame = this->current_frame; + #endif + tstate->datastack_chunk = this->datastack_chunk; + tstate->datastack_top = this->datastack_top; + tstate->datastack_limit = this->datastack_limit; + this->_top_frame.relinquish_ownership(); + #if GREENLET_PY313 + Py_XDECREF(tstate->delete_later); + tstate->delete_later = this->delete_later; + Py_CLEAR(this->delete_later); + #elif GREENLET_PY312 + tstate->trash.delete_nesting = this->trash_delete_nesting; + #else // not 3.12 + tstate->trash_delete_nesting = this->trash_delete_nesting; + #endif // GREENLET_PY312 +#else // not 3.11 + tstate->frame = this->_top_frame.relinquish_ownership(); + tstate->recursion_depth = this->recursion_depth; + tstate->trash_delete_nesting = this->trash_delete_nesting; +#endif // GREENLET_PY311 +} + +inline void PythonState::will_switch_from(PyThreadState *const origin_tstate) noexcept +{ +#if GREENLET_USE_CFRAME && !GREENLET_PY312 + // The weird thing is, we don't actually save this for an + // effect on the current greenlet, it's saved for an + // effect on the target greenlet. That is, we want + // continuity of this setting across the greenlet switch. + this->use_tracing = origin_tstate->cframe->use_tracing; +#endif +} + +void PythonState::set_initial_state(const PyThreadState* const tstate) noexcept +{ + this->_top_frame = nullptr; +#if GREENLET_PY314 + this->py_recursion_depth = tstate->py_recursion_limit - tstate->py_recursion_remaining; +#elif GREENLET_PY312 + this->py_recursion_depth = tstate->py_recursion_limit - tstate->py_recursion_remaining; + // XXX: TODO: Comment from a reviewer: + // Should this be ``Py_C_RECURSION_LIMIT - tstate->c_recursion_remaining``? + // But to me it looks more like that might not be the right + // initialization either? + this->c_recursion_depth = tstate->py_recursion_limit - tstate->py_recursion_remaining; +#elif GREENLET_PY311 + this->recursion_depth = tstate->recursion_limit - tstate->recursion_remaining; +#else + this->recursion_depth = tstate->recursion_depth; +#endif +} +// TODO: Better state management about when we own the top frame. +int PythonState::tp_traverse(visitproc visit, void* arg, bool own_top_frame) noexcept +{ + Py_VISIT(this->_context.borrow()); + if (own_top_frame) { + Py_VISIT(this->_top_frame.borrow()); + } + return 0; +} + +void PythonState::tp_clear(bool own_top_frame) noexcept +{ + PythonStateContext::tp_clear(); + // If we get here owning a frame, + // we got dealloc'd without being finished. We may or may not be + // in the same thread. + if (own_top_frame) { + this->_top_frame.CLEAR(); + } +} + +#if GREENLET_USE_CFRAME +void PythonState::set_new_cframe(_PyCFrame& frame) noexcept +{ + frame = *PyThreadState_GET()->cframe; + /* Make the target greenlet refer to the stack value. 
*/ + this->cframe = &frame; + /* + And restore the link to the previous frame so this one gets + unliked appropriately. + */ + this->cframe->previous = &PyThreadState_GET()->root_cframe; +} +#endif + +const PythonState::OwnedFrame& PythonState::top_frame() const noexcept +{ + return this->_top_frame; +} + +void PythonState::did_finish(PyThreadState* tstate) noexcept +{ +#if GREENLET_PY311 + // See https://github.com/gevent/gevent/issues/1924 and + // https://github.com/python-greenlet/greenlet/issues/328. In + // short, Python 3.11 allocates memory for frames as a sort of + // linked list that's kept as part of PyThreadState in the + // ``datastack_chunk`` member and friends. These are saved and + // restored as part of switching greenlets. + // + // When we initially switch to a greenlet, we set those to NULL. + // That causes the frame management code to treat this like a + // brand new thread and start a fresh list of chunks, beginning + // with a new "root" chunk. As we make calls in this greenlet, + // those chunks get added, and as calls return, they get popped. + // But the frame code (pystate.c) is careful to make sure that the + // root chunk never gets popped. + // + // Thus, when a greenlet exits for the last time, there will be at + // least a single root chunk that we must be responsible for + // deallocating. + // + // The complex part is that these chunks are allocated and freed + // using ``_PyObject_VirtualAlloc``/``Free``. Those aren't public + // functions, and they aren't exported for linking. It so happens + // that we know they are just thin wrappers around the Arena + // allocator, so we can use that directly to deallocate in a + // compatible way. + // + // CAUTION: Check this implementation detail on every major version. + // + // It might be nice to be able to do this in our destructor, but + // can we be sure that no one else is using that memory? Plus, as + // described below, our pointers may not even be valid anymore. As + // a special case, there is one time that we know we can do this, + // and that's from the destructor of the associated UserGreenlet + // (NOT main greenlet) + PyObjectArenaAllocator alloc; + _PyStackChunk* chunk = nullptr; + if (tstate) { + // We really did finish, we can never be switched to again. + chunk = tstate->datastack_chunk; + // Unfortunately, we can't do much sanity checking. Our + // this->datastack_chunk pointer is out of date (evaluation may + // have popped down through it already) so we can't verify that + // we deallocate it. I don't think we can even check datastack_top + // for the same reason. + + PyObject_GetArenaAllocator(&alloc); + tstate->datastack_chunk = nullptr; + tstate->datastack_limit = nullptr; + tstate->datastack_top = nullptr; + + } + else if (this->datastack_chunk) { + // The UserGreenlet (NOT the main greenlet!) is being deallocated. If we're + // still holding a stack chunk, it's garbage because we know + // we can never switch back to let cPython clean it up. + // Because the last time we got switched away from, and we + // haven't run since then, we know our chain is valid and can + // be dealloced. + chunk = this->datastack_chunk; + PyObject_GetArenaAllocator(&alloc); + } + + if (alloc.free && chunk) { + // In case the arena mechanism has been torn down already. 
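``top_frame()`` above is the storage behind the public ``gr_frame`` attribute, which is only non-None while a greenlet is started, alive, and suspended. A sketch, assuming the public ``greenlet`` API:

```python
# gr_frame exposes the saved top frame of a suspended greenlet.
from greenlet import greenlet, getcurrent

def child():
    getcurrent().parent.switch()   # suspend here, mid-call

g = greenlet(child)
g.switch()                         # child is now paused inside child()
print(g.gr_frame.f_code.co_name)   # 'child': the frame saved in _top_frame
g.switch()                         # resume; child() returns and g dies
print(g.gr_frame)                  # None: dead greenlets have no frame
```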
+ while (chunk) { + _PyStackChunk *prev = chunk->previous; + chunk->previous = nullptr; + alloc.free(alloc.ctx, chunk, chunk->size); + chunk = prev; + } + } + + this->datastack_chunk = nullptr; + this->datastack_limit = nullptr; + this->datastack_top = nullptr; +#endif +} + + +}; // namespace greenlet + +#endif // GREENLET_PYTHON_STATE_CPP diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TStackState.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TStackState.cpp new file mode 100644 index 000000000..9743ab51b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TStackState.cpp @@ -0,0 +1,265 @@ +#ifndef GREENLET_STACK_STATE_CPP +#define GREENLET_STACK_STATE_CPP + +#include "TGreenlet.hpp" + +namespace greenlet { + +#ifdef GREENLET_USE_STDIO +#include +using std::cerr; +using std::endl; + +std::ostream& operator<<(std::ostream& os, const StackState& s) +{ + os << "StackState(stack_start=" << (void*)s._stack_start + << ", stack_stop=" << (void*)s.stack_stop + << ", stack_copy=" << (void*)s.stack_copy + << ", stack_saved=" << s._stack_saved + << ", stack_prev=" << s.stack_prev + << ", addr=" << &s + << ")"; + return os; +} +#endif + +StackState::StackState(void* mark, StackState& current) + : _stack_start(nullptr), + stack_stop((char*)mark), + stack_copy(nullptr), + _stack_saved(0), + /* Skip a dying greenlet */ + stack_prev(current._stack_start + ? ¤t + : current.stack_prev) +{ +} + +StackState::StackState() + : _stack_start(nullptr), + stack_stop(nullptr), + stack_copy(nullptr), + _stack_saved(0), + stack_prev(nullptr) +{ +} + +StackState::StackState(const StackState& other) +// can't use a delegating constructor because of +// MSVC for Python 2.7 + : _stack_start(nullptr), + stack_stop(nullptr), + stack_copy(nullptr), + _stack_saved(0), + stack_prev(nullptr) +{ + this->operator=(other); +} + +StackState& StackState::operator=(const StackState& other) +{ + if (&other == this) { + return *this; + } + if (other._stack_saved) { + throw std::runtime_error("Refusing to steal memory."); + } + + //If we have memory allocated, dispose of it + this->free_stack_copy(); + + this->_stack_start = other._stack_start; + this->stack_stop = other.stack_stop; + this->stack_copy = other.stack_copy; + this->_stack_saved = other._stack_saved; + this->stack_prev = other.stack_prev; + return *this; +} + +inline void StackState::free_stack_copy() noexcept +{ + PyMem_Free(this->stack_copy); + this->stack_copy = nullptr; + this->_stack_saved = 0; +} + +inline void StackState::copy_heap_to_stack(const StackState& current) noexcept +{ + + /* Restore the heap copy back into the C stack */ + if (this->_stack_saved != 0) { + memcpy(this->_stack_start, this->stack_copy, this->_stack_saved); + this->free_stack_copy(); + } + StackState* owner = const_cast(¤t); + if (!owner->_stack_start) { + owner = owner->stack_prev; /* greenlet is dying, skip it */ + } + while (owner && owner->stack_stop <= this->stack_stop) { + // cerr << "\tOwner: " << owner << endl; + owner = owner->stack_prev; /* find greenlet with more stack */ + } + this->stack_prev = owner; + // cerr << "\tFinished with: " << *this << endl; +} + +inline int StackState::copy_stack_to_heap_up_to(const char* const stop) noexcept +{ + /* Save more of g's stack into the heap -- at least up to 'stop' + g->stack_stop |________| + | | + | __ stop . . . . . + | | ==> . . 
+ |________| _______ + | | | | + | | | | + g->stack_start | | |_______| g->stack_copy + */ + intptr_t sz1 = this->_stack_saved; + intptr_t sz2 = stop - this->_stack_start; + assert(this->_stack_start); + if (sz2 > sz1) { + char* c = (char*)PyMem_Realloc(this->stack_copy, sz2); + if (!c) { + PyErr_NoMemory(); + return -1; + } + memcpy(c + sz1, this->_stack_start + sz1, sz2 - sz1); + this->stack_copy = c; + this->_stack_saved = sz2; + } + return 0; +} + +inline int StackState::copy_stack_to_heap(char* const stackref, + const StackState& current) noexcept +{ + /* must free all the C stack up to target_stop */ + const char* const target_stop = this->stack_stop; + + StackState* owner = const_cast(¤t); + assert(owner->_stack_saved == 0); // everything is present on the stack + if (!owner->_stack_start) { + owner = owner->stack_prev; /* not saved if dying */ + } + else { + owner->_stack_start = stackref; + } + + while (owner->stack_stop < target_stop) { + /* ts_current is entierely within the area to free */ + if (owner->copy_stack_to_heap_up_to(owner->stack_stop)) { + return -1; /* XXX */ + } + owner = owner->stack_prev; + } + if (owner != this) { + if (owner->copy_stack_to_heap_up_to(target_stop)) { + return -1; /* XXX */ + } + } + return 0; +} + +inline bool StackState::started() const noexcept +{ + return this->stack_stop != nullptr; +} + +inline bool StackState::main() const noexcept +{ + return this->stack_stop == (char*)-1; +} + +inline bool StackState::active() const noexcept +{ + return this->_stack_start != nullptr; +} + +inline void StackState::set_active() noexcept +{ + assert(this->_stack_start == nullptr); + this->_stack_start = (char*)1; +} + +inline void StackState::set_inactive() noexcept +{ + this->_stack_start = nullptr; + // XXX: What if we still have memory out there? + // That case is actually triggered by + // test_issue251_issue252_explicit_reference_not_collectable (greenlet.tests.test_leaks.TestLeaks) + // and + // test_issue251_issue252_need_to_collect_in_background + // (greenlet.tests.test_leaks.TestLeaks) + // + // Those objects never get deallocated, so the destructor never + // runs. + // It *seems* safe to clean up the memory here? + if (this->_stack_saved) { + this->free_stack_copy(); + } +} + +inline intptr_t StackState::stack_saved() const noexcept +{ + return this->_stack_saved; +} + +inline char* StackState::stack_start() const noexcept +{ + return this->_stack_start; +} + + +inline StackState StackState::make_main() noexcept +{ + StackState s; + s._stack_start = (char*)1; + s.stack_stop = (char*)-1; + return s; +} + +StackState::~StackState() +{ + if (this->_stack_saved != 0) { + this->free_stack_copy(); + } +} + +void StackState::copy_from_stack(void* vdest, const void* vsrc, size_t n) const +{ + char* dest = static_cast(vdest); + const char* src = static_cast(vsrc); + if (src + n <= this->_stack_start + || src >= this->_stack_start + this->_stack_saved + || this->_stack_saved == 0) { + // Nothing we're copying was spilled from the stack + memcpy(dest, src, n); + return; + } + + if (src < this->_stack_start) { + // Copy the part before the saved stack. + // We know src + n > _stack_start due to the test above. 
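The predicates defined above encode three booleans in two pointers: ``stack_stop`` doubles as the started flag, with ``(char*)-1`` as the ``make_main()`` sentinel, and ``_stack_start`` doubles as the active flag. A toy model of that encoding; the class and names here are illustrative only, not greenlet API:

```python
# Toy model of StackState's pointer/flag encoding; None stands in for nullptr.
class StackFlags:
    def __init__(self, stack_start=None, stack_stop=None):
        self.stack_start = stack_start   # set <=> active
        self.stack_stop = stack_stop     # set <=> started

    def started(self): return self.stack_stop is not None
    def main(self):    return self.stack_stop == -1   # make_main() sentinel
    def active(self):  return self.stack_start is not None

unstarted = StackFlags()
main_state = StackFlags(stack_start=1, stack_stop=-1)  # mirrors make_main()
assert not unstarted.started() and not unstarted.active()
assert main_state.started() and main_state.main() and main_state.active()
```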
+inline bool StackState::started() const noexcept
+{
+    return this->stack_stop != nullptr;
+}
+
+inline bool StackState::main() const noexcept
+{
+    return this->stack_stop == (char*)-1;
+}
+
+inline bool StackState::active() const noexcept
+{
+    return this->_stack_start != nullptr;
+}
+
+inline void StackState::set_active() noexcept
+{
+    assert(this->_stack_start == nullptr);
+    this->_stack_start = (char*)1;
+}
+
+inline void StackState::set_inactive() noexcept
+{
+    this->_stack_start = nullptr;
+    // XXX: What if we still have memory out there?
+    // That case is actually triggered by
+    // test_issue251_issue252_explicit_reference_not_collectable (greenlet.tests.test_leaks.TestLeaks)
+    // and
+    // test_issue251_issue252_need_to_collect_in_background
+    // (greenlet.tests.test_leaks.TestLeaks)
+    //
+    // Those objects never get deallocated, so the destructor never
+    // runs.
+    // It *seems* safe to clean up the memory here?
+    if (this->_stack_saved) {
+        this->free_stack_copy();
+    }
+}
+
+inline intptr_t StackState::stack_saved() const noexcept
+{
+    return this->_stack_saved;
+}
+
+inline char* StackState::stack_start() const noexcept
+{
+    return this->_stack_start;
+}
+
+
+inline StackState StackState::make_main() noexcept
+{
+    StackState s;
+    s._stack_start = (char*)1;
+    s.stack_stop = (char*)-1;
+    return s;
+}
+
+StackState::~StackState()
+{
+    if (this->_stack_saved != 0) {
+        this->free_stack_copy();
+    }
+}
+
+void StackState::copy_from_stack(void* vdest, const void* vsrc, size_t n) const
+{
+    char* dest = static_cast<char*>(vdest);
+    const char* src = static_cast<const char*>(vsrc);
+    if (src + n <= this->_stack_start
+        || src >= this->_stack_start + this->_stack_saved
+        || this->_stack_saved == 0) {
+        // Nothing we're copying was spilled from the stack
+        memcpy(dest, src, n);
+        return;
+    }
+
+    if (src < this->_stack_start) {
+        // Copy the part before the saved stack.
+        // We know src + n > _stack_start due to the test above.
+        const size_t nbefore = this->_stack_start - src;
+        memcpy(dest, src, nbefore);
+        dest += nbefore;
+        src += nbefore;
+        n -= nbefore;
+    }
+    // We know src >= _stack_start after the before-copy, and
+    // src < _stack_start + _stack_saved due to the first if condition
+    size_t nspilled = std::min<size_t>(n, this->_stack_start + this->_stack_saved - src);
+    memcpy(dest, this->stack_copy + (src - this->_stack_start), nspilled);
+    dest += nspilled;
+    src += nspilled;
+    n -= nspilled;
+    if (n > 0) {
+        // Copy the part after the saved stack
+        memcpy(dest, src, n);
+    }
+}
+
+}; // namespace greenlet
+
+#endif // GREENLET_STACK_STATE_CPP
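`copy_from_stack` above stitches a read together from up to three segments: live memory before the shadowed range, the heap copy for the shadowed middle, and live memory after it. A standalone sketch of the same three-way split, again with invented names rather than the library's types:

```cpp
// Toy reader matching the shape of copy_from_stack: a read that may
// span [before][spilled][after] pulls the middle from the heap copy.
#include <algorithm>
#include <cstring>

struct ShadowedRegion {
    const char* start = nullptr;  // first shadowed address
    const char* copy  = nullptr;  // heap copy of [start, start + saved)
    size_t      saved = 0;

    void read(char* dest, const char* src, size_t n) const {
        if (src < start) {                       // part before the shadow
            size_t before = std::min(n, (size_t)(start - src));
            std::memcpy(dest, src, before);
            dest += before; src += before; n -= before;
        }
        if (n && src < start + saved) {          // shadowed middle part
            size_t mid = std::min(n, (size_t)(start + saved - src));
            std::memcpy(dest, copy + (src - start), mid);
            dest += mid; src += mid; n -= mid;
        }
        if (n) {                                 // part after the shadow
            std::memcpy(dest, src, n);
        }
    }
};
```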
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TThreadState.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TThreadState.hpp
new file mode 100644
index 000000000..e4e6f6cba
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TThreadState.hpp
@@ -0,0 +1,497 @@
+#ifndef GREENLET_THREAD_STATE_HPP
+#define GREENLET_THREAD_STATE_HPP
+
+#include <ctime>
+#include <vector>
+
+#include "greenlet_internal.hpp"
+#include "greenlet_refs.hpp"
+#include "greenlet_thread_support.hpp"
+
+using greenlet::refs::BorrowedObject;
+using greenlet::refs::BorrowedGreenlet;
+using greenlet::refs::BorrowedMainGreenlet;
+using greenlet::refs::OwnedMainGreenlet;
+using greenlet::refs::OwnedObject;
+using greenlet::refs::OwnedGreenlet;
+using greenlet::refs::OwnedList;
+using greenlet::refs::PyErrFetchParam;
+using greenlet::refs::PyArgParseParam;
+using greenlet::refs::ImmortalString;
+using greenlet::refs::CreatedModule;
+using greenlet::refs::PyErrPieces;
+using greenlet::refs::NewReference;
+
+namespace greenlet {
+/**
+ * Thread-local state of greenlets.
+ *
+ * Each native thread will get exactly one of these objects,
+ * automatically accessed through the best available thread-local
+ * mechanism the compiler supports (``thread_local`` for C++11
+ * compilers or ``__thread``/``declspec(thread)`` for older GCC/clang
+ * or MSVC, respectively.)
+ *
+ * Previously, we kept thread-local state mostly in a bunch of
+ * ``static volatile`` variables in the main greenlet file. This had
+ * the problem of requiring extra checks, loops, and great care
+ * accessing these variables if we potentially invoked any Python code
+ * that could release the GIL, because the state could change out from
+ * under us. Making the variables thread-local solves this problem.
+ *
+ * When we detected that a greenlet API accessing the current greenlet
+ * was invoked from a different thread than the greenlet belonged to,
+ * we stored a reference to the greenlet in the Python thread
+ * dictionary for the thread the greenlet belonged to. This could lead
+ * to memory leaks if the thread then exited (because of a reference
+ * cycle, as greenlets referred to the thread dictionary, and deleting
+ * non-current greenlets leaked their frame plus perhaps arguments on
+ * the C stack). If a thread exited while still having running
+ * greenlet objects (perhaps that had just switched back to the main
+ * greenlet), and did not invoke one of the greenlet APIs *in that
+ * thread, immediately before it exited, without some other thread
+ * then being invoked*, such a leak was guaranteed.
+ *
+ * This can be partly solved by using compiler thread-local variables
+ * instead of the Python thread dictionary, thus avoiding a cycle.
+ *
+ * To fully solve this problem, we need a reliable way to know that a
+ * thread is done and we should clean up the main greenlet. On POSIX,
+ * we can use the destructor function of ``pthread_key_create``, but
+ * there's nothing similar on Windows; a C++11 thread local object
+ * reliably invokes its destructor when the thread it belongs to exits
+ * (non-C++11 compilers offer ``__thread`` or ``declspec(thread)`` to
+ * create thread-local variables, but they can't hold C++ objects that
+ * invoke destructors; the C++11 version is the most portable solution
+ * I found). When the thread exits, we can drop references and
+ * otherwise manipulate greenlets and frames that we know can no
+ * longer be switched to. For compilers that don't support C++11
+ * thread locals, we have a solution that uses the python thread
+ * dictionary, though it may not collect everything as promptly as
+ * other compilers do, if some other library is using the thread
+ * dictionary and has a cycle or extra reference.
+ *
+ * There are two small wrinkles. The first is that when the thread
+ * exits, it is too late to actually invoke Python APIs: the Python
+ * thread state is gone, and the GIL is released. To solve *this*
+ * problem, our destructor uses ``Py_AddPendingCall`` to transfer the
+ * destruction work to the main thread. (This is not an issue for the
+ * dictionary solution.)
+ *
+ * The second is that once the thread exits, the thread local object
+ * is invalid and we can't even access a pointer to it, so we can't
+ * pass it to ``Py_AddPendingCall``. This is handled by actually using
+ * a second object that's thread local (ThreadStateCreator) and having
+ * it dynamically allocate this object so it can live until the
+ * pending call runs.
+ */
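To make the C++11 mechanism the comment relies on concrete, here is a minimal, hypothetical example (not greenlet's code; greenlet additionally heap-allocates the real state so it can outlive the thread until a pending call runs):

```cpp
// A thread_local object's destructor runs reliably when its thread
// exits, giving a portable "this thread is done" hook.
#include <cstdio>
#include <thread>

struct ThreadExitHook {
    ~ThreadExitHook() { std::printf("thread exiting; queue cleanup here\n"); }
};

static thread_local ThreadExitHook g_hook;

int main()
{
    std::thread t([] { (void)&g_hook; });  // first touch constructs it in t
    t.join();                              // destructor already ran inside t
    return 0;
}
```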
+
+
+
+class ThreadState {
+private:
+    // As of commit 08ad1dd7012b101db953f492e0021fb08634afad
+    // this class needed 56 bytes in a Py_DEBUG build
+    // on 64-bit macOS 11.
+    // Adding the vector takes us up to 80 bytes.
+
+    /* Strong reference to the main greenlet */
+    OwnedMainGreenlet main_greenlet;
+
+    /* Strong reference to the current greenlet. */
+    OwnedGreenlet current_greenlet;
+
+    /* Strong reference to the trace function, if any. */
+    OwnedObject tracefunc;
+
+    typedef std::vector<PyGreenlet*, PythonAllocator<PyGreenlet*> > deleteme_t;
+    /* A vector of raw PyGreenlet pointers representing things that need
+       to be deleted when this thread is running. The vector owns the
+       references, but you need to manually INCREF/DECREF as you use
+       them. We don't use a vector of owned references because we
+       make a copy of this vector, and that would become O(n) as all the
+       refcounts are incremented in the copy.
+    */
+    deleteme_t deleteme;
+
+#ifdef GREENLET_NEEDS_EXCEPTION_STATE_SAVED
+    void* exception_state;
+#endif
+
+    static std::clock_t _clocks_used_doing_gc;
+    static ImmortalString get_referrers_name;
+    static PythonAllocator<ThreadState> allocator;
+
+    G_NO_COPIES_OF_CLS(ThreadState);
+
+
+    // Allocates a main greenlet for the thread state. If this fails,
+    // exits the process. Called only during constructing a ThreadState.
+    MainGreenlet* alloc_main()
+    {
+        PyGreenlet* gmain;
+
+        /* create the main greenlet for this thread */
+        gmain = reinterpret_cast<PyGreenlet*>(PyType_GenericAlloc(&PyGreenlet_Type, 0));
+        if (gmain == NULL) {
+            throw PyFatalError("alloc_main failed to alloc"); // exits the process
+        }
+
+        MainGreenlet* const main = new MainGreenlet(gmain, this);
+
+        assert(Py_REFCNT(gmain) == 1);
+        assert(gmain->pimpl == main);
+        return main;
+    }
+
+
+public:
+    static void* operator new(size_t UNUSED(count))
+    {
+        return ThreadState::allocator.allocate(1);
+    }
+
+    static void operator delete(void* ptr)
+    {
+        return ThreadState::allocator.deallocate(static_cast<ThreadState*>(ptr),
+                                                 1);
+    }
+
+    static void init()
+    {
+        ThreadState::get_referrers_name = "get_referrers";
+        ThreadState::_clocks_used_doing_gc = 0;
+    }
+
+    ThreadState()
+    {
+
+#ifdef GREENLET_NEEDS_EXCEPTION_STATE_SAVED
+        this->exception_state = slp_get_exception_state();
+#endif
+
+        // XXX: Potentially dangerous, exposing a not fully
+        // constructed object.
+        MainGreenlet* const main = this->alloc_main();
+        this->main_greenlet = OwnedMainGreenlet::consuming(
+            main->self()
+        );
+        assert(this->main_greenlet);
+        this->current_greenlet = main->self();
+        // The main greenlet starts with one ref: the returned one. We
+        // then copied it to the current greenlet.
+        assert(this->main_greenlet.REFCNT() == 2);
+    }
+
+    inline void restore_exception_state()
+    {
+#ifdef GREENLET_NEEDS_EXCEPTION_STATE_SAVED
+        // It's probably important this be inlined and only call C
+        // functions to avoid adding an SEH frame.
+        slp_set_exception_state(this->exception_state);
+#endif
+    }
+
+    inline bool has_main_greenlet() const noexcept
+    {
+        return bool(this->main_greenlet);
+    }
+
+    // Called from the ThreadStateCreator when we're in non-standard
+    // threading mode. In that case, there is an object in the Python
+    // thread state dictionary that points to us. The main greenlet
+    // also traverses into us, in which case it's crucial not to
+    // traverse back into the main greenlet.
+    int tp_traverse(visitproc visit, void* arg, bool traverse_main=true)
+    {
+        if (traverse_main) {
+            Py_VISIT(main_greenlet.borrow_o());
+        }
+        if (traverse_main || current_greenlet != main_greenlet) {
+            Py_VISIT(current_greenlet.borrow_o());
+        }
+        Py_VISIT(tracefunc.borrow());
+        return 0;
+    }
+
+    inline BorrowedMainGreenlet borrow_main_greenlet() const noexcept
+    {
+        assert(this->main_greenlet);
+        assert(this->main_greenlet.REFCNT() >= 2);
+        return this->main_greenlet;
+    };
+
+    inline OwnedMainGreenlet get_main_greenlet() const noexcept
+    {
+        return this->main_greenlet;
+    }
+
+    /**
+     * In addition to returning a new reference to the current
+     * greenlet, this performs any maintenance needed.
+     */
+    inline OwnedGreenlet get_current()
+    {
+        /* green_dealloc() cannot delete greenlets from other threads, so
+           it stores them in the thread dict; delete them now. */
+        this->clear_deleteme_list();
+        //assert(this->current_greenlet->main_greenlet == this->main_greenlet);
+        //assert(this->main_greenlet->main_greenlet == this->main_greenlet);
+        return this->current_greenlet;
+    }
+
+    /**
+     * As for non-const get_current();
+     */
+    inline BorrowedGreenlet borrow_current()
+    {
+        this->clear_deleteme_list();
+        return this->current_greenlet;
+    }
+
+    /**
+     * Does no maintenance.
+ */ + inline OwnedGreenlet get_current() const + { + return this->current_greenlet; + } + + template + inline bool is_current(const refs::PyObjectPointer& obj) const + { + return this->current_greenlet.borrow_o() == obj.borrow_o(); + } + + inline void set_current(const OwnedGreenlet& target) + { + this->current_greenlet = target; + } + +private: + /** + * Deref and remove the greenlets from the deleteme list. Must be + * holding the GIL. + * + * If *murder* is true, then we must be called from a different + * thread than the one that these greenlets were running in. + * In that case, if the greenlet was actually running, we destroy + * the frame reference and otherwise make it appear dead before + * proceeding; otherwise, we would try (and fail) to raise an + * exception in it and wind up right back in this list. + */ + inline void clear_deleteme_list(const bool murder=false) + { + if (!this->deleteme.empty()) { + // It's possible we could add items to this list while + // running Python code if there's a thread switch, so we + // need to defensively copy it before that can happen. + deleteme_t copy = this->deleteme; + this->deleteme.clear(); // in case things come back on the list + for(deleteme_t::iterator it = copy.begin(), end = copy.end(); + it != end; + ++it ) { + PyGreenlet* to_del = *it; + if (murder) { + // Force each greenlet to appear dead; we can't raise an + // exception into it anymore anyway. + to_del->pimpl->murder_in_place(); + } + + // The only reference to these greenlets should be in + // this list, decreffing them should let them be + // deleted again, triggering calls to green_dealloc() + // in the correct thread (if we're not murdering). + // This may run arbitrary Python code and switch + // threads or greenlets! + Py_DECREF(to_del); + if (PyErr_Occurred()) { + PyErr_WriteUnraisable(nullptr); + PyErr_Clear(); + } + } + } + } + +public: + + /** + * Returns a new reference, or a false object. + */ + inline OwnedObject get_tracefunc() const + { + return tracefunc; + }; + + + inline void set_tracefunc(BorrowedObject tracefunc) + { + assert(tracefunc); + if (tracefunc == BorrowedObject(Py_None)) { + this->tracefunc.CLEAR(); + } + else { + this->tracefunc = tracefunc; + } + } + + /** + * Given a reference to a greenlet that some other thread + * attempted to delete (has a refcount of 0) store it for later + * deletion when the thread this state belongs to is current. + */ + inline void delete_when_thread_running(PyGreenlet* to_del) + { + Py_INCREF(to_del); + this->deleteme.push_back(to_del); + } + + /** + * Set to std::clock_t(-1) to disable. + */ + inline static std::clock_t& clocks_used_doing_gc() + { + return ThreadState::_clocks_used_doing_gc; + } + + ~ThreadState() + { + if (!PyInterpreterState_Head()) { + // We shouldn't get here (our callers protect us) + // but if we do, all we can do is bail early. + return; + } + + // We should not have an "origin" greenlet; that only exists + // for the temporary time during a switch, which should not + // be in progress as the thread dies. + //assert(!this->switching_state.origin); + + this->tracefunc.CLEAR(); + + // Forcibly GC as much as we can. + this->clear_deleteme_list(true); + + // The pending call did this. + assert(this->main_greenlet->thread_state() == nullptr); + + // If the main greenlet is the current greenlet, + // then we "fell off the end" and the thread died. 
+ // It's possible that there is some other greenlet that + // switched to us, leaving a reference to the main greenlet + // on the stack, somewhere uncollectible. Try to detect that. + if (this->current_greenlet == this->main_greenlet && this->current_greenlet) { + assert(this->current_greenlet->is_currently_running_in_some_thread()); + // Drop one reference we hold. + this->current_greenlet.CLEAR(); + assert(!this->current_greenlet); + // Only our reference to the main greenlet should be left, + // But hold onto the pointer in case we need to do extra cleanup. + PyGreenlet* old_main_greenlet = this->main_greenlet.borrow(); + Py_ssize_t cnt = this->main_greenlet.REFCNT(); + this->main_greenlet.CLEAR(); + if (ThreadState::_clocks_used_doing_gc != std::clock_t(-1) + && cnt == 2 && Py_REFCNT(old_main_greenlet) == 1) { + // Highly likely that the reference is somewhere on + // the stack, not reachable by GC. Verify. + // XXX: This is O(n) in the total number of objects. + // TODO: Add a way to disable this at runtime, and + // another way to report on it. + std::clock_t begin = std::clock(); + NewReference gc(PyImport_ImportModule("gc")); + if (gc) { + OwnedObject get_referrers = gc.PyRequireAttr(ThreadState::get_referrers_name); + OwnedList refs(get_referrers.PyCall(old_main_greenlet)); + if (refs && refs.empty()) { + assert(refs.REFCNT() == 1); + // We found nothing! So we left a dangling + // reference: Probably the last thing some + // other greenlet did was call + // 'getcurrent().parent.switch()' to switch + // back to us. Clean it up. This will be the + // case on CPython 3.7 and newer, as they use + // an internal calling conversion that avoids + // creating method objects and storing them on + // the stack. + Py_DECREF(old_main_greenlet); + } + else if (refs + && refs.size() == 1 + && PyCFunction_Check(refs.at(0)) + && Py_REFCNT(refs.at(0)) == 2) { + assert(refs.REFCNT() == 1); + // Ok, we found a C method that refers to the + // main greenlet, and its only referenced + // twice, once in the list we just created, + // once from...somewhere else. If we can't + // find where else, then this is a leak. + // This happens in older versions of CPython + // that create a bound method object somewhere + // on the stack that we'll never get back to. + if (PyCFunction_GetFunction(refs.at(0).borrow()) == (PyCFunction)green_switch) { + BorrowedObject function_w = refs.at(0); + refs.clear(); // destroy the reference + // from the list. + // back to one reference. Can *it* be + // found? + assert(function_w.REFCNT() == 1); + refs = get_referrers.PyCall(function_w); + if (refs && refs.empty()) { + // Nope, it can't be found so it won't + // ever be GC'd. Drop it. + Py_CLEAR(function_w); + } + } + } + std::clock_t end = std::clock(); + ThreadState::_clocks_used_doing_gc += (end - begin); + } + } + } + + // We need to make sure this greenlet appears to be dead, + // because otherwise deallocing it would fail to raise an + // exception in it (the thread is dead) and put it back in our + // deleteme list. + if (this->current_greenlet) { + this->current_greenlet->murder_in_place(); + this->current_greenlet.CLEAR(); + } + + if (this->main_greenlet) { + // Couldn't have been the main greenlet that was running + // when the thread exited (because we already cleared this + // pointer if it was). This shouldn't be possible? + + // If the main greenlet was current when the thread died (it + // should be, right?) 
then we cleared its self pointer above + // when we cleared the current greenlet's main greenlet pointer. + // assert(this->main_greenlet->main_greenlet == this->main_greenlet + // || !this->main_greenlet->main_greenlet); + // // self reference, probably gone + // this->main_greenlet->main_greenlet.CLEAR(); + + // This will actually go away when the ivar is destructed. + this->main_greenlet.CLEAR(); + } + + if (PyErr_Occurred()) { + PyErr_WriteUnraisable(NULL); + PyErr_Clear(); + } + + } + +}; + +ImmortalString ThreadState::get_referrers_name(nullptr); +PythonAllocator ThreadState::allocator; +std::clock_t ThreadState::_clocks_used_doing_gc(0); + + + + + +}; // namespace greenlet + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TThreadStateCreator.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TThreadStateCreator.hpp new file mode 100644 index 000000000..2ec7ab55b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TThreadStateCreator.hpp @@ -0,0 +1,102 @@ +#ifndef GREENLET_THREAD_STATE_CREATOR_HPP +#define GREENLET_THREAD_STATE_CREATOR_HPP + +#include +#include + +#include "greenlet_internal.hpp" +#include "greenlet_refs.hpp" +#include "greenlet_thread_support.hpp" + +#include "TThreadState.hpp" + +namespace greenlet { + + +typedef void (*ThreadStateDestructor)(ThreadState* const); + +template +class ThreadStateCreator +{ +private: + // Initialized to 1, and, if still 1, created on access. + // Set to 0 on destruction. + ThreadState* _state; + G_NO_COPIES_OF_CLS(ThreadStateCreator); + + inline bool has_initialized_state() const noexcept + { + return this->_state != (ThreadState*)1; + } + + inline bool has_state() const noexcept + { + return this->has_initialized_state() && this->_state != nullptr; + } + +public: + + // Only one of these, auto created per thread. + // Constructing the state constructs the MainGreenlet. + ThreadStateCreator() : + _state((ThreadState*)1) + { + } + + ~ThreadStateCreator() + { + if (this->has_state()) { + Destructor(this->_state); + } + + this->_state = nullptr; + } + + inline ThreadState& state() + { + // The main greenlet will own this pointer when it is created, + // which will be right after this. The plan is to give every + // greenlet a pointer to the main greenlet for the thread it + // runs in; if we are doing something cross-thread, we need to + // access the pointer from the main greenlet. Deleting the + // thread, and hence the thread-local storage, will delete the + // state pointer in the main greenlet. + if (!this->has_initialized_state()) { + // XXX: Assuming allocation never fails + this->_state = new ThreadState; + // For non-standard threading, we need to store an object + // in the Python thread state dictionary so that it can be + // DECREF'd when the thread ends (ideally; the dict could + // last longer) and clean this object up. 
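The sentinel scheme this class documents ((ThreadState*)1 for "not created yet", a real pointer once created, and nullptr after destruction) can be modeled in a few lines. All names below are invented; this is only an illustration of the pattern, not the library's code:

```cpp
// Lazy creation with a sentinel pointer: 1 = not yet created,
// real pointer = live, nullptr = destroyed (access is an error).
#include <stdexcept>

struct Lazy {
    int* p = (int*)1;                        // sentinel: not yet created

    int& get() {
        if (p == (int*)1) {
            p = new int(42);                 // first access creates it
        }
        if (!p) {
            throw std::runtime_error("accessed after destruction");
        }
        return *p;
    }

    ~Lazy() {
        if (p != (int*)1) {
            delete p;                        // delete only if ever created
        }
        p = nullptr;
    }
};
```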
+ } + if (!this->_state) { + throw std::runtime_error("Accessing state after destruction."); + } + return *this->_state; + } + + operator ThreadState&() + { + return this->state(); + } + + operator ThreadState*() + { + return &this->state(); + } + + inline int tp_traverse(visitproc visit, void* arg) + { + if (this->has_state()) { + return this->_state->tp_traverse(visit, arg); + } + return 0; + } + +}; + + + +}; // namespace greenlet + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TThreadStateDestroy.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TThreadStateDestroy.cpp new file mode 100644 index 000000000..449b78871 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TThreadStateDestroy.cpp @@ -0,0 +1,217 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ +/** + * Implementation of the ThreadState destructors. + * + * Format with: + * clang-format -i --style=file src/greenlet/greenlet.c + * + * + * Fix missing braces with: + * clang-tidy src/greenlet/greenlet.c -fix -checks="readability-braces-around-statements" +*/ +#ifndef T_THREADSTATE_DESTROY +#define T_THREADSTATE_DESTROY + +#include "TGreenlet.hpp" + +#include "greenlet_thread_support.hpp" +#include "greenlet_compiler_compat.hpp" +#include "TGreenletGlobals.cpp" +#include "TThreadState.hpp" +#include "TThreadStateCreator.hpp" + +namespace greenlet { + +extern "C" { + +struct ThreadState_DestroyNoGIL +{ + /** + This function uses the same lock that the PendingCallback does + */ + static void + MarkGreenletDeadAndQueueCleanup(ThreadState* const state) + { +#if GREENLET_BROKEN_THREAD_LOCAL_CLEANUP_JUST_LEAK + return; +#endif + // We are *NOT* holding the GIL. Our thread is in the middle + // of its death throes and the Python thread state is already + // gone so we can't use most Python APIs. One that is safe is + // ``Py_AddPendingCall``, unless the interpreter itself has + // been torn down. There is a limited number of calls that can + // be queued: 32 (NPENDINGCALLS) in CPython 3.10, so we + // coalesce these calls using our own queue. + + if (!MarkGreenletDeadIfNeeded(state)) { + // No state, or no greenlet + return; + } + + // XXX: Because we don't have the GIL, this is a race condition. + if (!PyInterpreterState_Head()) { + // We have to leak the thread state, if the + // interpreter has shut down when we're getting + // deallocated, we can't run the cleanup code that + // deleting it would imply. + return; + } + + AddToCleanupQueue(state); + + } + +private: + + // If the state has an allocated main greenlet: + // - mark the greenlet as dead by disassociating it from the state; + // - return 1 + // Otherwise, return 0. + static bool + MarkGreenletDeadIfNeeded(ThreadState* const state) + { + if (state && state->has_main_greenlet()) { + // mark the thread as dead ASAP. + // this is racy! If we try to throw or switch to a + // greenlet from this thread from some other thread before + // we clear the state pointer, it won't realize the state + // is dead which can crash the process. 
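Before the queueing code that follows, it may help to see the coalescing idea in isolation: many dying threads push onto one locked queue, and only the push that takes the queue from empty to non-empty schedules the single drain callback. A simplified model with invented names (not the library's types):

```cpp
// Coalescing queue: push() reports whether the caller should schedule
// the one-and-only drain callback (only on the empty -> non-empty edge).
#include <mutex>
#include <vector>

struct CleanupQueue {
    std::mutex lock;
    std::vector<void*> items;

    bool push(void* item) {
        std::lock_guard<std::mutex> guard(lock);
        items.push_back(item);
        return items.size() == 1;   // first item => schedule exactly once
    }
};
```

This keeps the number of pending calls bounded regardless of how many threads die, which matters given CPython's small fixed pending-call capacity mentioned above.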
+ PyGreenlet* p(state->borrow_main_greenlet().borrow()); + assert(p->pimpl->thread_state() == state || p->pimpl->thread_state() == nullptr); + dynamic_cast(p->pimpl)->thread_state(nullptr); + return true; + } + return false; + } + + static void + AddToCleanupQueue(ThreadState* const state) + { + assert(state && state->has_main_greenlet()); + + // NOTE: Because we're not holding the GIL here, some other + // Python thread could run and call ``os.fork()``, which would + // be bad if that happened while we are holding the cleanup + // lock (it wouldn't function in the child process). + // Make a best effort to try to keep the duration we hold the + // lock short. + // TODO: On platforms that support it, use ``pthread_atfork`` to + // drop this lock. + LockGuard cleanup_lock(*mod_globs->thread_states_to_destroy_lock); + + mod_globs->queue_to_destroy(state); + if (mod_globs->thread_states_to_destroy.size() == 1) { + // We added the first item to the queue. We need to schedule + // the cleanup. + + // A size greater than 1 means that we have already added the pending call, + // and in fact, it may be executing now. + // If it is executing, our lock makes sure that it will see the item we just added + // to the queue on its next iteration (after we release the lock) + // + // A size of 1 means there is no pending call, OR the pending call is + // currently executing, has dropped the lock, and is deleting the last item + // from the queue; its next iteration will go ahead and delete the item we just added. + // And the pending call we schedule here will have no work to do. + int result = AddPendingCall( + PendingCallback_DestroyQueueWithGIL, + nullptr); + if (result < 0) { + // Hmm, what can we do here? + fprintf(stderr, + "greenlet: WARNING: failed in call to Py_AddPendingCall; " + "expect a memory leak.\n"); + } + } + } + + static int + PendingCallback_DestroyQueueWithGIL(void* UNUSED(arg)) + { + // We're holding the GIL here, so no Python code should be able to + // run to call ``os.fork()``. + while (1) { + ThreadState* to_destroy; + { + LockGuard cleanup_lock(*mod_globs->thread_states_to_destroy_lock); + if (mod_globs->thread_states_to_destroy.empty()) { + break; + } + to_destroy = mod_globs->take_next_to_destroy(); + } + assert(to_destroy); + assert(to_destroy->has_main_greenlet()); + // Drop the lock while we do the actual deletion. + // This allows other calls to MarkGreenletDeadAndQueueCleanup + // to enter and add to our queue. + DestroyOneWithGIL(to_destroy); + } + return 0; + } + + static void + DestroyOneWithGIL(const ThreadState* const state) + { + // Holding the GIL. + // Passed a non-shared pointer to the actual thread state. + // state -> main greenlet + assert(state->has_main_greenlet()); + PyGreenlet* main(state->borrow_main_greenlet()); + // When we need to do cross-thread operations, we check this. + // A NULL value means the thread died some time ago. + // We do this here, rather than in a Python dealloc function + // for the greenlet, in case there's still a reference out + // there. + dynamic_cast(main->pimpl)->thread_state(nullptr); + + delete state; // Deleting this runs the destructor, DECREFs the main greenlet. + } + + + static int AddPendingCall(int (*func)(void*), void* arg) + { + // If the interpreter is in the middle of finalizing, we can't add a + // pending call. Trying to do so will end up in a SIGSEGV, as + // Py_AddPendingCall will not be able to get the interpreter and will + // try to dereference a NULL pointer. 
It's possible this can still + // segfault if we happen to get context switched, and maybe we should + // just always implement our own AddPendingCall, but I'd like to see if + // this works first +#if GREENLET_PY313 + if (Py_IsFinalizing()) { +#else + if (_Py_IsFinalizing()) { +#endif +#ifdef GREENLET_DEBUG + // No need to log in the general case. Yes, we'll leak, + // but we're shutting down so it should be ok. + fprintf(stderr, + "greenlet: WARNING: Interpreter is finalizing. Ignoring " + "call to Py_AddPendingCall; \n"); +#endif + return 0; + } + return Py_AddPendingCall(func, arg); + } + + + + + +}; +}; + +}; // namespace greenlet + +// The intent when GET_THREAD_STATE() is needed multiple times in a +// function is to take a reference to its return value in a local +// variable, to avoid the thread-local indirection. On some platforms +// (macOS), accessing a thread-local involves a function call (plus an +// initial function call in each function that uses a thread local); +// in contrast, static volatile variables are at some pre-computed +// offset. +typedef greenlet::ThreadStateCreator ThreadStateCreator; +static thread_local ThreadStateCreator g_thread_state_global; +#define GET_THREAD_STATE() g_thread_state_global + +#endif //T_THREADSTATE_DESTROY diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TUserGreenlet.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TUserGreenlet.cpp new file mode 100644 index 000000000..73a81330d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/TUserGreenlet.cpp @@ -0,0 +1,662 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ +/** + * Implementation of greenlet::UserGreenlet. + * + * Format with: + * clang-format -i --style=file src/greenlet/greenlet.c + * + * + * Fix missing braces with: + * clang-tidy src/greenlet/greenlet.c -fix -checks="readability-braces-around-statements" +*/ +#ifndef T_USER_GREENLET_CPP +#define T_USER_GREENLET_CPP + +#include "greenlet_internal.hpp" +#include "TGreenlet.hpp" + +#include "TThreadStateDestroy.cpp" + + +namespace greenlet { +using greenlet::refs::BorrowedMainGreenlet; +greenlet::PythonAllocator UserGreenlet::allocator; + +void* UserGreenlet::operator new(size_t UNUSED(count)) +{ + return allocator.allocate(1); +} + + +void UserGreenlet::operator delete(void* ptr) +{ + return allocator.deallocate(static_cast(ptr), + 1); +} + + +UserGreenlet::UserGreenlet(PyGreenlet* p, BorrowedGreenlet the_parent) + : Greenlet(p), _parent(the_parent) +{ +} + +UserGreenlet::~UserGreenlet() +{ + // Python 3.11: If we don't clear out the raw frame datastack + // when deleting an unfinished greenlet, + // TestLeaks.test_untracked_memory_doesnt_increase_unfinished_thread_dealloc_in_main fails. + this->python_state.did_finish(nullptr); + this->tp_clear(); +} + + +const BorrowedMainGreenlet +UserGreenlet::main_greenlet() const +{ + return this->_main_greenlet; +} + + +BorrowedMainGreenlet +UserGreenlet::find_main_greenlet_in_lineage() const +{ + if (this->started()) { + assert(this->_main_greenlet); + return BorrowedMainGreenlet(this->_main_greenlet); + } + + if (!this->_parent) { + /* garbage collected greenlet in chain */ + // XXX: WHAT? + return BorrowedMainGreenlet(nullptr); + } + + return this->_parent->find_main_greenlet_in_lineage(); +} + + +/** + * CAUTION: This will allocate memory and may trigger garbage + * collection and arbitrary Python code. 
+ */ +OwnedObject +UserGreenlet::throw_GreenletExit_during_dealloc(const ThreadState& current_thread_state) +{ + /* The dying greenlet cannot be a parent of ts_current + because the 'parent' field chain would hold a + reference */ + UserGreenlet::ParentIsCurrentGuard with_current_parent(this, current_thread_state); + + // We don't care about the return value, only whether an + // exception happened. Whether or not an exception happens, + // we need to restore the parent in case the greenlet gets + // resurrected. + return Greenlet::throw_GreenletExit_during_dealloc(current_thread_state); +} + +ThreadState* +UserGreenlet::thread_state() const noexcept +{ + // TODO: maybe make this throw, if the thread state isn't there? + // if (!this->main_greenlet) { + // throw std::runtime_error("No thread state"); // TODO: Better exception + // } + if (!this->_main_greenlet) { + return nullptr; + } + return this->_main_greenlet->thread_state(); +} + + +bool +UserGreenlet::was_running_in_dead_thread() const noexcept +{ + return this->_main_greenlet && !this->thread_state(); +} + +OwnedObject +UserGreenlet::g_switch() +{ + assert(this->args() || PyErr_Occurred()); + + try { + this->check_switch_allowed(); + } + catch (const PyErrOccurred&) { + this->release_args(); + throw; + } + + // Switching greenlets used to attempt to clean out ones that need + // deleted *if* we detected a thread switch. Should it still do + // that? + // An issue is that if we delete a greenlet from another thread, + // it gets queued to this thread, and ``kill_greenlet()`` switches + // back into the greenlet + + /* find the real target by ignoring dead greenlets, + and if necessary starting a greenlet. */ + switchstack_result_t err; + Greenlet* target = this; + // TODO: probably cleaner to handle the case where we do + // switch to ourself separately from the other cases. + // This can probably even further be simplified if we keep + // track of the switching_state we're going for and just call + // into g_switch() if it's not ourself. The main problem with that + // is that we would be using more stack space. + bool target_was_me = true; + bool was_initial_stub = false; + while (target) { + if (target->active()) { + if (!target_was_me) { + target->args() <<= this->args(); + assert(!this->args()); + } + err = target->g_switchstack(); + break; + } + if (!target->started()) { + // We never encounter a main greenlet that's not started. + assert(!target->main()); + UserGreenlet* real_target = static_cast(target); + assert(real_target); + void* dummymarker; + was_initial_stub = true; + if (!target_was_me) { + target->args() <<= this->args(); + assert(!this->args()); + } + try { + // This can only throw back to us while we're + // still in this greenlet. Once the new greenlet + // is bootstrapped, it has its own exception state. + err = real_target->g_initialstub(&dummymarker); + } + catch (const PyErrOccurred&) { + this->release_args(); + throw; + } + catch (const GreenletStartedWhileInPython&) { + // The greenlet was started sometime before this + // greenlet actually switched to it, i.e., + // "concurrent" calls to switch() or throw(). + // We need to retry the switch. + // Note that the current greenlet has been reset + // to this one (or we wouldn't be running!) + continue; + } + break; + } + + target = target->parent(); + target_was_me = false; + } + // The ``this`` pointer and all other stack or register based + // variables are invalid now, at least where things succeed + // above. + // But this one, probably not so much? 
It's not clear if it's + // safe to throw an exception at this point. + + if (err.status < 0) { + // If we get here, either g_initialstub() + // failed, or g_switchstack() failed. Either one of those + // cases SHOULD leave us in the original greenlet with a valid + // stack. + return this->on_switchstack_or_initialstub_failure(target, err, target_was_me, was_initial_stub); + } + + // err.the_new_current_greenlet would be the same as ``target``, + // if target wasn't probably corrupt. + return err.the_new_current_greenlet->g_switch_finish(err); +} + + + +Greenlet::switchstack_result_t +UserGreenlet::g_initialstub(void* mark) +{ + OwnedObject run; + + // We need to grab a reference to the current switch arguments + // in case we're entered concurrently during the call to + // GetAttr() and have to try again. + // We'll restore them when we return in that case. + // Scope them tightly to avoid ref leaks. + { + SwitchingArgs args(this->args()); + + /* save exception in case getattr clears it */ + PyErrPieces saved; + + /* + self.run is the object to call in the new greenlet. + This could run arbitrary python code and switch greenlets! + */ + run = this->self().PyRequireAttr(mod_globs->str_run); + /* restore saved exception */ + saved.PyErrRestore(); + + + /* recheck that it's safe to switch in case greenlet reparented anywhere above */ + this->check_switch_allowed(); + + /* by the time we got here another start could happen elsewhere, + * that means it should now be a regular switch. + * This can happen if the Python code is a subclass that implements + * __getattribute__ or __getattr__, or makes ``run`` a descriptor; + * all of those can run arbitrary code that switches back into + * this greenlet. + */ + if (this->stack_state.started()) { + // the successful switch cleared these out, we need to + // restore our version. They will be copied on up to the + // next target. + assert(!this->args()); + this->args() <<= args; + throw GreenletStartedWhileInPython(); + } + } + + // Sweet, if we got here, we have the go-ahead and will switch + // greenlets. + // Nothing we do from here on out should allow for a thread or + // greenlet switch: No arbitrary calls to Python, including + // decref'ing + +#if GREENLET_USE_CFRAME + /* OK, we need it, we're about to switch greenlets, save the state. */ + /* + See green_new(). This is a stack-allocated variable used + while *self* is in PyObject_Call(). + We want to defer copying the state info until we're sure + we need it and are in a stable place to do so. + */ + _PyCFrame trace_info; + + this->python_state.set_new_cframe(trace_info); +#endif + /* start the greenlet */ + ThreadState& thread_state = GET_THREAD_STATE().state(); + this->stack_state = StackState(mark, + thread_state.borrow_current()->stack_state); + this->python_state.set_initial_state(PyThreadState_GET()); + this->exception_state.clear(); + this->_main_greenlet = thread_state.get_main_greenlet(); + + /* perform the initial switch */ + switchstack_result_t err = this->g_switchstack(); + /* returns twice! + The 1st time with ``err == 1``: we are in the new greenlet. + This one owns a greenlet that used to be current. + The 2nd time with ``err <= 0``: back in the caller's + greenlet; this happens if the child finishes or switches + explicitly to us. Either way, the ``err`` variable is + created twice at the same memory location, but possibly + having different ``origin`` values. Note that it's not + constructed for the second time until the switch actually happens. 
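The "returns twice" behavior this comment describes has the same shape as setjmp/longjmp: one call site, two returns, distinguished by a status value. A tiny, self-contained illustration (not greenlet's switching code, which swaps whole C stacks rather than a register context):

```cpp
// One call site, two returns: setjmp() yields 0 the first time and the
// longjmp value when control comes back, much like the err.status above.
#include <csetjmp>
#include <cstdio>

static std::jmp_buf ctx;

int main()
{
    if (setjmp(ctx) == 0) {
        std::printf("first return\n");   // analogous to entering the child
        std::longjmp(ctx, 1);            // "switch back"
    }
    std::printf("second return\n");      // analogous to the parent resuming
    return 0;
}
```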
+ */ + if (err.status == 1) { + // In the new greenlet. + + // This never returns! Calling inner_bootstrap steals + // the contents of our run object within this stack frame, so + // it is not valid to do anything with it. + try { + this->inner_bootstrap(err.origin_greenlet.relinquish_ownership(), + run.relinquish_ownership()); + } + // Getting a C++ exception here isn't good. It's probably a + // bug in the underlying greenlet, meaning it's probably a + // C++ extension. We're going to abort anyway, but try to + // display some nice information *if* possible. Some obscure + // platforms don't properly support this (old 32-bit Arm, see see + // https://github.com/python-greenlet/greenlet/issues/385); that's not + // great, but should usually be OK because, as mentioned above, we're + // terminating anyway. + // + // The catching is tested by + // ``test_cpp.CPPTests.test_unhandled_exception_in_greenlet_aborts``. + // + // PyErrOccurred can theoretically be thrown by + // inner_bootstrap() -> g_switch_finish(), but that should + // never make it back to here. It is a std::exception and + // would be caught if it is. + catch (const std::exception& e) { + std::string base = "greenlet: Unhandled C++ exception: "; + base += e.what(); + Py_FatalError(base.c_str()); + } + catch (...) { + // Some compilers/runtimes use exceptions internally. + // It appears that GCC on Linux with libstdc++ throws an + // exception internally at process shutdown time to unwind + // stacks and clean up resources. Depending on exactly + // where we are when the process exits, that could result + // in an unknown exception getting here. If we + // Py_FatalError() or abort() here, we interfere with + // orderly process shutdown. Throwing the exception on up + // is the right thing to do. + // + // gevent's ``examples/dns_mass_resolve.py`` demonstrates this. +#ifndef NDEBUG + fprintf(stderr, + "greenlet: inner_bootstrap threw unknown exception; " + "is the process terminating?\n"); +#endif + throw; + } + Py_FatalError("greenlet: inner_bootstrap returned with no exception.\n"); + } + + + // In contrast, notice that we're keeping the origin greenlet + // around as an owned reference; we need it to call the trace + // function for the switch back into the parent. It was only + // captured at the time the switch actually happened, though, + // so we haven't been keeping an extra reference around this + // whole time. + + /* back in the parent */ + if (err.status < 0) { + /* start failed badly, restore greenlet state */ + this->stack_state = StackState(); + this->_main_greenlet.CLEAR(); + // CAUTION: This may run arbitrary Python code. + run.CLEAR(); // inner_bootstrap didn't run, we own the reference. + } + + // In the success case, the spawned code (inner_bootstrap) will + // take care of decrefing this, so we relinquish ownership so as + // to not double-decref. + + run.relinquish_ownership(); + + return err; +} + + +void +UserGreenlet::inner_bootstrap(PyGreenlet* origin_greenlet, PyObject* run) +{ + // The arguments here would be another great place for move. + // As it is, we take them as a reference so that when we clear + // them we clear what's on the stack above us. Do that NOW, and + // without using a C++ RAII object, + // so there's no way that exiting the parent frame can clear it, + // or we clear it unexpectedly. This arises in the context of the + // interpreter shutting down. 
See https://github.com/python-greenlet/greenlet/issues/325 + //PyObject* run = _run.relinquish_ownership(); + + /* in the new greenlet */ + assert(this->thread_state()->borrow_current() == BorrowedGreenlet(this->_self)); + // C++ exceptions cannot propagate to the parent greenlet from + // here. (TODO: Do we need a catch(...) clause, perhaps on the + // function itself? ALl we could do is terminate the program.) + // NOTE: On 32-bit Windows, the call chain is extremely + // important here in ways that are subtle, having to do with + // the depth of the SEH list. The call to restore it MUST NOT + // add a new SEH handler to the list, or we'll restore it to + // the wrong thing. + this->thread_state()->restore_exception_state(); + /* stack variables from above are no good and also will not unwind! */ + // EXCEPT: That can't be true, we access run, among others, here. + + this->stack_state.set_active(); /* running */ + + // We're about to possibly run Python code again, which + // could switch back/away to/from us, so we need to grab the + // arguments locally. + SwitchingArgs args; + args <<= this->args(); + assert(!this->args()); + + // XXX: We could clear this much earlier, right? + // Or would that introduce the possibility of running Python + // code when we don't want to? + // CAUTION: This may run arbitrary Python code. + this->_run_callable.CLEAR(); + + + // The first switch we need to manually call the trace + // function here instead of in g_switch_finish, because we + // never return there. + if (OwnedObject tracefunc = this->thread_state()->get_tracefunc()) { + OwnedGreenlet trace_origin; + trace_origin = origin_greenlet; + try { + g_calltrace(tracefunc, + args ? mod_globs->event_switch : mod_globs->event_throw, + trace_origin, + this->_self); + } + catch (const PyErrOccurred&) { + /* Turn trace errors into switch throws */ + args.CLEAR(); + } + } + + // We no longer need the origin, it was only here for + // tracing. + // We may never actually exit this stack frame so we need + // to explicitly clear it. + // This could run Python code and switch. + Py_CLEAR(origin_greenlet); + + OwnedObject result; + if (!args) { + /* pending exception */ + result = NULL; + } + else { + /* call g.run(*args, **kwargs) */ + // This could result in further switches + try { + //result = run.PyCall(args.args(), args.kwargs()); + // CAUTION: Just invoking this, before the function even + // runs, may cause memory allocations, which may trigger + // GC, which may run arbitrary Python code. + result = OwnedObject::consuming(PyObject_Call(run, args.args().borrow(), args.kwargs().borrow())); + } + catch (...) { + // Unhandled C++ exception! + + // If we declare ourselves as noexcept, if we don't catch + // this here, most platforms will just abort() the + // process. But on 64-bit Windows with older versions of + // the C runtime, this can actually corrupt memory and + // just return. We see this when compiling with the + // Windows 7.0 SDK targeting Windows Server 2008, but not + // when using the Appveyor Visual Studio 2019 image. So + // this currently only affects Python 2.7 on Windows 64. + // That is, the tests pass and the runtime aborts + // everywhere else. + // + // However, if we catch it and try to continue with a + // Python error, then all Windows 64 bit platforms corrupt + // memory. So all we can do is manually abort, hopefully + // with a good error message. 
(Note that the above was + // tested WITHOUT the `/EHr` switch being used at compile + // time, so MSVC may have "optimized" out important + // checking. Using that switch, we may be in a better + // place in terms of memory corruption.) But sometimes it + // can't be caught here at all, which is confusing but not + // terribly surprising; so again, the G_NOEXCEPT_WIN32 + // plus "/EHr". + // + // Hopefully the basic C stdlib is still functional enough + // for us to at least print an error. + // + // It gets more complicated than that, though, on some + // platforms, specifically at least Linux/gcc/libstdc++. They use + // an exception to unwind the stack when a background + // thread exits. (See comments about noexcept.) So this + // may not actually represent anything untoward. On those + // platforms we allow throws of this to propagate, or + // attempt to anyway. +# if defined(WIN32) || defined(_WIN32) + Py_FatalError( + "greenlet: Unhandled C++ exception from a greenlet run function. " + "Because memory is likely corrupted, terminating process."); + std::abort(); +#else + throw; +#endif + } + } + // These lines may run arbitrary code + args.CLEAR(); + Py_CLEAR(run); + + if (!result + && mod_globs->PyExc_GreenletExit.PyExceptionMatches() + && (this->args())) { + // This can happen, for example, if our only reference + // goes away after we switch back to the parent. + // See test_dealloc_switch_args_not_lost + PyErrPieces clear_error; + result <<= this->args(); + result = single_result(result); + } + this->release_args(); + this->python_state.did_finish(PyThreadState_GET()); + + result = g_handle_exit(result); + assert(this->thread_state()->borrow_current() == this->_self); + + /* jump back to parent */ + this->stack_state.set_inactive(); /* dead */ + + + // TODO: Can we decref some things here? Release our main greenlet + // and maybe parent? + for (Greenlet* parent = this->_parent; + parent; + parent = parent->parent()) { + // We need to somewhere consume a reference to + // the result; in most cases we'll never have control + // back in this stack frame again. Calling + // green_switch actually adds another reference! + // This would probably be clearer with a specific API + // to hand results to the parent. + parent->args() <<= result; + assert(!result); + // The parent greenlet now owns the result; in the + // typical case we'll never get back here to assign to + // result and thus release the reference. + try { + result = parent->g_switch(); + } + catch (const PyErrOccurred&) { + // Ignore, keep passing the error on up. + } + + /* Return here means switch to parent failed, + * in which case we throw *current* exception + * to the next parent in chain. + */ + assert(!result); + } + /* We ran out of parents, cannot continue */ + PyErr_WriteUnraisable(this->self().borrow_o()); + Py_FatalError("greenlet: ran out of parent greenlets while propagating exception; " + "cannot continue"); + std::abort(); +} + +void +UserGreenlet::run(const BorrowedObject nrun) +{ + if (this->started()) { + throw AttributeError( + "run cannot be set " + "after the start of the greenlet"); + } + this->_run_callable = nrun; +} + +const OwnedGreenlet +UserGreenlet::parent() const +{ + return this->_parent; +} + +void +UserGreenlet::parent(const BorrowedObject raw_new_parent) +{ + if (!raw_new_parent) { + throw AttributeError("can't delete attribute"); + } + + BorrowedMainGreenlet main_greenlet_of_new_parent; + BorrowedGreenlet new_parent(raw_new_parent.borrow()); // could + // throw + // TypeError! 
+ for (BorrowedGreenlet p = new_parent; p; p = p->parent()) { + if (p == this->self()) { + throw ValueError("cyclic parent chain"); + } + main_greenlet_of_new_parent = p->main_greenlet(); + } + + if (!main_greenlet_of_new_parent) { + throw ValueError("parent must not be garbage collected"); + } + + if (this->started() + && this->_main_greenlet != main_greenlet_of_new_parent) { + throw ValueError("parent cannot be on a different thread"); + } + + this->_parent = new_parent; +} + +void +UserGreenlet::murder_in_place() +{ + this->_main_greenlet.CLEAR(); + Greenlet::murder_in_place(); +} + +bool +UserGreenlet::belongs_to_thread(const ThreadState* thread_state) const +{ + return Greenlet::belongs_to_thread(thread_state) && this->_main_greenlet == thread_state->borrow_main_greenlet(); +} + + +int +UserGreenlet::tp_traverse(visitproc visit, void* arg) +{ + Py_VISIT(this->_parent.borrow_o()); + Py_VISIT(this->_main_greenlet.borrow_o()); + Py_VISIT(this->_run_callable.borrow_o()); + + return Greenlet::tp_traverse(visit, arg); +} + +int +UserGreenlet::tp_clear() +{ + Greenlet::tp_clear(); + this->_parent.CLEAR(); + this->_main_greenlet.CLEAR(); + this->_run_callable.CLEAR(); + return 0; +} + +UserGreenlet::ParentIsCurrentGuard::ParentIsCurrentGuard(UserGreenlet* p, + const ThreadState& thread_state) + : oldparent(p->_parent), + greenlet(p) +{ + p->_parent = thread_state.get_current(); +} + +UserGreenlet::ParentIsCurrentGuard::~ParentIsCurrentGuard() +{ + this->greenlet->_parent = oldparent; + oldparent.CLEAR(); +} + +}; //namespace greenlet +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/__init__.py new file mode 100644 index 000000000..91f075a2d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/__init__.py @@ -0,0 +1,71 @@ +# -*- coding: utf-8 -*- +""" +The root of the greenlet package. +""" +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +__all__ = [ + '__version__', + '_C_API', + + 'GreenletExit', + 'error', + + 'getcurrent', + 'greenlet', + + 'gettrace', + 'settrace', +] + +# pylint:disable=no-name-in-module + +### +# Metadata +### +__version__ = '3.2.3' +from ._greenlet import _C_API # pylint:disable=no-name-in-module + +### +# Exceptions +### +from ._greenlet import GreenletExit +from ._greenlet import error + +### +# greenlets +### +from ._greenlet import getcurrent +from ._greenlet import greenlet + +### +# tracing +### +try: + from ._greenlet import gettrace + from ._greenlet import settrace +except ImportError: + # Tracing wasn't supported. + # XXX: The option to disable it was removed in 1.0, + # so this branch should be dead code. + pass + +### +# Constants +# These constants aren't documented and aren't recommended. +# In 1.0, USE_GC and USE_TRACING are always true, and USE_CONTEXT_VARS +# is the same as ``sys.version_info[:2] >= 3.7`` +### +from ._greenlet import GREENLET_USE_CONTEXT_VARS # pylint:disable=unused-import +from ._greenlet import GREENLET_USE_GC # pylint:disable=unused-import +from ._greenlet import GREENLET_USE_TRACING # pylint:disable=unused-import + +# Controlling the use of the gc module. Provisional API for this greenlet +# implementation in 2.0. 
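For context, a hypothetical usage sketch of the provisional cleanup API whose names are imported just below; the exact calling convention is not documented here, so treat this as illustrative only:

```python
import greenlet

# Provisional API (per the comment above): opt out of the optional,
# potentially O(n) cleanup pass that consults the gc module...
greenlet.enable_optional_cleanup(False)

# ...and report how much CPU time that optional cleanup has consumed so far.
clocks = greenlet.get_clocks_used_doing_optional_cleanup()
print(clocks / greenlet.CLOCKS_PER_SEC, "seconds spent in optional cleanup")
```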
+from ._greenlet import CLOCKS_PER_SEC # pylint:disable=unused-import
+from ._greenlet import enable_optional_cleanup # pylint:disable=unused-import
+from ._greenlet import get_clocks_used_doing_optional_cleanup # pylint:disable=unused-import
+
+# Other APIs in the _greenlet module are for test support.
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/_greenlet.cpython-310-x86_64-linux-gnu.so b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/_greenlet.cpython-310-x86_64-linux-gnu.so
new file mode 100755
index 000000000..15d65ad73
Binary files /dev/null and b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/_greenlet.cpython-310-x86_64-linux-gnu.so differ
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet.cpp
new file mode 100644
index 000000000..e8d92a00b
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet.cpp
@@ -0,0 +1,320 @@
+/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */
+/* Format with:
+ *  clang-format -i --style=file src/greenlet/greenlet.c
+ *
+ *
+ * Fix missing braces with:
+ * clang-tidy src/greenlet/greenlet.c -fix -checks="readability-braces-around-statements"
+*/
+#include
+#include
+#include
+#include
+
+
+#define PY_SSIZE_T_CLEAN
+#include <Python.h>
+#include "structmember.h" // PyMemberDef
+
+#include "greenlet_internal.hpp"
+// Code after this point can assume access to things declared in stdint.h,
+// including the fixed-width types. This goes for the platform-specific switch functions
+// as well.
+#include "greenlet_refs.hpp"
+#include "greenlet_slp_switch.hpp"
+
+#include "greenlet_thread_support.hpp"
+#include "TGreenlet.hpp"
+
+#include "TGreenletGlobals.cpp"
+
+#include "TGreenlet.cpp"
+#include "TMainGreenlet.cpp"
+#include "TUserGreenlet.cpp"
+#include "TBrokenGreenlet.cpp"
+#include "TExceptionState.cpp"
+#include "TPythonState.cpp"
+#include "TStackState.cpp"
+
+#include "TThreadState.hpp"
+#include "TThreadStateCreator.hpp"
+#include "TThreadStateDestroy.cpp"
+
+#include "PyGreenlet.cpp"
+#include "PyGreenletUnswitchable.cpp"
+#include "CObjects.cpp"
+
+using greenlet::LockGuard;
+using greenlet::LockInitError;
+using greenlet::PyErrOccurred;
+using greenlet::Require;
+
+using greenlet::g_handle_exit;
+using greenlet::single_result;
+
+using greenlet::Greenlet;
+using greenlet::UserGreenlet;
+using greenlet::MainGreenlet;
+using greenlet::BrokenGreenlet;
+using greenlet::ThreadState;
+using greenlet::PythonState;
+
+
+
+// ******* Implementation of things from included files
+template <typename T>
+greenlet::refs::_BorrowedGreenlet<T>& greenlet::refs::_BorrowedGreenlet<T>::operator=(const greenlet::refs::BorrowedObject& other)
+{
+    this->_set_raw_pointer(static_cast<PyObject*>(other));
+    return *this;
+}
+
+template <typename T>
+inline greenlet::refs::_BorrowedGreenlet<T>::operator Greenlet*() const noexcept
+{
+    if (!this->p) {
+        return nullptr;
+    }
+    return reinterpret_cast<PyGreenlet*>(this->p)->pimpl;
+}
+
+template <typename T>
+greenlet::refs::_BorrowedGreenlet<T>::_BorrowedGreenlet(const BorrowedObject& p)
+    : BorrowedReference(nullptr)
+{
+
+    this->_set_raw_pointer(p.borrow());
+}
+
+template <typename T>
+inline greenlet::refs::_OwnedGreenlet<T>::operator Greenlet*() const noexcept
+{
+    if (!this->p) {
+        return nullptr;
+    }
+    return reinterpret_cast<PyGreenlet*>(this->p)->pimpl;
+}
+
+
+
+#ifdef __clang__
+#    pragma clang diagnostic push
+#    pragma clang diagnostic ignored "-Wmissing-field-initializers"
+#    pragma clang diagnostic ignored "-Wwritable-strings"
+#elif
defined(__GNUC__) +# pragma GCC diagnostic push +// warning: ISO C++ forbids converting a string constant to ‘char*’ +// (The python APIs aren't const correct and accept writable char*) +# pragma GCC diagnostic ignored "-Wwrite-strings" +#endif + + +/*********************************************************** + +A PyGreenlet is a range of C stack addresses that must be +saved and restored in such a way that the full range of the +stack contains valid data when we switch to it. + +Stack layout for a greenlet: + + | ^^^ | + | older data | + | | + stack_stop . |_______________| + . | | + . | greenlet data | + . | in stack | + . * |_______________| . . _____________ stack_copy + stack_saved + . | | | | + . | data | |greenlet data| + . | unrelated | | saved | + . | to | | in heap | + stack_start . | this | . . |_____________| stack_copy + | greenlet | + | | + | newer data | + | vvv | + + +Note that a greenlet's stack data is typically partly at its correct +place in the stack, and partly saved away in the heap, but always in +the above configuration: two blocks, the more recent one in the heap +and the older one still in the stack (either block may be empty). + +Greenlets are chained: each points to the previous greenlet, which is +the one that owns the data currently in the C stack above my +stack_stop. The currently running greenlet is the first element of +this chain. The main (initial) greenlet is the last one. Greenlets +whose stack is entirely in the heap can be skipped from the chain. + +The chain is not related to execution order, but only to the order +in which bits of C stack happen to belong to greenlets at a particular +point in time. + +The main greenlet doesn't have a stack_stop: it is responsible for the +complete rest of the C stack, and we don't know where it begins. We +use (char*) -1, the largest possible address. + +States: + stack_stop == NULL && stack_start == NULL: did not start yet + stack_stop != NULL && stack_start == NULL: already finished + stack_stop != NULL && stack_start != NULL: active + +The running greenlet's stack_start is undefined but not NULL. + + ***********************************************************/ + + + + +/***********************************************************/ + +/* Some functions must not be inlined: + * slp_restore_state, when inlined into slp_switch might cause + it to restore stack over its own local variables + * slp_save_state, when inlined would add its own local + variables to the saved stack, wasting space + * slp_switch, cannot be inlined for obvious reasons + * g_initialstub, when inlined would receive a pointer into its + own stack frame, leading to incomplete stack save/restore + +g_initialstub is a member function and declared virtual so that the +compiler always calls it through a vtable. + +slp_save_state and slp_restore_state are also member functions. They +are called from trampoline functions that themselves are declared as +not eligible for inlining. 
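greenlet's own GREENLET_NOINLINE macro wraps the function name, as seen just below. As a generic illustration of the same idea (an invented macro, not the project's definition), a "never inline this" attribute is typically spelled:

```cpp
// Illustrative only: common spellings of a "never inline" attribute.
#if defined(_MSC_VER)
#    define MY_NOINLINE __declspec(noinline)
#else
#    define MY_NOINLINE __attribute__((noinline))
#endif

// If this were inlined into switching code, the compiler could cache
// stack addresses across the switch, so it is forced out of line.
MY_NOINLINE static int save_state_trampoline(char* stackref)
{
    (void)stackref;
    return 0;
}
```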
+*/ + +extern "C" { +static int GREENLET_NOINLINE(slp_save_state_trampoline)(char* stackref) +{ + return switching_thread_state->slp_save_state(stackref); +} +static void GREENLET_NOINLINE(slp_restore_state_trampoline)() +{ + switching_thread_state->slp_restore_state(); +} +} + + +/***********************************************************/ + + +#include "PyModule.cpp" + + + +static PyObject* +greenlet_internal_mod_init() noexcept +{ + static void* _PyGreenlet_API[PyGreenlet_API_pointers]; + + try { + CreatedModule m(greenlet_module_def); + + Require(PyType_Ready(&PyGreenlet_Type)); + Require(PyType_Ready(&PyGreenletUnswitchable_Type)); + + mod_globs = new greenlet::GreenletGlobals; + ThreadState::init(); + + m.PyAddObject("greenlet", PyGreenlet_Type); + m.PyAddObject("UnswitchableGreenlet", PyGreenletUnswitchable_Type); + m.PyAddObject("error", mod_globs->PyExc_GreenletError); + m.PyAddObject("GreenletExit", mod_globs->PyExc_GreenletExit); + + m.PyAddObject("GREENLET_USE_GC", 1); + m.PyAddObject("GREENLET_USE_TRACING", 1); + m.PyAddObject("GREENLET_USE_CONTEXT_VARS", 1L); + m.PyAddObject("GREENLET_USE_STANDARD_THREADING", 1L); + + OwnedObject clocks_per_sec = OwnedObject::consuming(PyLong_FromSsize_t(CLOCKS_PER_SEC)); + m.PyAddObject("CLOCKS_PER_SEC", clocks_per_sec); + + /* also publish module-level data as attributes of the greentype. */ + // XXX: This is weird, and enables a strange pattern of + // confusing the class greenlet with the module greenlet; with + // the exception of (possibly) ``getcurrent()``, this + // shouldn't be encouraged so don't add new items here. + for (const char* const* p = copy_on_greentype; *p; p++) { + OwnedObject o = m.PyRequireAttr(*p); + PyDict_SetItemString(PyGreenlet_Type.tp_dict, *p, o.borrow()); + } + + /* + * Expose C API + */ + + /* types */ + _PyGreenlet_API[PyGreenlet_Type_NUM] = (void*)&PyGreenlet_Type; + + /* exceptions */ + _PyGreenlet_API[PyExc_GreenletError_NUM] = (void*)mod_globs->PyExc_GreenletError; + _PyGreenlet_API[PyExc_GreenletExit_NUM] = (void*)mod_globs->PyExc_GreenletExit; + + /* methods */ + _PyGreenlet_API[PyGreenlet_New_NUM] = (void*)PyGreenlet_New; + _PyGreenlet_API[PyGreenlet_GetCurrent_NUM] = (void*)PyGreenlet_GetCurrent; + _PyGreenlet_API[PyGreenlet_Throw_NUM] = (void*)PyGreenlet_Throw; + _PyGreenlet_API[PyGreenlet_Switch_NUM] = (void*)PyGreenlet_Switch; + _PyGreenlet_API[PyGreenlet_SetParent_NUM] = (void*)PyGreenlet_SetParent; + + /* Previously macros, but now need to be functions externally. */ + _PyGreenlet_API[PyGreenlet_MAIN_NUM] = (void*)Extern_PyGreenlet_MAIN; + _PyGreenlet_API[PyGreenlet_STARTED_NUM] = (void*)Extern_PyGreenlet_STARTED; + _PyGreenlet_API[PyGreenlet_ACTIVE_NUM] = (void*)Extern_PyGreenlet_ACTIVE; + _PyGreenlet_API[PyGreenlet_GET_PARENT_NUM] = (void*)Extern_PyGreenlet_GET_PARENT; + + /* XXX: Note that our module name is ``greenlet._greenlet``, but for + backwards compatibility with existing C code, we need the _C_API to + be directly in greenlet. 
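The module init below publishes a table of function pointers through a capsule. A minimal, hypothetical version of that export pattern (invented module and names, but real CPython calls):

```cpp
// Capsule-export sketch: pack function pointers into a static array
// and publish it as an attribute consumers can import.
#include <Python.h>

static void* my_api[2];

static int publish_api(PyObject* module)
{
    my_api[0] = (void*)PyLong_FromLong;  // stand-in for real exported functions
    my_api[1] = NULL;
    PyObject* capsule = PyCapsule_New((void*)my_api, "mymod._C_API", NULL);
    if (!capsule) {
        return -1;
    }
    // PyModule_AddObject steals the capsule reference only on success.
    if (PyModule_AddObject(module, "_C_API", capsule) < 0) {
        Py_DECREF(capsule);
        return -1;
    }
    return 0;
}
```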
+ */ + const NewReference c_api_object(Require( + PyCapsule_New( + (void*)_PyGreenlet_API, + "greenlet._C_API", + NULL))); + m.PyAddObject("_C_API", c_api_object); + assert(c_api_object.REFCNT() == 2); + + // cerr << "Sizes:" + // << "\n\tGreenlet : " << sizeof(Greenlet) + // << "\n\tUserGreenlet : " << sizeof(UserGreenlet) + // << "\n\tMainGreenlet : " << sizeof(MainGreenlet) + // << "\n\tExceptionState : " << sizeof(greenlet::ExceptionState) + // << "\n\tPythonState : " << sizeof(greenlet::PythonState) + // << "\n\tStackState : " << sizeof(greenlet::StackState) + // << "\n\tSwitchingArgs : " << sizeof(greenlet::SwitchingArgs) + // << "\n\tOwnedObject : " << sizeof(greenlet::refs::OwnedObject) + // << "\n\tBorrowedObject : " << sizeof(greenlet::refs::BorrowedObject) + // << "\n\tPyGreenlet : " << sizeof(PyGreenlet) + // << endl; + + return m.borrow(); // But really it's the main reference. + } + catch (const LockInitError& e) { + PyErr_SetString(PyExc_MemoryError, e.what()); + return NULL; + } + catch (const PyErrOccurred&) { + return NULL; + } + +} + +extern "C" { + +PyMODINIT_FUNC +PyInit__greenlet(void) +{ + return greenlet_internal_mod_init(); +} + +}; // extern C + +#ifdef __clang__ +# pragma clang diagnostic pop +#elif defined(__GNUC__) +# pragma GCC diagnostic pop +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet.h new file mode 100644 index 000000000..d02a16e43 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet.h @@ -0,0 +1,164 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ + +/* Greenlet object interface */ + +#ifndef Py_GREENLETOBJECT_H +#define Py_GREENLETOBJECT_H + + +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/* This is deprecated and undocumented. It does not change. 
*/ +#define GREENLET_VERSION "1.0.0" + +#ifndef GREENLET_MODULE +#define implementation_ptr_t void* +#endif + +typedef struct _greenlet { + PyObject_HEAD + PyObject* weakreflist; + PyObject* dict; + implementation_ptr_t pimpl; +} PyGreenlet; + +#define PyGreenlet_Check(op) (op && PyObject_TypeCheck(op, &PyGreenlet_Type)) + + +/* C API functions */ + +/* Total number of symbols that are exported */ +#define PyGreenlet_API_pointers 12 + +#define PyGreenlet_Type_NUM 0 +#define PyExc_GreenletError_NUM 1 +#define PyExc_GreenletExit_NUM 2 + +#define PyGreenlet_New_NUM 3 +#define PyGreenlet_GetCurrent_NUM 4 +#define PyGreenlet_Throw_NUM 5 +#define PyGreenlet_Switch_NUM 6 +#define PyGreenlet_SetParent_NUM 7 + +#define PyGreenlet_MAIN_NUM 8 +#define PyGreenlet_STARTED_NUM 9 +#define PyGreenlet_ACTIVE_NUM 10 +#define PyGreenlet_GET_PARENT_NUM 11 + +#ifndef GREENLET_MODULE +/* This section is used by modules that uses the greenlet C API */ +static void** _PyGreenlet_API = NULL; + +# define PyGreenlet_Type \ + (*(PyTypeObject*)_PyGreenlet_API[PyGreenlet_Type_NUM]) + +# define PyExc_GreenletError \ + ((PyObject*)_PyGreenlet_API[PyExc_GreenletError_NUM]) + +# define PyExc_GreenletExit \ + ((PyObject*)_PyGreenlet_API[PyExc_GreenletExit_NUM]) + +/* + * PyGreenlet_New(PyObject *args) + * + * greenlet.greenlet(run, parent=None) + */ +# define PyGreenlet_New \ + (*(PyGreenlet * (*)(PyObject * run, PyGreenlet * parent)) \ + _PyGreenlet_API[PyGreenlet_New_NUM]) + +/* + * PyGreenlet_GetCurrent(void) + * + * greenlet.getcurrent() + */ +# define PyGreenlet_GetCurrent \ + (*(PyGreenlet * (*)(void)) _PyGreenlet_API[PyGreenlet_GetCurrent_NUM]) + +/* + * PyGreenlet_Throw( + * PyGreenlet *greenlet, + * PyObject *typ, + * PyObject *val, + * PyObject *tb) + * + * g.throw(...) + */ +# define PyGreenlet_Throw \ + (*(PyObject * (*)(PyGreenlet * self, \ + PyObject * typ, \ + PyObject * val, \ + PyObject * tb)) \ + _PyGreenlet_API[PyGreenlet_Throw_NUM]) + +/* + * PyGreenlet_Switch(PyGreenlet *greenlet, PyObject *args) + * + * g.switch(*args, **kwargs) + */ +# define PyGreenlet_Switch \ + (*(PyObject * \ + (*)(PyGreenlet * greenlet, PyObject * args, PyObject * kwargs)) \ + _PyGreenlet_API[PyGreenlet_Switch_NUM]) + +/* + * PyGreenlet_SetParent(PyObject *greenlet, PyObject *new_parent) + * + * g.parent = new_parent + */ +# define PyGreenlet_SetParent \ + (*(int (*)(PyGreenlet * greenlet, PyGreenlet * nparent)) \ + _PyGreenlet_API[PyGreenlet_SetParent_NUM]) + +/* + * PyGreenlet_GetParent(PyObject* greenlet) + * + * return greenlet.parent; + * + * This could return NULL even if there is no exception active. + * If it does not return NULL, you are responsible for decrementing the + * reference count. + */ +# define PyGreenlet_GetParent \ + (*(PyGreenlet* (*)(PyGreenlet*)) \ + _PyGreenlet_API[PyGreenlet_GET_PARENT_NUM]) + +/* + * deprecated, undocumented alias. + */ +# define PyGreenlet_GET_PARENT PyGreenlet_GetParent + +# define PyGreenlet_MAIN \ + (*(int (*)(PyGreenlet*)) \ + _PyGreenlet_API[PyGreenlet_MAIN_NUM]) + +# define PyGreenlet_STARTED \ + (*(int (*)(PyGreenlet*)) \ + _PyGreenlet_API[PyGreenlet_STARTED_NUM]) + +# define PyGreenlet_ACTIVE \ + (*(int (*)(PyGreenlet*)) \ + _PyGreenlet_API[PyGreenlet_ACTIVE_NUM]) + + + + +/* Macro that imports greenlet and initializes C API */ +/* NOTE: This has actually moved to ``greenlet._greenlet._C_API``, but we + keep the older definition to be sure older code that might have a copy of + the header still works. 
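Sketch of typical client usage from an extension module's init
function (hypothetical code; error handling abbreviated, and
``target`` stands for some started greenlet the caller already
holds). PyGreenlet_Import(), defined just below, populates
_PyGreenlet_API, after which the function-style macros above
dispatch through the capsule table:

    PyGreenlet_Import();
    if (_PyGreenlet_API == NULL) {
        return NULL;
    }
    PyGreenlet* current = PyGreenlet_GetCurrent();
    PyObject* result = PyGreenlet_Switch(target, NULL, NULL);
    Py_XDECREF(result);
    Py_DECREF(current);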
*/ +# define PyGreenlet_Import() \ + { \ + _PyGreenlet_API = (void**)PyCapsule_Import("greenlet._C_API", 0); \ + } + +#endif /* GREENLET_MODULE */ + +#ifdef __cplusplus +} +#endif +#endif /* !Py_GREENLETOBJECT_H */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_allocator.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_allocator.hpp new file mode 100644 index 000000000..b452f5444 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_allocator.hpp @@ -0,0 +1,63 @@ +#ifndef GREENLET_ALLOCATOR_HPP +#define GREENLET_ALLOCATOR_HPP + +#define PY_SSIZE_T_CLEAN +#include +#include +#include "greenlet_compiler_compat.hpp" + + +namespace greenlet +{ + // This allocator is stateless; all instances are identical. + // It can *ONLY* be used when we're sure we're holding the GIL + // (Python's allocators require the GIL). + template + struct PythonAllocator : public std::allocator { + + PythonAllocator(const PythonAllocator& UNUSED(other)) + : std::allocator() + { + } + + PythonAllocator(const std::allocator other) + : std::allocator(other) + {} + + template + PythonAllocator(const std::allocator& other) + : std::allocator(other) + { + } + + PythonAllocator() : std::allocator() {} + + T* allocate(size_t number_objects, const void* UNUSED(hint)=0) + { + void* p; + if (number_objects == 1) + p = PyObject_Malloc(sizeof(T)); + else + p = PyMem_Malloc(sizeof(T) * number_objects); + return static_cast(p); + } + + void deallocate(T* t, size_t n) + { + void* p = t; + if (n == 1) { + PyObject_Free(p); + } + else + PyMem_Free(p); + } + // This member is deprecated in C++17 and removed in C++20 + template< class U > + struct rebind { + typedef PythonAllocator other; + }; + + }; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_compiler_compat.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_compiler_compat.hpp new file mode 100644 index 000000000..af24bd83e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_compiler_compat.hpp @@ -0,0 +1,98 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ +#ifndef GREENLET_COMPILER_COMPAT_HPP +#define GREENLET_COMPILER_COMPAT_HPP + +/** + * Definitions to aid with compatibility with different compilers. + * + * .. caution:: Use extreme care with noexcept. + * Some compilers and runtimes, specifically gcc/libgcc/libstdc++ on + * Linux, implement stack unwinding by throwing an uncatchable + * exception, one that specifically does not appear to be an active + * exception to the rest of the runtime. If this happens while we're in a noexcept function, + * we have violated our dynamic exception contract, and so the runtime + * will call std::terminate(), which kills the process with the + * unhelpful message "terminate called without an active exception". + * + * This has happened in this scenario: A background thread is running + * a greenlet that has made a native call and released the GIL. + * Meanwhile, the main thread finishes and starts shutting down the + * interpreter. When the background thread is scheduled again and + * attempts to obtain the GIL, it notices that the interpreter is + * exiting and calls ``pthread_exit()``. This in turn starts to unwind + * the stack by throwing that exception. But we had the ``PyCall`` + * functions annotated as noexcept, so the runtime terminated us. 
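 * Reduced to a sketch (hypothetical code, not from this module): if
 * the interpreter happens to be finalizing, the call below may never
 * return; instead the thread unwinds via pthread_exit(), and because
 * the frame is noexcept the unwinder invokes std::terminate().
 *
 *     void resume_python(PyThreadState* tstate) noexcept
 *     {
 *         PyEval_RestoreThread(tstate);
 *     }
 *
 * The backtrace of one such crash: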
+ * + * #2 0x00007fab26fec2b7 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6 + * #3 0x00007fab26febb3c in __gxx_personality_v0 () from /lib/x86_64-linux-gnu/libstdc++.so.6 + * #4 0x00007fab26f34de6 in ?? () from /lib/x86_64-linux-gnu/libgcc_s.so.1 + * #6 0x00007fab276a34c6 in __GI___pthread_unwind at ./nptl/unwind.c:130 + * #7 0x00007fab2769bd3a in __do_cancel () at ../sysdeps/nptl/pthreadP.h:280 + * #8 __GI___pthread_exit (value=value@entry=0x0) at ./nptl/pthread_exit.c:36 + * #9 0x000000000052e567 in PyThread_exit_thread () at ../Python/thread_pthread.h:370 + * #10 0x00000000004d60b5 in take_gil at ../Python/ceval_gil.h:224 + * #11 0x00000000004d65f9 in PyEval_RestoreThread at ../Python/ceval.c:467 + * #12 0x000000000060cce3 in setipaddr at ../Modules/socketmodule.c:1203 + * #13 0x00000000006101cd in socket_gethostbyname + */ + +#include + +# define G_NO_COPIES_OF_CLS(Cls) private: \ + Cls(const Cls& other) = delete; \ + Cls& operator=(const Cls& other) = delete + +# define G_NO_ASSIGNMENT_OF_CLS(Cls) private: \ + Cls& operator=(const Cls& other) = delete + +# define G_NO_COPY_CONSTRUCTOR_OF_CLS(Cls) private: \ + Cls(const Cls& other) = delete; + + +// CAUTION: MSVC is stupidly picky: +// +// "The compiler ignores, without warning, any __declspec keywords +// placed after * or & and in front of the variable identifier in a +// declaration." +// (https://docs.microsoft.com/en-us/cpp/cpp/declspec?view=msvc-160) +// +// So pointer return types must be handled differently (because of the +// trailing *), or you get inscrutable compiler warnings like "error +// C2059: syntax error: ''" +// +// In C++ 11, there is a standard syntax for attributes, and +// GCC defines an attribute to use with this: [[gnu:noinline]]. +// In the future, this is expected to become standard. + +#if defined(__GNUC__) || defined(__clang__) +/* We used to check for GCC 4+ or 3.4+, but those compilers are + laughably out of date. Just assume they support it. */ +# define GREENLET_NOINLINE(name) __attribute__((noinline)) name +# define GREENLET_NOINLINE_P(rtype, name) rtype __attribute__((noinline)) name +# define UNUSED(x) UNUSED_ ## x __attribute__((__unused__)) +#elif defined(_MSC_VER) +/* We used to check for && (_MSC_VER >= 1300) but that's also out of date. */ +# define GREENLET_NOINLINE(name) __declspec(noinline) name +# define GREENLET_NOINLINE_P(rtype, name) __declspec(noinline) rtype name +# define UNUSED(x) UNUSED_ ## x +#endif + +#if defined(_MSC_VER) +# define G_NOEXCEPT_WIN32 noexcept +#else +# define G_NOEXCEPT_WIN32 +#endif + +#if defined(__GNUC__) && defined(__POWERPC__) && defined(__APPLE__) +// 32-bit PPC/MacOSX. Only known to be tested on unreleased versions +// of macOS 10.6 using a macports build gcc 14. It appears that +// running C++ destructors of thread-local variables is broken. 
+ +// See https://github.com/python-greenlet/greenlet/pull/419 +# define GREENLET_BROKEN_THREAD_LOCAL_CLEANUP_JUST_LEAK 1 +#else +# define GREENLET_BROKEN_THREAD_LOCAL_CLEANUP_JUST_LEAK 0 +#endif + + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_cpython_compat.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_cpython_compat.hpp new file mode 100644 index 000000000..979d6f94a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_cpython_compat.hpp @@ -0,0 +1,148 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ +#ifndef GREENLET_CPYTHON_COMPAT_H +#define GREENLET_CPYTHON_COMPAT_H + +/** + * Helpers for compatibility with multiple versions of CPython. + */ + +#define PY_SSIZE_T_CLEAN +#include "Python.h" + + +#if PY_VERSION_HEX >= 0x30A00B1 +# define GREENLET_PY310 1 +#else +# define GREENLET_PY310 0 +#endif + +/* +Python 3.10 beta 1 changed tstate->use_tracing to a nested cframe member. +See https://github.com/python/cpython/pull/25276 +We have to save and restore this as well. + +Python 3.13 removed PyThreadState.cframe (GH-108035). +*/ +#if GREENLET_PY310 && PY_VERSION_HEX < 0x30D0000 +# define GREENLET_USE_CFRAME 1 +#else +# define GREENLET_USE_CFRAME 0 +#endif + + +#if PY_VERSION_HEX >= 0x30B00A4 +/* +Greenlet won't compile on anything older than Python 3.11 alpha 4 (see +https://bugs.python.org/issue46090). Summary of breaking internal changes: +- Python 3.11 alpha 1 changed how frame objects are represented internally. + - https://github.com/python/cpython/pull/30122 +- Python 3.11 alpha 3 changed how recursion limits are stored. + - https://github.com/python/cpython/pull/29524 +- Python 3.11 alpha 4 changed how exception state is stored. It also includes a + change to help greenlet save and restore the interpreter frame "data stack". + - https://github.com/python/cpython/pull/30122 + - https://github.com/python/cpython/pull/30234 +*/ +# define GREENLET_PY311 1 +#else +# define GREENLET_PY311 0 +#endif + + +#if PY_VERSION_HEX >= 0x30C0000 +# define GREENLET_PY312 1 +#else +# define GREENLET_PY312 0 +#endif + +#if PY_VERSION_HEX >= 0x30D0000 +# define GREENLET_PY313 1 +#else +# define GREENLET_PY313 0 +#endif + +#if PY_VERSION_HEX >= 0x30E0000 +# define GREENLET_PY314 1 +#else +# define GREENLET_PY314 0 +#endif + +#ifndef Py_SET_REFCNT +/* Py_REFCNT and Py_SIZE macros are converted to functions +https://bugs.python.org/issue39573 */ +# define Py_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) +#endif + +#ifndef _Py_DEC_REFTOTAL +/* _Py_DEC_REFTOTAL macro has been removed from Python 3.9 by: + https://github.com/python/cpython/commit/49932fec62c616ec88da52642339d83ae719e924 + + The symbol we use to replace it was removed by at least 3.12. +*/ +# ifdef Py_REF_DEBUG +# if GREENLET_PY312 +# define _Py_DEC_REFTOTAL +# else +# define _Py_DEC_REFTOTAL _Py_RefTotal-- +# endif +# else +# define _Py_DEC_REFTOTAL +# endif +#endif +// Define these flags like Cython does if we're on an old version. 
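// (Flags that a given CPython no longer defines become zero here, so
// they drop out of the G_TPFLAGS_DEFAULT computation below as
// harmless no-ops.)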
+#ifndef Py_TPFLAGS_CHECKTYPES + #define Py_TPFLAGS_CHECKTYPES 0 +#endif +#ifndef Py_TPFLAGS_HAVE_INDEX + #define Py_TPFLAGS_HAVE_INDEX 0 +#endif +#ifndef Py_TPFLAGS_HAVE_NEWBUFFER + #define Py_TPFLAGS_HAVE_NEWBUFFER 0 +#endif + +#ifndef Py_TPFLAGS_HAVE_VERSION_TAG + #define Py_TPFLAGS_HAVE_VERSION_TAG 0 +#endif + +#define G_TPFLAGS_DEFAULT Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_VERSION_TAG | Py_TPFLAGS_CHECKTYPES | Py_TPFLAGS_HAVE_NEWBUFFER | Py_TPFLAGS_HAVE_GC + + +#if PY_VERSION_HEX < 0x03090000 +// The official version only became available in 3.9 +# define PyObject_GC_IsTracked(o) _PyObject_GC_IS_TRACKED(o) +#endif + + +// bpo-43760 added PyThreadState_EnterTracing() to Python 3.11.0a2 +#if PY_VERSION_HEX < 0x030B00A2 && !defined(PYPY_VERSION) +static inline void PyThreadState_EnterTracing(PyThreadState *tstate) +{ + tstate->tracing++; +#if PY_VERSION_HEX >= 0x030A00A1 + tstate->cframe->use_tracing = 0; +#else + tstate->use_tracing = 0; +#endif +} +#endif + +// bpo-43760 added PyThreadState_LeaveTracing() to Python 3.11.0a2 +#if PY_VERSION_HEX < 0x030B00A2 && !defined(PYPY_VERSION) +static inline void PyThreadState_LeaveTracing(PyThreadState *tstate) +{ + tstate->tracing--; + int use_tracing = (tstate->c_tracefunc != NULL + || tstate->c_profilefunc != NULL); +#if PY_VERSION_HEX >= 0x030A00A1 + tstate->cframe->use_tracing = use_tracing; +#else + tstate->use_tracing = use_tracing; +#endif +} +#endif + +#if !defined(Py_C_RECURSION_LIMIT) && defined(C_RECURSION_LIMIT) +# define Py_C_RECURSION_LIMIT C_RECURSION_LIMIT +#endif + +#endif /* GREENLET_CPYTHON_COMPAT_H */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_exceptions.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_exceptions.hpp new file mode 100644 index 000000000..617f07c2f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_exceptions.hpp @@ -0,0 +1,171 @@ +#ifndef GREENLET_EXCEPTIONS_HPP +#define GREENLET_EXCEPTIONS_HPP + +#define PY_SSIZE_T_CLEAN +#include +#include +#include + +#ifdef __clang__ +# pragma clang diagnostic push +# pragma clang diagnostic ignored "-Wunused-function" +#endif + +namespace greenlet { + + class PyErrOccurred : public std::runtime_error + { + public: + + // CAUTION: In debug builds, may run arbitrary Python code. + static const PyErrOccurred + from_current() + { + assert(PyErr_Occurred()); +#ifndef NDEBUG + // This is not exception safe, and + // not necessarily safe in general (what if it switches?) + // But we only do this in debug mode, where we are in + // tight control of what exceptions are getting raised and + // can prevent those issues. + + // You can't call PyObject_Str with a pending exception. + PyObject* typ; + PyObject* val; + PyObject* tb; + + PyErr_Fetch(&typ, &val, &tb); + PyObject* typs = PyObject_Str(typ); + PyObject* vals = PyObject_Str(val ? 
val : typ); + const char* typ_msg = PyUnicode_AsUTF8(typs); + const char* val_msg = PyUnicode_AsUTF8(vals); + PyErr_Restore(typ, val, tb); + + std::string msg(typ_msg); + msg += ": "; + msg += val_msg; + PyErrOccurred ex(msg); + Py_XDECREF(typs); + Py_XDECREF(vals); + + return ex; +#else + return PyErrOccurred(); +#endif + } + + PyErrOccurred() : std::runtime_error("") + { + assert(PyErr_Occurred()); + } + + PyErrOccurred(const std::string& msg) : std::runtime_error(msg) + { + assert(PyErr_Occurred()); + } + + PyErrOccurred(PyObject* exc_kind, const char* const msg) + : std::runtime_error(msg) + { + PyErr_SetString(exc_kind, msg); + } + + PyErrOccurred(PyObject* exc_kind, const std::string msg) + : std::runtime_error(msg) + { + // This copies the c_str, so we don't have any lifetime + // issues to worry about. + PyErr_SetString(exc_kind, msg.c_str()); + } + + PyErrOccurred(PyObject* exc_kind, + const std::string msg, //This is the format + //string; that's not + //usually safe! + + PyObject* borrowed_obj_one, PyObject* borrowed_obj_two) + : std::runtime_error(msg) + { + + //This is designed specifically for the + //``check_switch_allowed`` function. + + // PyObject_Str and PyObject_Repr are safe to call with + // NULL pointers; they return the string "" in that + // case. + // This function always returns null. + PyErr_Format(exc_kind, + msg.c_str(), + borrowed_obj_one, borrowed_obj_two); + } + }; + + class TypeError : public PyErrOccurred + { + public: + TypeError(const char* const what) + : PyErrOccurred(PyExc_TypeError, what) + { + } + TypeError(const std::string what) + : PyErrOccurred(PyExc_TypeError, what) + { + } + }; + + class ValueError : public PyErrOccurred + { + public: + ValueError(const char* const what) + : PyErrOccurred(PyExc_ValueError, what) + { + } + }; + + class AttributeError : public PyErrOccurred + { + public: + AttributeError(const char* const what) + : PyErrOccurred(PyExc_AttributeError, what) + { + } + }; + + /** + * Calls `Py_FatalError` when constructed, so you can't actually + * throw this. It just makes static analysis easier. + */ + class PyFatalError : public std::runtime_error + { + public: + PyFatalError(const char* const msg) + : std::runtime_error(msg) + { + Py_FatalError(msg); + } + }; + + static inline PyObject* + Require(PyObject* p, const std::string& msg="") + { + if (!p) { + throw PyErrOccurred(msg); + } + return p; + }; + + static inline void + Require(const int retval) + { + if (retval < 0) { + throw PyErrOccurred(); + } + }; + + +}; +#ifdef __clang__ +# pragma clang diagnostic pop +#endif + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_internal.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_internal.hpp new file mode 100644 index 000000000..f2b15d5fa --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_internal.hpp @@ -0,0 +1,107 @@ +/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ +#ifndef GREENLET_INTERNAL_H +#define GREENLET_INTERNAL_H +#ifdef __clang__ +# pragma clang diagnostic push +# pragma clang diagnostic ignored "-Wunused-function" +#endif + +/** + * Implementation helpers. + * + * C++ templates and inline functions should go here. 
+ */ +#define PY_SSIZE_T_CLEAN +#include "greenlet_compiler_compat.hpp" +#include "greenlet_cpython_compat.hpp" +#include "greenlet_exceptions.hpp" +#include "TGreenlet.hpp" +#include "greenlet_allocator.hpp" + +#include +#include + +#define GREENLET_MODULE +struct _greenlet; +typedef struct _greenlet PyGreenlet; +namespace greenlet { + + class ThreadState; + // We can't use the PythonAllocator for this, because we push to it + // from the thread state destructor, which doesn't have the GIL, + // and Python's allocators can only be called with the GIL. + typedef std::vector cleanup_queue_t; + +}; + + +#define implementation_ptr_t greenlet::Greenlet* + + +#include "greenlet.h" + +void +greenlet::refs::MainGreenletExactChecker(void *p) +{ + if (!p) { + return; + } + // We control the class of the main greenlet exactly. + if (Py_TYPE(p) != &PyGreenlet_Type) { + std::string err("MainGreenlet: Expected exactly a greenlet, not a "); + err += Py_TYPE(p)->tp_name; + throw greenlet::TypeError(err); + } + + // Greenlets from dead threads no longer respond to main() with a + // true value; so in that case we need to perform an additional + // check. + Greenlet* g = static_cast(p)->pimpl; + if (g->main()) { + return; + } + if (!dynamic_cast(g)) { + std::string err("MainGreenlet: Expected exactly a main greenlet, not a "); + err += Py_TYPE(p)->tp_name; + throw greenlet::TypeError(err); + } +} + + + +template +inline greenlet::Greenlet* greenlet::refs::_OwnedGreenlet::operator->() const noexcept +{ + return reinterpret_cast(this->p)->pimpl; +} + +template +inline greenlet::Greenlet* greenlet::refs::_BorrowedGreenlet::operator->() const noexcept +{ + return reinterpret_cast(this->p)->pimpl; +} + +#include +#include + + +extern PyTypeObject PyGreenlet_Type; + + + +/** + * Forward declarations needed in multiple files. + */ +static PyObject* green_switch(PyGreenlet* self, PyObject* args, PyObject* kwargs); + + +#ifdef __clang__ +# pragma clang diagnostic pop +#endif + + +#endif + +// Local Variables: +// flycheck-clang-include-path: ("../../include" "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10") +// End: diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_msvc_compat.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_msvc_compat.hpp new file mode 100644 index 000000000..c00245b05 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_msvc_compat.hpp @@ -0,0 +1,91 @@ +#ifndef GREENLET_MSVC_COMPAT_HPP +#define GREENLET_MSVC_COMPAT_HPP +/* + * Support for MSVC on Windows. + * + * Beginning with Python 3.14, some of the internal + * include files we need are not compatible with MSVC + * in C++ mode: + * + * internal\pycore_stackref.h(253): error C4576: a parenthesized type + * followed by an initializer list is a non-standard explicit type conversion syntax + * + * This file is included from ``internal/pycore_interpframe.h``, which + * we need for the ``_PyFrame_IsIncomplete`` API. + * + * Unfortunately, that API is a ``static inline`` function, as are a + * bunch of the functions it calls. The only solution seems to be to + * copy those definitions and the supporting inline functions here. + * + * Now, this makes us VERY fragile to changes in those functions. Because + * they're internal and static, the CPython devs might feel free to change + * them in even minor versions, meaning that we could runtime link and load, + * but still crash. We have that problem on all platforms though. 
It's just worse + * here because we have to keep copying the updated definitions. + */ +#include +#include "greenlet_cpython_compat.hpp" + +// This file is only included on 3.14+ + +extern "C" { + +// pycore_code.h ---------------- +#define _PyCode_CODE(CO) _Py_RVALUE((_Py_CODEUNIT *)(CO)->co_code_adaptive) +// End pycore_code.h ---------- + +// pycore_interpframe.h ---------- +#if !defined(Py_GIL_DISABLED) && defined(Py_STACKREF_DEBUG) + +#define Py_TAG_BITS 0 +#else +#define Py_TAG_BITS ((uintptr_t)1) +#define Py_TAG_DEFERRED (1) +#endif + + +static const _PyStackRef PyStackRef_NULL = { .bits = Py_TAG_DEFERRED}; +#define PyStackRef_IsNull(stackref) ((stackref).bits == PyStackRef_NULL.bits) + +static inline PyObject * +PyStackRef_AsPyObjectBorrow(_PyStackRef stackref) +{ + PyObject *cleared = ((PyObject *)((stackref).bits & (~Py_TAG_BITS))); + return cleared; +} + +static inline PyCodeObject *_PyFrame_GetCode(_PyInterpreterFrame *f) { + assert(!PyStackRef_IsNull(f->f_executable)); + PyObject *executable = PyStackRef_AsPyObjectBorrow(f->f_executable); + assert(PyCode_Check(executable)); + return (PyCodeObject *)executable; +} + + +static inline _Py_CODEUNIT * +_PyFrame_GetBytecode(_PyInterpreterFrame *f) +{ +#ifdef Py_GIL_DISABLED + PyCodeObject *co = _PyFrame_GetCode(f); + _PyCodeArray *tlbc = _PyCode_GetTLBCArray(co); + assert(f->tlbc_index >= 0 && f->tlbc_index < tlbc->size); + return (_Py_CODEUNIT *)tlbc->entries[f->tlbc_index]; +#else + return _PyCode_CODE(_PyFrame_GetCode(f)); +#endif +} + +static inline bool //_Py_NO_SANITIZE_THREAD +_PyFrame_IsIncomplete(_PyInterpreterFrame *frame) +{ + if (frame->owner >= FRAME_OWNED_BY_INTERPRETER) { + return true; + } + return frame->owner != FRAME_OWNED_BY_GENERATOR && + frame->instr_ptr < _PyFrame_GetBytecode(frame) + + _PyFrame_GetCode(frame)->_co_firsttraceable; +} +// pycore_interpframe.h ---------- + +} +#endif // GREENLET_MSVC_COMPAT_HPP diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_refs.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_refs.hpp new file mode 100644 index 000000000..b7e5e3f2a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_refs.hpp @@ -0,0 +1,1118 @@ +#ifndef GREENLET_REFS_HPP +#define GREENLET_REFS_HPP + +#define PY_SSIZE_T_CLEAN +#include + +#include + +//#include "greenlet_internal.hpp" +#include "greenlet_compiler_compat.hpp" +#include "greenlet_cpython_compat.hpp" +#include "greenlet_exceptions.hpp" + +struct _greenlet; +struct _PyMainGreenlet; + +typedef struct _greenlet PyGreenlet; +extern PyTypeObject PyGreenlet_Type; + + +#ifdef GREENLET_USE_STDIO +#include +using std::cerr; +using std::endl; +#endif + +namespace greenlet +{ + class Greenlet; + + namespace refs + { + // Type checkers throw a TypeError if the argument is not + // null, and isn't of the required Python type. + // (We can't use most of the defined type checkers + // like PyList_Check, etc, directly, because they are + // implemented as macros.) + typedef void (*TypeChecker)(void*); + + void + NoOpChecker(void*) + { + return; + } + + void + GreenletChecker(void *p) + { + if (!p) { + return; + } + + PyTypeObject* typ = Py_TYPE(p); + // fast, common path. 
(PyObject_TypeCheck is a macro or + // static inline function, and it also does a + // direct comparison of the type pointers, but its fast + // path only handles one type) + if (typ == &PyGreenlet_Type) { + return; + } + + if (!PyObject_TypeCheck(p, &PyGreenlet_Type)) { + std::string err("GreenletChecker: Expected any type of greenlet, not "); + err += Py_TYPE(p)->tp_name; + throw TypeError(err); + } + } + + void + MainGreenletExactChecker(void *p); + + template + class PyObjectPointer; + + template + class OwnedReference; + + + template + class BorrowedReference; + + typedef BorrowedReference BorrowedObject; + typedef OwnedReference OwnedObject; + + class ImmortalObject; + class ImmortalString; + + template + class _OwnedGreenlet; + + typedef _OwnedGreenlet OwnedGreenlet; + typedef _OwnedGreenlet OwnedMainGreenlet; + + template + class _BorrowedGreenlet; + + typedef _BorrowedGreenlet BorrowedGreenlet; + + void + ContextExactChecker(void *p) + { + if (!p) { + return; + } + if (!PyContext_CheckExact(p)) { + throw TypeError( + "greenlet context must be a contextvars.Context or None" + ); + } + } + + typedef OwnedReference OwnedContext; + } +} + +namespace greenlet { + + + namespace refs { + // A set of classes to make reference counting rules in python + // code explicit. + // + // Rules of use: + // (1) Functions returning a new reference that the caller of the + // function is expected to dispose of should return a + // ``OwnedObject`` object. This object automatically releases its + // reference when it goes out of scope. It works like a ``std::shared_ptr`` + // and can be copied or used as a function parameter (but don't do + // that). Note that constructing a ``OwnedObject`` from a + // PyObject* steals the reference. + // (2) Parameters to functions should be either a + // ``OwnedObject&``, or, more generally, a ``PyObjectPointer&``. + // If the function needs to create its own new reference, it can + // do so by copying to a local ``OwnedObject``. + // (3) Functions returning an existing pointer that is NOT + // incref'd, and which the caller MUST NOT decref, + // should return a ``BorrowedObject``. + + // XXX: The following two paragraphs do not hold for all platforms. + // Notably, 32-bit PPC Linux passes structs by reference, not by + // value, so this actually doesn't work. (Although that's the only + // platform that doesn't work on.) DO NOT ATTEMPT IT. The + // unfortunate consequence of that is that the slots which we + // *know* are already type safe will wind up calling the type + // checker function (when we had the slots accepting + // BorrowedGreenlet, this was bypassed), so this slows us down. + // TODO: Optimize this again. + + // For a class with a single pointer member, whose constructor + // does nothing but copy a pointer parameter into the member, and + // which can then be converted back to the pointer type, compilers + // generate code that's the same as just passing the pointer. + // That is, func(BorrowedObject x) called like ``PyObject* p = + // ...; f(p)`` has 0 overhead. Similarly, they "unpack" to the + // pointer type with 0 overhead. + // + // If there are no virtual functions, no complex inheritance (maybe?) and + // no destructor, these can be directly used as parameters in + // Python callbacks like tp_init: the layout is the same as a + // single pointer. Only subclasses with trivial constructors that + // do nothing but set the single pointer member are safe to use + // that way. 
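// A compact illustration of rules (1)-(3), with hypothetical names
// (OwnedObject::consuming and the pointer classes are defined below):
//
//     OwnedObject make_label() {               // rule (1): new reference
//         return OwnedObject::consuming(PyUnicode_FromString("label"));
//     }
//     void show(const PyObjectPointer<>& obj); // rule (2): any pointer wrapper
//     BorrowedObject current_label();          // rule (3): caller must not decref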
+ + + // This is the base class for things that can be done with a + // PyObject pointer. It assumes nothing about memory management. + // NOTE: Nothing is virtual, so subclasses shouldn't add new + // storage fields or try to override these methods. + template + class PyObjectPointer + { + public: + typedef T PyType; + protected: + T* p; + public: + PyObjectPointer(T* it=nullptr) : p(it) + { + TC(p); + } + + // We don't allow automatic casting to PyObject* at this + // level, because then we could be passed to Py_DECREF/INCREF, + // but we want nothing to do with memory management. If you + // know better, then you can use the get() method, like on a + // std::shared_ptr. Except we name it borrow() to clarify that + // if this is a reference-tracked object, the pointer you get + // back will go away when the object does. + // TODO: This should probably not exist here, but be moved + // down to relevant sub-types. + + T* borrow() const noexcept + { + return this->p; + } + + PyObject* borrow_o() const noexcept + { + return reinterpret_cast(this->p); + } + + T* operator->() const noexcept + { + return this->p; + } + + bool is_None() const noexcept + { + return this->p == Py_None; + } + + PyObject* acquire_or_None() const noexcept + { + PyObject* result = this->p ? reinterpret_cast(this->p) : Py_None; + Py_INCREF(result); + return result; + } + + explicit operator bool() const noexcept + { + return this->p != nullptr; + } + + bool operator!() const noexcept + { + return this->p == nullptr; + } + + Py_ssize_t REFCNT() const noexcept + { + return p ? Py_REFCNT(p) : -42; + } + + PyTypeObject* TYPE() const noexcept + { + return p ? Py_TYPE(p) : nullptr; + } + + inline OwnedObject PyStr() const noexcept; + inline const std::string as_str() const noexcept; + inline OwnedObject PyGetAttr(const ImmortalObject& name) const noexcept; + inline OwnedObject PyRequireAttr(const char* const name) const; + inline OwnedObject PyRequireAttr(const ImmortalString& name) const; + inline OwnedObject PyCall(const BorrowedObject& arg) const; + inline OwnedObject PyCall(PyGreenlet* arg) const ; + inline OwnedObject PyCall(PyObject* arg) const ; + // PyObject_Call(this, args, kwargs); + inline OwnedObject PyCall(const BorrowedObject args, + const BorrowedObject kwargs) const; + inline OwnedObject PyCall(const OwnedObject& args, + const OwnedObject& kwargs) const; + + protected: + void _set_raw_pointer(void* t) + { + TC(t); + p = reinterpret_cast(t); + } + void* _get_raw_pointer() const + { + return p; + } + }; + +#ifdef GREENLET_USE_STDIO + template + std::ostream& operator<<(std::ostream& os, const PyObjectPointer& s) + { + const std::type_info& t = typeid(s); + os << t.name() + << "(addr=" << s.borrow() + << ", refcnt=" << s.REFCNT() + << ", value=" << s.as_str() + << ")"; + + return os; + } +#endif + + template + inline bool operator==(const PyObjectPointer& lhs, const PyObject* const rhs) noexcept + { + return static_cast(lhs.borrow_o()) == static_cast(rhs); + } + + template + inline bool operator==(const PyObjectPointer& lhs, const PyObjectPointer& rhs) noexcept + { + return lhs.borrow_o() == rhs.borrow_o(); + } + + template + inline bool operator!=(const PyObjectPointer& lhs, + const PyObjectPointer& rhs) noexcept + { + return lhs.borrow_o() != rhs.borrow_o(); + } + + template + class OwnedReference : public PyObjectPointer + { + private: + friend class OwnedList; + + protected: + explicit OwnedReference(T* it) : PyObjectPointer(it) + { + } + + public: + + // Constructors + + static OwnedReference 
consuming(PyObject* p) + { + return OwnedReference(reinterpret_cast(p)); + } + + static OwnedReference owning(T* p) + { + OwnedReference result(p); + Py_XINCREF(result.p); + return result; + } + + OwnedReference() : PyObjectPointer(nullptr) + {} + + explicit OwnedReference(const PyObjectPointer<>& other) + : PyObjectPointer(nullptr) + { + T* op = other.borrow(); + TC(op); + this->p = other.borrow(); + Py_XINCREF(this->p); + } + + // It would be good to make use of the C++11 distinction + // between move and copy operations, e.g., constructing from a + // pointer should be a move operation. + // In the common case of ``OwnedObject x = Py_SomeFunction()``, + // the call to the copy constructor will be elided completely. + OwnedReference(const OwnedReference& other) + : PyObjectPointer(other.p) + { + Py_XINCREF(this->p); + } + + static OwnedReference None() + { + Py_INCREF(Py_None); + return OwnedReference(Py_None); + } + + // We can assign from exactly our type without any extra checking + OwnedReference& operator=(const OwnedReference& other) + { + Py_XINCREF(other.p); + const T* tmp = this->p; + this->p = other.p; + Py_XDECREF(tmp); + return *this; + } + + OwnedReference& operator=(const BorrowedReference other) + { + return this->operator=(other.borrow()); + } + + OwnedReference& operator=(T* const other) + { + TC(other); + Py_XINCREF(other); + T* tmp = this->p; + this->p = other; + Py_XDECREF(tmp); + return *this; + } + + // We can assign from an arbitrary reference type + // if it passes our check. + template + OwnedReference& operator=(const OwnedReference& other) + { + X* op = other.borrow(); + TC(op); + return this->operator=(reinterpret_cast(op)); + } + + inline void steal(T* other) + { + assert(this->p == nullptr); + TC(other); + this->p = other; + } + + T* relinquish_ownership() + { + T* result = this->p; + this->p = nullptr; + return result; + } + + T* acquire() const + { + // Return a new reference. + // TODO: This may go away when we have reference objects + // throughout the code. + Py_XINCREF(this->p); + return this->p; + } + + // Nothing else declares a destructor, we're the leaf, so we + // should be able to get away without virtual. + ~OwnedReference() + { + Py_CLEAR(this->p); + } + + void CLEAR() + { + Py_CLEAR(this->p); + assert(this->p == nullptr); + } + }; + + static inline + void operator<<=(PyObject*& target, OwnedObject& o) + { + target = o.relinquish_ownership(); + } + + + class NewReference : public OwnedObject + { + private: + G_NO_COPIES_OF_CLS(NewReference); + public: + // Consumes the reference. Only use this + // for API return values. 
+ NewReference(PyObject* it) : OwnedObject(it) + { + } + }; + + class NewDictReference : public NewReference + { + private: + G_NO_COPIES_OF_CLS(NewDictReference); + public: + NewDictReference() : NewReference(PyDict_New()) + { + if (!this->p) { + throw PyErrOccurred(); + } + } + + void SetItem(const char* const key, PyObject* value) + { + Require(PyDict_SetItemString(this->p, key, value)); + } + + void SetItem(const PyObjectPointer<>& key, PyObject* value) + { + Require(PyDict_SetItem(this->p, key.borrow_o(), value)); + } + }; + + template + class _OwnedGreenlet: public OwnedReference + { + private: + protected: + _OwnedGreenlet(T* it) : OwnedReference(it) + {} + + public: + _OwnedGreenlet() : OwnedReference() + {} + + _OwnedGreenlet(const _OwnedGreenlet& other) : OwnedReference(other) + { + } + _OwnedGreenlet(OwnedMainGreenlet& other) : + OwnedReference(reinterpret_cast(other.acquire())) + { + } + _OwnedGreenlet(const BorrowedGreenlet& other); + // Steals a reference. + static _OwnedGreenlet consuming(PyGreenlet* it) + { + return _OwnedGreenlet(reinterpret_cast(it)); + } + + inline _OwnedGreenlet& operator=(const OwnedGreenlet& other) + { + return this->operator=(other.borrow()); + } + + inline _OwnedGreenlet& operator=(const BorrowedGreenlet& other); + + _OwnedGreenlet& operator=(const OwnedMainGreenlet& other) + { + PyGreenlet* owned = other.acquire(); + Py_XDECREF(this->p); + this->p = reinterpret_cast(owned); + return *this; + } + + _OwnedGreenlet& operator=(T* const other) + { + OwnedReference::operator=(other); + return *this; + } + + T* relinquish_ownership() + { + T* result = this->p; + this->p = nullptr; + return result; + } + + PyObject* relinquish_ownership_o() + { + return reinterpret_cast(relinquish_ownership()); + } + + inline Greenlet* operator->() const noexcept; + inline operator Greenlet*() const noexcept; + }; + + template + class BorrowedReference : public PyObjectPointer + { + public: + // Allow implicit creation from PyObject* pointers as we + // transition to using these classes. Also allow automatic + // conversion to PyObject* for passing to C API calls and even + // for Py_INCREF/DECREF, because we ourselves do no memory management. 
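// An illustrative pair of conversions (hypothetical names):
//     BorrowedObject name(some_object);        // implicit from PyObject*
//     PyObject_HasAttr(self, name);            // implicit back to PyObject*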
+ BorrowedReference(T* it) : PyObjectPointer(it) + {} + + BorrowedReference(const PyObjectPointer& ref) : PyObjectPointer(ref.borrow()) + {} + + BorrowedReference() : PyObjectPointer(nullptr) + {} + + operator T*() const + { + return this->p; + } + }; + + typedef BorrowedReference BorrowedObject; + //typedef BorrowedReference BorrowedGreenlet; + + template + class _BorrowedGreenlet : public BorrowedReference + { + public: + _BorrowedGreenlet() : + BorrowedReference(nullptr) + {} + + _BorrowedGreenlet(T* it) : + BorrowedReference(it) + {} + + _BorrowedGreenlet(const BorrowedObject& it); + + _BorrowedGreenlet(const OwnedGreenlet& it) : + BorrowedReference(it.borrow()) + {} + + _BorrowedGreenlet& operator=(const BorrowedObject& other); + + // We get one of these for PyGreenlet, but one for PyObject + // is handy as well + operator PyObject*() const + { + return reinterpret_cast(this->p); + } + Greenlet* operator->() const noexcept; + operator Greenlet*() const noexcept; + }; + + typedef _BorrowedGreenlet BorrowedGreenlet; + + template + _OwnedGreenlet::_OwnedGreenlet(const BorrowedGreenlet& other) + : OwnedReference(reinterpret_cast(other.borrow())) + { + Py_XINCREF(this->p); + } + + + class BorrowedMainGreenlet + : public _BorrowedGreenlet + { + public: + BorrowedMainGreenlet(const OwnedMainGreenlet& it) : + _BorrowedGreenlet(it.borrow()) + {} + BorrowedMainGreenlet(PyGreenlet* it=nullptr) + : _BorrowedGreenlet(it) + {} + }; + + template + _OwnedGreenlet& _OwnedGreenlet::operator=(const BorrowedGreenlet& other) + { + return this->operator=(other.borrow()); + } + + + class ImmortalObject : public PyObjectPointer<> + { + private: + G_NO_ASSIGNMENT_OF_CLS(ImmortalObject); + public: + explicit ImmortalObject(PyObject* it) : PyObjectPointer<>(it) + { + } + + ImmortalObject(const ImmortalObject& other) + : PyObjectPointer<>(other.p) + { + + } + + /** + * Become the new owner of the object. Does not change the + * reference count. + */ + ImmortalObject& operator=(PyObject* it) + { + assert(this->p == nullptr); + this->p = it; + return *this; + } + + static ImmortalObject consuming(PyObject* it) + { + return ImmortalObject(it); + } + + inline operator PyObject*() const + { + return this->p; + } + }; + + class ImmortalString : public ImmortalObject + { + private: + G_NO_COPIES_OF_CLS(ImmortalString); + const char* str; + public: + ImmortalString(const char* const str) : + ImmortalObject(str ? Require(PyUnicode_InternFromString(str)) : nullptr) + { + this->str = str; + } + + inline ImmortalString& operator=(const char* const str) + { + if (!this->p) { + this->p = Require(PyUnicode_InternFromString(str)); + this->str = str; + } + else { + assert(this->str == str); + } + return *this; + } + + inline operator std::string() const + { + return this->str; + } + + }; + + class ImmortalEventName : public ImmortalString + { + private: + G_NO_COPIES_OF_CLS(ImmortalEventName); + public: + ImmortalEventName(const char* const str) : ImmortalString(str) + {} + }; + + class ImmortalException : public ImmortalObject + { + private: + G_NO_COPIES_OF_CLS(ImmortalException); + public: + ImmortalException(const char* const name, PyObject* base=nullptr) : + ImmortalObject(name + // Python 2.7 isn't const correct + ? 
Require(PyErr_NewException((char*)name, base, nullptr)) + : nullptr) + {} + + inline bool PyExceptionMatches() const + { + return PyErr_ExceptionMatches(this->p) > 0; + } + + }; + + template + inline OwnedObject PyObjectPointer::PyStr() const noexcept + { + if (!this->p) { + return OwnedObject(); + } + return OwnedObject::consuming(PyObject_Str(reinterpret_cast(this->p))); + } + + template + inline const std::string PyObjectPointer::as_str() const noexcept + { + // NOTE: This is not Python exception safe. + if (this->p) { + // The Python APIs return a cached char* value that's only valid + // as long as the original object stays around, and we're + // about to (probably) toss it. Hence the copy to std::string. + OwnedObject py_str = this->PyStr(); + if (!py_str) { + return "(nil)"; + } + return PyUnicode_AsUTF8(py_str.borrow()); + } + return "(nil)"; + } + + template + inline OwnedObject PyObjectPointer::PyGetAttr(const ImmortalObject& name) const noexcept + { + assert(this->p); + return OwnedObject::consuming(PyObject_GetAttr(reinterpret_cast(this->p), name)); + } + + template + inline OwnedObject PyObjectPointer::PyRequireAttr(const char* const name) const + { + assert(this->p); + return OwnedObject::consuming(Require(PyObject_GetAttrString(this->p, name), name)); + } + + template + inline OwnedObject PyObjectPointer::PyRequireAttr(const ImmortalString& name) const + { + assert(this->p); + return OwnedObject::consuming(Require( + PyObject_GetAttr( + reinterpret_cast(this->p), + name + ), + name + )); + } + + template + inline OwnedObject PyObjectPointer::PyCall(const BorrowedObject& arg) const + { + return this->PyCall(arg.borrow()); + } + + template + inline OwnedObject PyObjectPointer::PyCall(PyGreenlet* arg) const + { + return this->PyCall(reinterpret_cast(arg)); + } + + template + inline OwnedObject PyObjectPointer::PyCall(PyObject* arg) const + { + assert(this->p); + return OwnedObject::consuming(PyObject_CallFunctionObjArgs(this->p, arg, NULL)); + } + + template + inline OwnedObject PyObjectPointer::PyCall(const BorrowedObject args, + const BorrowedObject kwargs) const + { + assert(this->p); + return OwnedObject::consuming(PyObject_Call(this->p, args, kwargs)); + } + + template + inline OwnedObject PyObjectPointer::PyCall(const OwnedObject& args, + const OwnedObject& kwargs) const + { + assert(this->p); + return OwnedObject::consuming(PyObject_Call(this->p, args.borrow(), kwargs.borrow())); + } + + inline void + ListChecker(void * p) + { + if (!p) { + return; + } + if (!PyList_Check(p)) { + throw TypeError("Expected a list"); + } + } + + class OwnedList : public OwnedReference + { + private: + G_NO_ASSIGNMENT_OF_CLS(OwnedList); + public: + // TODO: Would like to use move. + explicit OwnedList(const OwnedObject& other) + : OwnedReference(other) + { + } + + OwnedList& operator=(const OwnedObject& other) + { + if (other && PyList_Check(other.p)) { + // Valid list. Own a new reference to it, discard the + // reference to what we did own. + PyObject* new_ptr = other.p; + Py_INCREF(new_ptr); + Py_XDECREF(this->p); + this->p = new_ptr; + } + else { + // Either the other object was NULL (an error) or it + // wasn't a list. Either way, we're now invalidated. 
+ Py_XDECREF(this->p); + this->p = nullptr; + } + return *this; + } + + inline bool empty() const + { + return PyList_GET_SIZE(p) == 0; + } + + inline Py_ssize_t size() const + { + return PyList_GET_SIZE(p); + } + + inline BorrowedObject at(const Py_ssize_t index) const + { + return PyList_GET_ITEM(p, index); + } + + inline void clear() + { + PyList_SetSlice(p, 0, PyList_GET_SIZE(p), NULL); + } + }; + + // Use this to represent the module object used at module init + // time. + // This could either be a borrowed (Py2) or new (Py3) reference; + // either way, we don't want to do any memory management + // on it here, Python itself will handle that. + // XXX: Actually, that's not quite right. On Python 3, if an + // exception occurs before we return to the interpreter, this will + // leak; but all previous versions also had that problem. + class CreatedModule : public PyObjectPointer<> + { + private: + G_NO_COPIES_OF_CLS(CreatedModule); + public: + CreatedModule(PyModuleDef& mod_def) : PyObjectPointer<>( + Require(PyModule_Create(&mod_def))) + { + } + + // PyAddObject(): Add a reference to the object to the module. + // On return, the reference count of the object is unchanged. + // + // The docs warn that PyModule_AddObject only steals the + // reference on success, so if it fails after we've incref'd + // or allocated, we're responsible for the decref. + void PyAddObject(const char* name, const long new_bool) + { + OwnedObject p = OwnedObject::consuming(Require(PyBool_FromLong(new_bool))); + this->PyAddObject(name, p); + } + + void PyAddObject(const char* name, const OwnedObject& new_object) + { + // The caller already owns a reference they will decref + // when their variable goes out of scope, we still need to + // incref/decref. + this->PyAddObject(name, new_object.borrow()); + } + + void PyAddObject(const char* name, const ImmortalObject& new_object) + { + this->PyAddObject(name, new_object.borrow()); + } + + void PyAddObject(const char* name, PyTypeObject& type) + { + this->PyAddObject(name, reinterpret_cast(&type)); + } + + void PyAddObject(const char* name, PyObject* new_object) + { + Py_INCREF(new_object); + try { + Require(PyModule_AddObject(this->p, name, new_object)); + } + catch (const PyErrOccurred&) { + Py_DECREF(p); + throw; + } + } + }; + + class PyErrFetchParam : public PyObjectPointer<> + { + // Not an owned object, because we can't be initialized with + // one, and we only sometimes acquire ownership. + private: + G_NO_COPIES_OF_CLS(PyErrFetchParam); + public: + // To allow declaring these and passing them to + // PyErr_Fetch we implement the empty constructor, + // and the address operator. + PyErrFetchParam() : PyObjectPointer<>(nullptr) + { + } + + PyObject** operator&() + { + return &this->p; + } + + // This allows us to pass one directly without the &, + // BUT it has higher precedence than the bool operator + // if it's not explicit. + operator PyObject**() + { + return &this->p; + } + + // We don't want to be able to pass these to Py_DECREF and + // such so we don't have the implicit PyObject* conversion. + + inline PyObject* relinquish_ownership() + { + PyObject* result = this->p; + this->p = nullptr; + return result; + } + + ~PyErrFetchParam() + { + Py_XDECREF(p); + } + }; + + class OwnedErrPiece : public OwnedObject + { + private: + + public: + // Unlike OwnedObject, this increments the refcount. 
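// (It must: PyErrPieces below builds these from borrowed pointers,
// e.g. the type/value/traceback arguments to throw(), and has to keep
// them alive while it normalizes them.)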
+ OwnedErrPiece(PyObject* p=nullptr) : OwnedObject(p) + { + this->acquire(); + } + + PyObject** operator&() + { + return &this->p; + } + + inline operator PyObject*() const + { + return this->p; + } + + operator PyTypeObject*() const + { + return reinterpret_cast(this->p); + } + }; + + class PyErrPieces + { + private: + OwnedErrPiece type; + OwnedErrPiece instance; + OwnedErrPiece traceback; + bool restored; + public: + // Takes new references; if we're destroyed before + // restoring the error, we drop the references. + PyErrPieces(PyObject* t, PyObject* v, PyObject* tb) : + type(t), + instance(v), + traceback(tb), + restored(0) + { + this->normalize(); + } + + PyErrPieces() : + restored(0) + { + // PyErr_Fetch transfers ownership to us, so + // we don't actually need to INCREF; but we *do* + // need to DECREF if we're not restored. + PyErrFetchParam t, v, tb; + PyErr_Fetch(&t, &v, &tb); + type.steal(t.relinquish_ownership()); + instance.steal(v.relinquish_ownership()); + traceback.steal(tb.relinquish_ownership()); + } + + void PyErrRestore() + { + // can only do this once + assert(!this->restored); + this->restored = true; + PyErr_Restore( + this->type.relinquish_ownership(), + this->instance.relinquish_ownership(), + this->traceback.relinquish_ownership()); + assert(!this->type && !this->instance && !this->traceback); + } + + private: + void normalize() + { + // First, check the traceback argument, replacing None, + // with NULL + if (traceback.is_None()) { + traceback = nullptr; + } + + if (traceback && !PyTraceBack_Check(traceback.borrow())) { + throw PyErrOccurred(PyExc_TypeError, + "throw() third argument must be a traceback object"); + } + + if (PyExceptionClass_Check(type)) { + // If we just had a type, we'll now have a type and + // instance. + // The type's refcount will have gone up by one + // because of the instance and the instance will have + // a refcount of one. Either way, we owned, and still + // do own, exactly one reference. + PyErr_NormalizeException(&type, &instance, &traceback); + + } + else if (PyExceptionInstance_Check(type)) { + /* Raising an instance --- usually that means an + object that is a subclass of BaseException, but on + Python 2, that can also mean an arbitrary old-style + object. The value should be a dummy. */ + if (instance && !instance.is_None()) { + throw PyErrOccurred( + PyExc_TypeError, + "instance exception may not have a separate value"); + } + /* Normalize to raise , */ + this->instance = this->type; + this->type = PyExceptionInstance_Class(instance.borrow()); + + /* + It would be tempting to do this: + + Py_ssize_t type_count = Py_REFCNT(Py_TYPE(instance.borrow())); + this->type = PyExceptionInstance_Class(instance.borrow()); + assert(this->type.REFCNT() == type_count + 1); + + But that doesn't work on Python 2 in the case of + old-style instances: The result of Py_TYPE is going to + be the global shared that all + old-style classes have, while the return of Instance_Class() + will be the Python-level class object. The two are unrelated. + */ + } + else { + /* Not something you can raise. throw() fails. */ + PyErr_Format(PyExc_TypeError, + "exceptions must be classes, or instances, not %s", + Py_TYPE(type.borrow())->tp_name); + throw PyErrOccurred(); + } + } + }; + + // PyArg_Parse's O argument returns a borrowed reference. 
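// Illustrative use (hypothetical format string):
//     PyArgParseParam run;
//     PyArgParseParam nparent;
//     if (!PyArg_ParseTuple(args, "|OO:green_init", &run, &nparent))
//         return -1;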
+ class PyArgParseParam : public BorrowedObject + { + private: + G_NO_COPIES_OF_CLS(PyArgParseParam); + public: + explicit PyArgParseParam(PyObject* p=nullptr) : BorrowedObject(p) + { + } + + inline PyObject** operator&() + { + return &this->p; + } + }; + +};}; + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_slp_switch.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_slp_switch.hpp new file mode 100644 index 000000000..bd4b7ae1a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_slp_switch.hpp @@ -0,0 +1,99 @@ +#ifndef GREENLET_SLP_SWITCH_HPP +#define GREENLET_SLP_SWITCH_HPP + +#include "greenlet_compiler_compat.hpp" +#include "greenlet_refs.hpp" + +/* + * the following macros are spliced into the OS/compiler + * specific code, in order to simplify maintenance. + */ +// We can save about 10% of the time it takes to switch greenlets if +// we thread the thread state through the slp_save_state() and the +// following slp_restore_state() calls from +// slp_switch()->g_switchstack() (which already needs to access it). +// +// However: +// +// that requires changing the prototypes and implementations of the +// switching functions. If we just change the prototype of +// slp_switch() to accept the argument and update the macros, without +// changing the implementation of slp_switch(), we get crashes on +// 64-bit Linux and 32-bit x86 (for reasons that aren't 100% clear); +// on the other hand, 64-bit macOS seems to be fine. Also, 64-bit +// windows is an issue because slp_switch is written fully in assembly +// and currently ignores its argument so some code would have to be +// adjusted there to pass the argument on to the +// ``slp_save_state_asm()`` function (but interestingly, because of +// the calling convention, the extra argument is just ignored and +// things function fine, albeit slower, if we just modify +// ``slp_save_state_asm`()` to fetch the pointer to pass to the +// macro.) +// +// Our compromise is to use a *glabal*, untracked, weak, pointer +// to the necessary thread state during the process of switching only. +// This is safe because we're protected by the GIL, and if we're +// running this code, the thread isn't exiting. This also nets us a +// 10-12% speed improvement. + +static greenlet::Greenlet* volatile switching_thread_state = nullptr; + + +extern "C" { +static int GREENLET_NOINLINE(slp_save_state_trampoline)(char* stackref); +static void GREENLET_NOINLINE(slp_restore_state_trampoline)(); +} + + +#define SLP_SAVE_STATE(stackref, stsizediff) \ +do { \ + assert(switching_thread_state); \ + stackref += STACK_MAGIC; \ + if (slp_save_state_trampoline((char*)stackref)) \ + return -1; \ + if (!switching_thread_state->active()) \ + return 1; \ + stsizediff = switching_thread_state->stack_start() - (char*)stackref; \ +} while (0) + +#define SLP_RESTORE_STATE() slp_restore_state_trampoline() + +#define SLP_EVAL +extern "C" { +#define slp_switch GREENLET_NOINLINE(slp_switch) +#include "slp_platformselect.h" +} +#undef slp_switch + +#ifndef STACK_MAGIC +# error \ + "greenlet needs to be ported to this platform, or taught how to detect your compiler properly." +#endif /* !STACK_MAGIC */ + + + +#ifdef EXTERNAL_ASM +/* CCP addition: Make these functions, to be called from assembler. + * The token include file for the given platform should enable the + * EXTERNAL_ASM define so that this is included. 
+ */ +extern "C" { +intptr_t +slp_save_state_asm(intptr_t* ref) +{ + intptr_t diff; + SLP_SAVE_STATE(ref, diff); + return diff; +} + +void +slp_restore_state_asm(void) +{ + SLP_RESTORE_STATE(); +} + +extern int slp_switch(void); +}; +#endif + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_thread_support.hpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_thread_support.hpp new file mode 100644 index 000000000..3ded7d2b7 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/greenlet_thread_support.hpp @@ -0,0 +1,31 @@ +#ifndef GREENLET_THREAD_SUPPORT_HPP +#define GREENLET_THREAD_SUPPORT_HPP + +/** + * Defines various utility functions to help greenlet integrate well + * with threads. This used to be needed when we supported Python + * 2.7 on Windows, which used a very old compiler. We wrote an + * alternative implementation using Python APIs and POSIX or Windows + * APIs, but that's no longer needed. So this file is a shadow of its + * former self --- but may be needed in the future. + */ + +#include +#include +#include + +#include "greenlet_compiler_compat.hpp" + +namespace greenlet { + typedef std::mutex Mutex; + typedef std::lock_guard LockGuard; + class LockInitError : public std::runtime_error + { + public: + LockInitError(const char* what) : std::runtime_error(what) + {}; + }; +}; + + +#endif /* GREENLET_THREAD_SUPPORT_HPP */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/setup_switch_x64_masm.cmd b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/setup_switch_x64_masm.cmd new file mode 100644 index 000000000..092859555 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/setup_switch_x64_masm.cmd @@ -0,0 +1,2 @@ +call "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" amd64 +ml64 /nologo /c /Fo switch_x64_masm.obj switch_x64_masm.asm diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_aarch64_gcc.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_aarch64_gcc.h new file mode 100644 index 000000000..058617c40 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_aarch64_gcc.h @@ -0,0 +1,124 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 07-Sep-16 Add clang support using x register naming. Fredrik Fornwall + * 13-Apr-13 Add support for strange GCC caller-save decisions + * 08-Apr-13 File creation. Michael Matz + * + * NOTES + * + * Simply save all callee saved registers + * + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL +#define STACK_MAGIC 0 +#define REGS_TO_SAVE "x19", "x20", "x21", "x22", "x23", "x24", "x25", "x26", \ + "x27", "x28", "x30" /* aka lr */, \ + "v8", "v9", "v10", "v11", \ + "v12", "v13", "v14", "v15" + +/* + * Recall: + asm asm-qualifiers ( AssemblerTemplate + : OutputOperands + [ : InputOperands + [ : Clobbers ] ]) + + or (if asm-qualifiers contains 'goto') + + asm asm-qualifiers ( AssemblerTemplate + : OutputOperands + : InputOperands + : Clobbers + : GotoLabels) + + and OutputOperands are + + [ [asmSymbolicName] ] constraint (cvariablename) + + When a name is given, refer to it as ``%[the name]``. 
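   For instance, the zeroing instruction in slp_switch below could
   equivalently be written with a symbolic operand name:

       __asm__ volatile ("mov %w[ret], #0" : [ret] "=r" (err));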
+   When not given, ``%i`` where ``i`` is the zero-based index.
+
+   constraints starting with ``=`` mean only writing; ``+`` means
+   reading and writing.
+
+   This is followed by ``r`` (must be register) or ``m`` (must be memory)
+   and these can be combined.
+
+   The ``cvariablename`` is actually an lvalue expression.
+
+   In AArch64, 31 general purpose registers. If named X0... they are
+   64-bit. If named W0... they are the bottom 32 bits of the
+   corresponding 64-bit register.
+
+   XZR and WZR are hardcoded to 0, and ignore writes.
+
+   Arguments are in X0..X7. C++ uses X0 for ``this``. X0 holds simple return
+   values (?)
+
+   Whenever a W register is written, the top half of the X register is zeroed.
+ */
+
+static int
+slp_switch(void)
+{
+    int err;
+    void *fp;
+    /* Windows uses a 32-bit long on a 64-bit platform, unlike the rest of
+       the world, and in theory we can be compiled with GCC/llvm on 64-bit
+       windows. So we need a fixed-width type.
+    */
+    int64_t *stackref, stsizediff;
+    __asm__ volatile ("" : : : REGS_TO_SAVE);
+    __asm__ volatile ("str x29, %0" : "=m"(fp) : : );
+    __asm__ ("mov %0, sp" : "=r" (stackref));
+    {
+        SLP_SAVE_STATE(stackref, stsizediff);
+        __asm__ volatile (
+            "add sp,sp,%0\n"
+            "add x29,x29,%0\n"
+            :
+            : "r" (stsizediff)
+            );
+        SLP_RESTORE_STATE();
+        /* SLP_SAVE_STATE macro contains some return statements
+           (of -1 and 1). It falls through only when
+           the return value of slp_save_state() is zero, which
+           is placed in x0.
+           In that case we (slp_switch) also want to return zero
+           (also in x0 of course).
+           Now, some GCC versions (seen with 4.8) think it's a
+           good idea to save/restore x0 around the call to
+           slp_restore_state(), instead of simply zeroing it
+           at the return below. But slp_restore_state
+           writes random values to the stack slot used for this
+           save/restore (from when it once was saved above in
+           SLP_SAVE_STATE, when it was still uninitialized), so
+           "restoring" that precious zero actually makes us
+           return random values. There are some ways to make
+           GCC not use that zero value in the normal return path
+           (e.g. making err volatile, but that costs a little
+           stack space), and the simplest is to call a function
+           that returns an unknown value (which happens to be zero),
+           so the saved/restored value is unused.
+
+           Thus, this line stores a 0 into the ``err`` variable
+           (which must be held in a register for this instruction,
+           of course). The ``w`` qualifier causes the instruction
+           to use W0 instead of X0, otherwise we get a warning
+           about a value size mismatch (because err is an int,
+           and aarch64 platforms are LP64: 32-bit int, 64-bit long
+           and pointer).
+ */ + __asm__ volatile ("mov %w0, #0" : "=r" (err)); + } + __asm__ volatile ("ldr x29, %0" : : "m" (fp) :); + __asm__ volatile ("" : : : REGS_TO_SAVE); + return err; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_alpha_unix.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_alpha_unix.h new file mode 100644 index 000000000..7e07abfc2 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_alpha_unix.h @@ -0,0 +1,30 @@ +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL +#define STACK_MAGIC 0 + +#define REGS_TO_SAVE "$9", "$10", "$11", "$12", "$13", "$14", "$15", \ + "$f2", "$f3", "$f4", "$f5", "$f6", "$f7", "$f8", "$f9" + +static int +slp_switch(void) +{ + int ret; + long *stackref, stsizediff; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("mov $30, %0" : "=r" (stackref) : ); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "addq $30, %0, $30\n\t" + : /* no outputs */ + : "r" (stsizediff) + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("mov $31, %0" : "=r" (ret) : ); + return ret; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_amd64_unix.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_amd64_unix.h new file mode 100644 index 000000000..d4701105f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_amd64_unix.h @@ -0,0 +1,87 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 3-May-13 Ralf Schmitt + * Add support for strange GCC caller-save decisions + * (ported from switch_aarch64_gcc.h) + * 18-Aug-11 Alexey Borzenkov + * Correctly save rbp, csr and cw + * 01-Apr-04 Hye-Shik Chang + * Ported from i386 to amd64. + * 24-Nov-02 Christian Tismer + * needed to add another magic constant to insure + * that f in slp_eval_frame(PyFrameObject *f) + * STACK_REFPLUS will probably be 1 in most cases. + * gets included into the saved stack area. + * 17-Sep-02 Christian Tismer + * after virtualizing stack save/restore, the + * stack size shrunk a bit. Needed to introduce + * an adjustment STACK_MAGIC per platform. + * 15-Sep-02 Gerd Woetzel + * slightly changed framework for spark + * 31-Avr-02 Armin Rigo + * Added ebx, esi and edi register-saves. + * 01-Mar-02 Samual M. Rushing + * Ported from i386. + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +/* #define STACK_MAGIC 3 */ +/* the above works fine with gcc 2.96, but 2.95.3 wants this */ +#define STACK_MAGIC 0 + +#define REGS_TO_SAVE "r12", "r13", "r14", "r15" + +static int +slp_switch(void) +{ + int err; + void* rbp; + void* rbx; + unsigned int csr; + unsigned short cw; + /* This used to be declared 'register', but that does nothing in + modern compilers and is explicitly forbidden in some new + standards. 
*/ + long *stackref, stsizediff; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("fstcw %0" : "=m" (cw)); + __asm__ volatile ("stmxcsr %0" : "=m" (csr)); + __asm__ volatile ("movq %%rbp, %0" : "=m" (rbp)); + __asm__ volatile ("movq %%rbx, %0" : "=m" (rbx)); + __asm__ ("movq %%rsp, %0" : "=g" (stackref)); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "addq %0, %%rsp\n" + "addq %0, %%rbp\n" + : + : "r" (stsizediff) + ); + SLP_RESTORE_STATE(); + __asm__ volatile ("xorq %%rax, %%rax" : "=a" (err)); + } + __asm__ volatile ("movq %0, %%rbx" : : "m" (rbx)); + __asm__ volatile ("movq %0, %%rbp" : : "m" (rbp)); + __asm__ volatile ("ldmxcsr %0" : : "m" (csr)); + __asm__ volatile ("fldcw %0" : : "m" (cw)); + __asm__ volatile ("" : : : REGS_TO_SAVE); + return err; +} + +#endif + +/* + * further self-processing support + */ + +/* + * if you want to add self-inspection tools, place them + * here. See the x86_msvc for the necessary defines. + * These features are highly experimental und not + * essential yet. + */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm32_gcc.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm32_gcc.h new file mode 100644 index 000000000..655003aa1 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm32_gcc.h @@ -0,0 +1,79 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 14-Aug-06 File creation. Ported from Arm Thumb. Sylvain Baro + * 3-Sep-06 Commented out saving of r1-r3 (r4 already commented out) as I + * read that these do not need to be saved. Also added notes and + * errors related to the frame pointer. Richard Tew. + * + * NOTES + * + * It is not possible to detect if fp is used or not, so the supplied + * switch function needs to support it, so that you can remove it if + * it does not apply to you. + * + * POSSIBLE ERRORS + * + * "fp cannot be used in asm here" + * + * - Try commenting out "fp" in REGS_TO_SAVE. 
+ * + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL +#define STACK_MAGIC 0 +#define REG_SP "sp" +#define REG_SPSP "sp,sp" +#ifdef __thumb__ +#define REG_FP "r7" +#define REG_FPFP "r7,r7" +#define REGS_TO_SAVE_GENERAL "r4", "r5", "r6", "r8", "r9", "r10", "r11", "lr" +#else +#define REG_FP "fp" +#define REG_FPFP "fp,fp" +#define REGS_TO_SAVE_GENERAL "r4", "r5", "r6", "r7", "r8", "r9", "r10", "lr" +#endif +#if defined(__SOFTFP__) +#define REGS_TO_SAVE REGS_TO_SAVE_GENERAL +#elif defined(__VFP_FP__) +#define REGS_TO_SAVE REGS_TO_SAVE_GENERAL, "d8", "d9", "d10", "d11", \ + "d12", "d13", "d14", "d15" +#elif defined(__MAVERICK__) +#define REGS_TO_SAVE REGS_TO_SAVE_GENERAL, "mvf4", "mvf5", "mvf6", "mvf7", \ + "mvf8", "mvf9", "mvf10", "mvf11", \ + "mvf12", "mvf13", "mvf14", "mvf15" +#else +#define REGS_TO_SAVE REGS_TO_SAVE_GENERAL, "f4", "f5", "f6", "f7" +#endif + +static int +#ifdef __GNUC__ +__attribute__((optimize("no-omit-frame-pointer"))) +#endif +slp_switch(void) +{ + void *fp; + int *stackref, stsizediff; + int result; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("mov r0," REG_FP "\n\tstr r0,%0" : "=m" (fp) : : "r0"); + __asm__ ("mov %0," REG_SP : "=r" (stackref)); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "add " REG_SPSP ",%0\n" + "add " REG_FPFP ",%0\n" + : + : "r" (stsizediff) + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("ldr r0,%1\n\tmov " REG_FP ",r0\n\tmov %0, #0" : "=r" (result) : "m" (fp) : "r0"); + __asm__ volatile ("" : : : REGS_TO_SAVE); + return result; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm32_ios.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm32_ios.h new file mode 100644 index 000000000..9e640e15f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm32_ios.h @@ -0,0 +1,67 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 31-May-15 iOS support. Ported from arm32. Proton + * + * NOTES + * + * It is not possible to detect if fp is used or not, so the supplied + * switch function needs to support it, so that you can remove it if + * it does not apply to you. + * + * POSSIBLE ERRORS + * + * "fp cannot be used in asm here" + * + * - Try commenting out "fp" in REGS_TO_SAVE. 
+ * + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +#define STACK_MAGIC 0 +#define REG_SP "sp" +#define REG_SPSP "sp,sp" +#define REG_FP "r7" +#define REG_FPFP "r7,r7" +#define REGS_TO_SAVE_GENERAL "r4", "r5", "r6", "r8", "r10", "r11", "lr" +#define REGS_TO_SAVE REGS_TO_SAVE_GENERAL, "d8", "d9", "d10", "d11", \ + "d12", "d13", "d14", "d15" + +static int +#ifdef __GNUC__ +__attribute__((optimize("no-omit-frame-pointer"))) +#endif +slp_switch(void) +{ + void *fp; + int *stackref, stsizediff, result; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("str " REG_FP ",%0" : "=m" (fp)); + __asm__ ("mov %0," REG_SP : "=r" (stackref)); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "add " REG_SPSP ",%0\n" + "add " REG_FPFP ",%0\n" + : + : "r" (stsizediff) + : REGS_TO_SAVE /* Clobber registers, force compiler to + * recalculate address of void *fp from REG_SP or REG_FP */ + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ( + "ldr " REG_FP ", %1\n\t" + "mov %0, #0" + : "=r" (result) + : "m" (fp) + : REGS_TO_SAVE /* Force compiler to restore saved registers after this */ + ); + return result; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm64_masm.asm b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm64_masm.asm new file mode 100644 index 000000000..29f9c225e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm64_masm.asm @@ -0,0 +1,53 @@ + AREA switch_arm64_masm, CODE, READONLY; + GLOBAL slp_switch [FUNC] + EXTERN slp_save_state_asm + EXTERN slp_restore_state_asm + +slp_switch + ; push callee saved registers to stack + stp x19, x20, [sp, #-16]! + stp x21, x22, [sp, #-16]! + stp x23, x24, [sp, #-16]! + stp x25, x26, [sp, #-16]! + stp x27, x28, [sp, #-16]! + stp x29, x30, [sp, #-16]! + stp d8, d9, [sp, #-16]! + stp d10, d11, [sp, #-16]! + stp d12, d13, [sp, #-16]! + stp d14, d15, [sp, #-16]! + + ; call slp_save_state_asm with stack pointer + mov x0, sp + bl slp_save_state_asm + + ; early return for return value of 1 and -1 + cmp x0, #-1 + b.eq RETURN + cmp x0, #1 + b.eq RETURN + + ; increment stack and frame pointer + add sp, sp, x0 + add x29, x29, x0 + + bl slp_restore_state_asm + + ; store return value for successful completion of routine + mov x0, #0 + +RETURN + ; pop registers from stack + ldp d14, d15, [sp], #16 + ldp d12, d13, [sp], #16 + ldp d10, d11, [sp], #16 + ldp d8, d9, [sp], #16 + ldp x29, x30, [sp], #16 + ldp x27, x28, [sp], #16 + ldp x25, x26, [sp], #16 + ldp x23, x24, [sp], #16 + ldp x21, x22, [sp], #16 + ldp x19, x20, [sp], #16 + + ret + + END diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm64_masm.obj b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm64_masm.obj new file mode 100644 index 000000000..f6f220e43 Binary files /dev/null and b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm64_masm.obj differ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm64_msvc.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm64_msvc.h new file mode 100644 index 000000000..7ab7f45be --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_arm64_msvc.h @@ -0,0 +1,17 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 21-Oct-21 Niyas Sait + * First version to enable win/arm64 support. 
+ */ + +#define STACK_REFPLUS 1 +#define STACK_MAGIC 0 + +/* Use the generic support for an external assembly language slp_switch function. */ +#define EXTERNAL_ASM + +#ifdef SLP_EVAL +/* This always uses the external masm assembly file. */ +#endif \ No newline at end of file diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_csky_gcc.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_csky_gcc.h new file mode 100644 index 000000000..ac469d3a0 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_csky_gcc.h @@ -0,0 +1,48 @@ +#ifdef SLP_EVAL +#define STACK_MAGIC 0 +#define REG_FP "r8" +#ifdef __CSKYABIV2__ +#define REGS_TO_SAVE_GENERAL "r4", "r5", "r6", "r7", "r9", "r10", "r11", "r15",\ + "r16", "r17", "r18", "r19", "r20", "r21", "r22",\ + "r23", "r24", "r25" + +#if defined (__CSKY_HARD_FLOAT__) || (__CSKY_VDSP__) +#define REGS_TO_SAVE REGS_TO_SAVE_GENERAL, "vr8", "vr9", "vr10", "vr11", "vr12",\ + "vr13", "vr14", "vr15" +#else +#define REGS_TO_SAVE REGS_TO_SAVE_GENERAL +#endif +#else +#define REGS_TO_SAVE "r9", "r10", "r11", "r12", "r13", "r15" +#endif + + +static int +#ifdef __GNUC__ +__attribute__((optimize("no-omit-frame-pointer"))) +#endif +slp_switch(void) +{ + int *stackref, stsizediff; + int result; + + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ ("mov %0, sp" : "=r" (stackref)); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "addu sp,%0\n" + "addu "REG_FP",%0\n" + : + : "r" (stsizediff) + ); + + SLP_RESTORE_STATE(); + } + __asm__ volatile ("movi %0, 0" : "=r" (result)); + __asm__ volatile ("" : : : REGS_TO_SAVE); + + return result; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_loongarch64_linux.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_loongarch64_linux.h new file mode 100644 index 000000000..9eaf34ef4 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_loongarch64_linux.h @@ -0,0 +1,31 @@ +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL +#define STACK_MAGIC 0 + +#define REGS_TO_SAVE "s0", "s1", "s2", "s3", "s4", "s5", \ + "s6", "s7", "s8", "fp", \ + "f24", "f25", "f26", "f27", "f28", "f29", "f30", "f31" + +static int +slp_switch(void) +{ + int ret; + long *stackref, stsizediff; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("move %0, $sp" : "=r" (stackref) : ); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "add.d $sp, $sp, %0\n\t" + : /* no outputs */ + : "r" (stsizediff) + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("move %0, $zero" : "=r" (ret) : ); + return ret; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_m68k_gcc.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_m68k_gcc.h new file mode 100644 index 000000000..da761c2da --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_m68k_gcc.h @@ -0,0 +1,38 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 2014-01-06 Andreas Schwab + * File created. 
+ */ + +#ifdef SLP_EVAL + +#define STACK_MAGIC 0 + +#define REGS_TO_SAVE "%d2", "%d3", "%d4", "%d5", "%d6", "%d7", \ + "%a2", "%a3", "%a4" + +static int +slp_switch(void) +{ + int err; + int *stackref, stsizediff; + void *fp, *a5; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("move.l %%fp, %0" : "=m"(fp)); + __asm__ volatile ("move.l %%a5, %0" : "=m"(a5)); + __asm__ ("move.l %%sp, %0" : "=r"(stackref)); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ("add.l %0, %%sp; add.l %0, %%fp" : : "r"(stsizediff)); + SLP_RESTORE_STATE(); + __asm__ volatile ("clr.l %0" : "=g" (err)); + } + __asm__ volatile ("move.l %0, %%a5" : : "m"(a5)); + __asm__ volatile ("move.l %0, %%fp" : : "m"(fp)); + __asm__ volatile ("" : : : REGS_TO_SAVE); + return err; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_mips_unix.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_mips_unix.h new file mode 100644 index 000000000..b9003e947 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_mips_unix.h @@ -0,0 +1,64 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 20-Sep-14 Matt Madison + * Re-code the saving of the gp register for MIPS64. + * 05-Jan-08 Thiemo Seufer + * Ported from ppc. + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +#define STACK_MAGIC 0 + +#define REGS_TO_SAVE "$16", "$17", "$18", "$19", "$20", "$21", "$22", \ + "$23", "$30" +static int +slp_switch(void) +{ + int err; + int *stackref, stsizediff; +#ifdef __mips64 + uint64_t gpsave; +#endif + __asm__ __volatile__ ("" : : : REGS_TO_SAVE); +#ifdef __mips64 + __asm__ __volatile__ ("sd $28,%0" : "=m" (gpsave) : : ); +#endif + __asm__ ("move %0, $29" : "=r" (stackref) : ); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ __volatile__ ( +#ifdef __mips64 + "daddu $29, %0\n" +#else + "addu $29, %0\n" +#endif + : /* no outputs */ + : "r" (stsizediff) + ); + SLP_RESTORE_STATE(); + } +#ifdef __mips64 + __asm__ __volatile__ ("ld $28,%0" : : "m" (gpsave) : ); +#endif + __asm__ __volatile__ ("" : : : REGS_TO_SAVE); + __asm__ __volatile__ ("move %0, $0" : "=r" (err)); + return err; +} + +#endif + +/* + * further self-processing support + */ + +/* + * if you want to add self-inspection tools, place them + * here. See the x86_msvc for the necessary defines. + * These features are highly experimental und not + * essential yet. + */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc64_aix.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc64_aix.h new file mode 100644 index 000000000..e7e0b8776 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc64_aix.h @@ -0,0 +1,103 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 16-Oct-20 Jesse Gorzinski + * Copied from Linux PPC64 implementation + * 04-Sep-18 Alexey Borzenkov + * Workaround a gcc bug using manual save/restore of r30 + * 21-Mar-18 Tulio Magno Quites Machado Filho + * Added r30 to the list of saved registers in order to fully comply with + * both ppc64 ELFv1 ABI and the ppc64le ELFv2 ABI, that classify this + * register as a nonvolatile register used for local variables. + * 21-Mar-18 Laszlo Boszormenyi + * Save r2 (TOC pointer) manually. + * 10-Dec-13 Ulrich Weigand + * Support ELFv2 ABI. Save float/vector registers. + * 09-Mar-12 Michael Ellerman + * 64-bit implementation, copied from 32-bit. 
+ * 07-Sep-05 (py-dev mailing list discussion) + * removed 'r31' from the register-saved. !!!! WARNING !!!! + * It means that this file can no longer be compiled statically! + * It is now only suitable as part of a dynamic library! + * 14-Jan-04 Bob Ippolito + * added cr2-cr4 to the registers to be saved. + * Open questions: Should we save FP registers? + * What about vector registers? + * Differences between darwin and unix? + * 24-Nov-02 Christian Tismer + * needed to add another magic constant to insure + * that f in slp_eval_frame(PyFrameObject *f) + * STACK_REFPLUS will probably be 1 in most cases. + * gets included into the saved stack area. + * 04-Oct-02 Gustavo Niemeyer + * Ported from MacOS version. + * 17-Sep-02 Christian Tismer + * after virtualizing stack save/restore, the + * stack size shrunk a bit. Needed to introduce + * an adjustment STACK_MAGIC per platform. + * 15-Sep-02 Gerd Woetzel + * slightly changed framework for sparc + * 29-Jun-02 Christian Tismer + * Added register 13-29, 31 saves. The same way as + * Armin Rigo did for the x86_unix version. + * This seems to be now fully functional! + * 04-Mar-02 Hye-Shik Chang + * Ported from i386. + * 31-Jul-12 Trevor Bowen + * Changed memory constraints to register only. + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +#define STACK_MAGIC 6 + +#if defined(__ALTIVEC__) +#define ALTIVEC_REGS \ + "v20", "v21", "v22", "v23", "v24", "v25", "v26", "v27", \ + "v28", "v29", "v30", "v31", +#else +#define ALTIVEC_REGS +#endif + +#define REGS_TO_SAVE "r14", "r15", "r16", "r17", "r18", "r19", "r20", \ + "r21", "r22", "r23", "r24", "r25", "r26", "r27", "r28", "r29", \ + "r31", \ + "fr14", "fr15", "fr16", "fr17", "fr18", "fr19", "fr20", "fr21", \ + "fr22", "fr23", "fr24", "fr25", "fr26", "fr27", "fr28", "fr29", \ + "fr30", "fr31", \ + ALTIVEC_REGS \ + "cr2", "cr3", "cr4" + +static int +slp_switch(void) +{ + int err; + long *stackref, stsizediff; + void * toc; + void * r30; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("std 2, %0" : "=m" (toc)); + __asm__ volatile ("std 30, %0" : "=m" (r30)); + __asm__ ("mr %0, 1" : "=r" (stackref) : ); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "mr 11, %0\n" + "add 1, 1, 11\n" + : /* no outputs */ + : "r" (stsizediff) + : "11" + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("ld 30, %0" : : "m" (r30)); + __asm__ volatile ("ld 2, %0" : : "m" (toc)); + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("li %0, 0" : "=r" (err)); + return err; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc64_linux.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc64_linux.h new file mode 100644 index 000000000..3c324d00c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc64_linux.h @@ -0,0 +1,105 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 04-Sep-18 Alexey Borzenkov + * Workaround a gcc bug using manual save/restore of r30 + * 21-Mar-18 Tulio Magno Quites Machado Filho + * Added r30 to the list of saved registers in order to fully comply with + * both ppc64 ELFv1 ABI and the ppc64le ELFv2 ABI, that classify this + * register as a nonvolatile register used for local variables. + * 21-Mar-18 Laszlo Boszormenyi + * Save r2 (TOC pointer) manually. + * 10-Dec-13 Ulrich Weigand + * Support ELFv2 ABI. Save float/vector registers. + * 09-Mar-12 Michael Ellerman + * 64-bit implementation, copied from 32-bit. 
+ * 07-Sep-05 (py-dev mailing list discussion) + * removed 'r31' from the register-saved. !!!! WARNING !!!! + * It means that this file can no longer be compiled statically! + * It is now only suitable as part of a dynamic library! + * 14-Jan-04 Bob Ippolito + * added cr2-cr4 to the registers to be saved. + * Open questions: Should we save FP registers? + * What about vector registers? + * Differences between darwin and unix? + * 24-Nov-02 Christian Tismer + * needed to add another magic constant to insure + * that f in slp_eval_frame(PyFrameObject *f) + * STACK_REFPLUS will probably be 1 in most cases. + * gets included into the saved stack area. + * 04-Oct-02 Gustavo Niemeyer + * Ported from MacOS version. + * 17-Sep-02 Christian Tismer + * after virtualizing stack save/restore, the + * stack size shrunk a bit. Needed to introduce + * an adjustment STACK_MAGIC per platform. + * 15-Sep-02 Gerd Woetzel + * slightly changed framework for sparc + * 29-Jun-02 Christian Tismer + * Added register 13-29, 31 saves. The same way as + * Armin Rigo did for the x86_unix version. + * This seems to be now fully functional! + * 04-Mar-02 Hye-Shik Chang + * Ported from i386. + * 31-Jul-12 Trevor Bowen + * Changed memory constraints to register only. + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +#if _CALL_ELF == 2 +#define STACK_MAGIC 4 +#else +#define STACK_MAGIC 6 +#endif + +#if defined(__ALTIVEC__) +#define ALTIVEC_REGS \ + "v20", "v21", "v22", "v23", "v24", "v25", "v26", "v27", \ + "v28", "v29", "v30", "v31", +#else +#define ALTIVEC_REGS +#endif + +#define REGS_TO_SAVE "r14", "r15", "r16", "r17", "r18", "r19", "r20", \ + "r21", "r22", "r23", "r24", "r25", "r26", "r27", "r28", "r29", \ + "r31", \ + "fr14", "fr15", "fr16", "fr17", "fr18", "fr19", "fr20", "fr21", \ + "fr22", "fr23", "fr24", "fr25", "fr26", "fr27", "fr28", "fr29", \ + "fr30", "fr31", \ + ALTIVEC_REGS \ + "cr2", "cr3", "cr4" + +static int +slp_switch(void) +{ + int err; + long *stackref, stsizediff; + void * toc; + void * r30; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("std 2, %0" : "=m" (toc)); + __asm__ volatile ("std 30, %0" : "=m" (r30)); + __asm__ ("mr %0, 1" : "=r" (stackref) : ); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "mr 11, %0\n" + "add 1, 1, 11\n" + : /* no outputs */ + : "r" (stsizediff) + : "11" + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("ld 30, %0" : : "m" (r30)); + __asm__ volatile ("ld 2, %0" : : "m" (toc)); + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("li %0, 0" : "=r" (err)); + return err; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_aix.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_aix.h new file mode 100644 index 000000000..6d93c1326 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_aix.h @@ -0,0 +1,87 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 07-Mar-11 Floris Bruynooghe + * Do not add stsizediff to general purpose + * register (GPR) 30 as this is a non-volatile and + * unused by the PowerOpen Environment, therefore + * this was modifying a user register instead of the + * frame pointer (which does not seem to exist). + * 07-Sep-05 (py-dev mailing list discussion) + * removed 'r31' from the register-saved. !!!! WARNING !!!! + * It means that this file can no longer be compiled statically! + * It is now only suitable as part of a dynamic library! 
+ * 14-Jan-04 Bob Ippolito + * added cr2-cr4 to the registers to be saved. + * Open questions: Should we save FP registers? + * What about vector registers? + * Differences between darwin and unix? + * 24-Nov-02 Christian Tismer + * needed to add another magic constant to insure + * that f in slp_eval_frame(PyFrameObject *f) + * STACK_REFPLUS will probably be 1 in most cases. + * gets included into the saved stack area. + * 04-Oct-02 Gustavo Niemeyer + * Ported from MacOS version. + * 17-Sep-02 Christian Tismer + * after virtualizing stack save/restore, the + * stack size shrunk a bit. Needed to introduce + * an adjustment STACK_MAGIC per platform. + * 15-Sep-02 Gerd Woetzel + * slightly changed framework for sparc + * 29-Jun-02 Christian Tismer + * Added register 13-29, 31 saves. The same way as + * Armin Rigo did for the x86_unix version. + * This seems to be now fully functional! + * 04-Mar-02 Hye-Shik Chang + * Ported from i386. + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +#define STACK_MAGIC 3 + +/* !!!!WARNING!!!! need to add "r31" in the next line if this header file + * is meant to be compiled non-dynamically! + */ +#define REGS_TO_SAVE "r13", "r14", "r15", "r16", "r17", "r18", "r19", "r20", \ + "r21", "r22", "r23", "r24", "r25", "r26", "r27", "r28", "r29", \ + "cr2", "cr3", "cr4" +static int +slp_switch(void) +{ + int err; + int *stackref, stsizediff; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ ("mr %0, 1" : "=r" (stackref) : ); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "mr 11, %0\n" + "add 1, 1, 11\n" + : /* no outputs */ + : "r" (stsizediff) + : "11" + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("li %0, 0" : "=r" (err)); + return err; +} + +#endif + +/* + * further self-processing support + */ + +/* + * if you want to add self-inspection tools, place them + * here. See the x86_msvc for the necessary defines. + * These features are highly experimental und not + * essential yet. + */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_linux.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_linux.h new file mode 100644 index 000000000..e83ad70a5 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_linux.h @@ -0,0 +1,84 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 07-Sep-05 (py-dev mailing list discussion) + * removed 'r31' from the register-saved. !!!! WARNING !!!! + * It means that this file can no longer be compiled statically! + * It is now only suitable as part of a dynamic library! + * 14-Jan-04 Bob Ippolito + * added cr2-cr4 to the registers to be saved. + * Open questions: Should we save FP registers? + * What about vector registers? + * Differences between darwin and unix? + * 24-Nov-02 Christian Tismer + * needed to add another magic constant to insure + * that f in slp_eval_frame(PyFrameObject *f) + * STACK_REFPLUS will probably be 1 in most cases. + * gets included into the saved stack area. + * 04-Oct-02 Gustavo Niemeyer + * Ported from MacOS version. + * 17-Sep-02 Christian Tismer + * after virtualizing stack save/restore, the + * stack size shrunk a bit. Needed to introduce + * an adjustment STACK_MAGIC per platform. + * 15-Sep-02 Gerd Woetzel + * slightly changed framework for sparc + * 29-Jun-02 Christian Tismer + * Added register 13-29, 31 saves. The same way as + * Armin Rigo did for the x86_unix version. 
+ * This seems to be now fully functional! + * 04-Mar-02 Hye-Shik Chang + * Ported from i386. + * 31-Jul-12 Trevor Bowen + * Changed memory constraints to register only. + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +#define STACK_MAGIC 3 + +/* !!!!WARNING!!!! need to add "r31" in the next line if this header file + * is meant to be compiled non-dynamically! + */ +#define REGS_TO_SAVE "r13", "r14", "r15", "r16", "r17", "r18", "r19", "r20", \ + "r21", "r22", "r23", "r24", "r25", "r26", "r27", "r28", "r29", \ + "cr2", "cr3", "cr4" +static int +slp_switch(void) +{ + int err; + int *stackref, stsizediff; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ ("mr %0, 1" : "=r" (stackref) : ); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "mr 11, %0\n" + "add 1, 1, 11\n" + "add 30, 30, 11\n" + : /* no outputs */ + : "r" (stsizediff) + : "11" + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("li %0, 0" : "=r" (err)); + return err; +} + +#endif + +/* + * further self-processing support + */ + +/* + * if you want to add self-inspection tools, place them + * here. See the x86_msvc for the necessary defines. + * These features are highly experimental und not + * essential yet. + */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_macosx.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_macosx.h new file mode 100644 index 000000000..bd414c68e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_macosx.h @@ -0,0 +1,82 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 07-Sep-05 (py-dev mailing list discussion) + * removed 'r31' from the register-saved. !!!! WARNING !!!! + * It means that this file can no longer be compiled statically! + * It is now only suitable as part of a dynamic library! + * 14-Jan-04 Bob Ippolito + * added cr2-cr4 to the registers to be saved. + * Open questions: Should we save FP registers? + * What about vector registers? + * Differences between darwin and unix? + * 24-Nov-02 Christian Tismer + * needed to add another magic constant to insure + * that f in slp_eval_frame(PyFrameObject *f) + * STACK_REFPLUS will probably be 1 in most cases. + * gets included into the saved stack area. + * 17-Sep-02 Christian Tismer + * after virtualizing stack save/restore, the + * stack size shrunk a bit. Needed to introduce + * an adjustment STACK_MAGIC per platform. + * 15-Sep-02 Gerd Woetzel + * slightly changed framework for sparc + * 29-Jun-02 Christian Tismer + * Added register 13-29, 31 saves. The same way as + * Armin Rigo did for the x86_unix version. + * This seems to be now fully functional! + * 04-Mar-02 Hye-Shik Chang + * Ported from i386. + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +#define STACK_MAGIC 3 + +/* !!!!WARNING!!!! need to add "r31" in the next line if this header file + * is meant to be compiled non-dynamically! 
+ */ +#define REGS_TO_SAVE "r13", "r14", "r15", "r16", "r17", "r18", "r19", "r20", \ + "r21", "r22", "r23", "r24", "r25", "r26", "r27", "r28", "r29", \ + "cr2", "cr3", "cr4" + +static int +slp_switch(void) +{ + int err; + int *stackref, stsizediff; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ ("; asm block 2\n\tmr %0, r1" : "=r" (stackref) : ); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "; asm block 3\n" + "\tmr r11, %0\n" + "\tadd r1, r1, r11\n" + "\tadd r30, r30, r11\n" + : /* no outputs */ + : "r" (stsizediff) + : "r11" + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("li %0, 0" : "=r" (err)); + return err; +} + +#endif + +/* + * further self-processing support + */ + +/* + * if you want to add self-inspection tools, place them + * here. See the x86_msvc for the necessary defines. + * These features are highly experimental und not + * essential yet. + */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_unix.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_unix.h new file mode 100644 index 000000000..bb188080a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_ppc_unix.h @@ -0,0 +1,82 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 07-Sep-05 (py-dev mailing list discussion) + * removed 'r31' from the register-saved. !!!! WARNING !!!! + * It means that this file can no longer be compiled statically! + * It is now only suitable as part of a dynamic library! + * 14-Jan-04 Bob Ippolito + * added cr2-cr4 to the registers to be saved. + * Open questions: Should we save FP registers? + * What about vector registers? + * Differences between darwin and unix? + * 24-Nov-02 Christian Tismer + * needed to add another magic constant to insure + * that f in slp_eval_frame(PyFrameObject *f) + * STACK_REFPLUS will probably be 1 in most cases. + * gets included into the saved stack area. + * 04-Oct-02 Gustavo Niemeyer + * Ported from MacOS version. + * 17-Sep-02 Christian Tismer + * after virtualizing stack save/restore, the + * stack size shrunk a bit. Needed to introduce + * an adjustment STACK_MAGIC per platform. + * 15-Sep-02 Gerd Woetzel + * slightly changed framework for sparc + * 29-Jun-02 Christian Tismer + * Added register 13-29, 31 saves. The same way as + * Armin Rigo did for the x86_unix version. + * This seems to be now fully functional! + * 04-Mar-02 Hye-Shik Chang + * Ported from i386. + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +#define STACK_MAGIC 3 + +/* !!!!WARNING!!!! need to add "r31" in the next line if this header file + * is meant to be compiled non-dynamically! + */ +#define REGS_TO_SAVE "r13", "r14", "r15", "r16", "r17", "r18", "r19", "r20", \ + "r21", "r22", "r23", "r24", "r25", "r26", "r27", "r28", "r29", \ + "cr2", "cr3", "cr4" +static int +slp_switch(void) +{ + int err; + int *stackref, stsizediff; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ ("mr %0, 1" : "=r" (stackref) : ); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "mr 11, %0\n" + "add 1, 1, 11\n" + "add 30, 30, 11\n" + : /* no outputs */ + : "r" (stsizediff) + : "11" + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("li %0, 0" : "=r" (err)); + return err; +} + +#endif + +/* + * further self-processing support + */ + +/* + * if you want to add self-inspection tools, place them + * here. 
See the x86_msvc for the necessary defines. + * These features are highly experimental und not + * essential yet. + */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_riscv_unix.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_riscv_unix.h new file mode 100644 index 000000000..876112223 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_riscv_unix.h @@ -0,0 +1,41 @@ +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL +#define STACK_MAGIC 0 + +#define REGS_TO_SAVE "s1", "s2", "s3", "s4", "s5", \ + "s6", "s7", "s8", "s9", "s10", "s11", "fs0", "fs1", \ + "fs2", "fs3", "fs4", "fs5", "fs6", "fs7", "fs8", "fs9", \ + "fs10", "fs11" + +static int +slp_switch(void) +{ + int ret; + long fp; + long *stackref, stsizediff; + + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("mv %0, fp" : "=r" (fp) : ); + __asm__ volatile ("mv %0, sp" : "=r" (stackref) : ); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "add sp, sp, %0\n\t" + "add fp, fp, %0\n\t" + : /* no outputs */ + : "r" (stsizediff) + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("" : : : REGS_TO_SAVE); +#if __riscv_xlen == 32 + __asm__ volatile ("lw fp, %0" : : "m" (fp)); +#else + __asm__ volatile ("ld fp, %0" : : "m" (fp)); +#endif + __asm__ volatile ("mv %0, zero" : "=r" (ret) : ); + return ret; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_s390_unix.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_s390_unix.h new file mode 100644 index 000000000..9199367f8 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_s390_unix.h @@ -0,0 +1,87 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 25-Jan-12 Alexey Borzenkov + * Fixed Linux/S390 port to work correctly with + * different optimization options both on 31-bit + * and 64-bit. Thanks to Stefan Raabe for lots + * of testing. + * 24-Nov-02 Christian Tismer + * needed to add another magic constant to insure + * that f in slp_eval_frame(PyFrameObject *f) + * STACK_REFPLUS will probably be 1 in most cases. + * gets included into the saved stack area. + * 06-Oct-02 Gustavo Niemeyer + * Ported to Linux/S390. + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +#ifdef __s390x__ +#define STACK_MAGIC 20 /* 20 * 8 = 160 bytes of function call area */ +#else +#define STACK_MAGIC 24 /* 24 * 4 = 96 bytes of function call area */ +#endif + +/* Technically, r11-r13 also need saving, but function prolog starts + with stm(g) and since there are so many saved registers already + it won't be optimized, resulting in all r6-r15 being saved */ +#define REGS_TO_SAVE "r6", "r7", "r8", "r9", "r10", "r14", \ + "f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7", \ + "f8", "f9", "f10", "f11", "f12", "f13", "f14", "f15" + +static int +slp_switch(void) +{ + int ret; + long *stackref, stsizediff; + __asm__ volatile ("" : : : REGS_TO_SAVE); +#ifdef __s390x__ + __asm__ volatile ("lgr %0, 15" : "=r" (stackref) : ); +#else + __asm__ volatile ("lr %0, 15" : "=r" (stackref) : ); +#endif + { + SLP_SAVE_STATE(stackref, stsizediff); +/* N.B. + r11 may be used as the frame pointer, and in that case it cannot be + clobbered and needs offsetting just like the stack pointer (but in cases + where frame pointer isn't used we might clobber it accidentally). 
What's + scary is that r11 is 2nd (and even 1st when GOT is used) callee saved + register that gcc would chose for surviving function calls. However, + since r6-r10 are clobbered above, their cost for reuse is reduced, so + gcc IRA will chose them over r11 (not seeing r11 is implicitly saved), + making it relatively safe to offset in all cases. :) */ + __asm__ volatile ( +#ifdef __s390x__ + "agr 15, %0\n\t" + "agr 11, %0" +#else + "ar 15, %0\n\t" + "ar 11, %0" +#endif + : /* no outputs */ + : "r" (stsizediff) + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("lhi %0, 0" : "=r" (ret) : ); + return ret; +} + +#endif + +/* + * further self-processing support + */ + +/* + * if you want to add self-inspection tools, place them + * here. See the x86_msvc for the necessary defines. + * These features are highly experimental und not + * essential yet. + */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_sh_gcc.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_sh_gcc.h new file mode 100644 index 000000000..5ecc3b394 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_sh_gcc.h @@ -0,0 +1,36 @@ +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL +#define STACK_MAGIC 0 +#define REGS_TO_SAVE "r8", "r9", "r10", "r11", "r13", \ + "fr12", "fr13", "fr14", "fr15" + +// r12 Global context pointer, GP +// r14 Frame pointer, FP +// r15 Stack pointer, SP + +static int +slp_switch(void) +{ + int err; + void* fp; + int *stackref, stsizediff; + __asm__ volatile("" : : : REGS_TO_SAVE); + __asm__ volatile("mov.l r14, %0" : "=m"(fp) : :); + __asm__("mov r15, %0" : "=r"(stackref)); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile( + "add %0, r15\n" + "add %0, r14\n" + : /* no outputs */ + : "r"(stsizediff)); + SLP_RESTORE_STATE(); + __asm__ volatile("mov r0, %0" : "=r"(err) : :); + } + __asm__ volatile("mov.l %0, r14" : : "m"(fp) :); + __asm__ volatile("" : : : REGS_TO_SAVE); + return err; +} + +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_sparc_sun_gcc.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_sparc_sun_gcc.h new file mode 100644 index 000000000..96990c391 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_sparc_sun_gcc.h @@ -0,0 +1,92 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 16-May-15 Alexey Borzenkov + * Move stack spilling code inside save/restore functions + * 30-Aug-13 Floris Bruynooghe + Clean the register windows again before returning. + This does not clobber the PIC register as it leaves + the current window intact and is required for multi- + threaded code to work correctly. + * 08-Mar-11 Floris Bruynooghe + * No need to set return value register explicitly + * before the stack and framepointer are adjusted + * as none of the other registers are influenced by + * this. Also don't needlessly clean the windows + * ('ta %0" :: "i" (ST_CLEAN_WINDOWS)') as that + * clobbers the gcc PIC register (%l7). + * 24-Nov-02 Christian Tismer + * needed to add another magic constant to insure + * that f in slp_eval_frame(PyFrameObject *f) + * STACK_REFPLUS will probably be 1 in most cases. + * gets included into the saved stack area. + * 17-Sep-02 Christian Tismer + * after virtualizing stack save/restore, the + * stack size shrunk a bit. 
Needed to introduce + * an adjustment STACK_MAGIC per platform. + * 15-Sep-02 Gerd Woetzel + * added support for SunOS sparc with gcc + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + + +#define STACK_MAGIC 0 + + +#if defined(__sparcv9) +#define SLP_FLUSHW __asm__ volatile ("flushw") +#else +#define SLP_FLUSHW __asm__ volatile ("ta 3") /* ST_FLUSH_WINDOWS */ +#endif + +/* On sparc we need to spill register windows inside save/restore functions */ +#define SLP_BEFORE_SAVE_STATE() SLP_FLUSHW +#define SLP_BEFORE_RESTORE_STATE() SLP_FLUSHW + + +static int +slp_switch(void) +{ + int err; + int *stackref, stsizediff; + + /* Put current stack pointer into stackref. + * Register spilling is done in save/restore. + */ + __asm__ volatile ("mov %%sp, %0" : "=r" (stackref)); + + { + /* Thou shalt put SLP_SAVE_STATE into a local block */ + /* Copy the current stack onto the heap */ + SLP_SAVE_STATE(stackref, stsizediff); + + /* Increment stack and frame pointer by stsizediff */ + __asm__ volatile ( + "add %0, %%sp, %%sp\n\t" + "add %0, %%fp, %%fp" + : : "r" (stsizediff)); + + /* Copy new stack from it's save store on the heap */ + SLP_RESTORE_STATE(); + + __asm__ volatile ("mov %1, %0" : "=r" (err) : "i" (0)); + return err; + } +} + +#endif + +/* + * further self-processing support + */ + +/* + * if you want to add self-inspection tools, place them + * here. See the x86_msvc for the necessary defines. + * These features are highly experimental und not + * essential yet. + */ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x32_unix.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x32_unix.h new file mode 100644 index 000000000..893369c7a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x32_unix.h @@ -0,0 +1,63 @@ +/* + * this is the internal transfer function. + * + * HISTORY + * 17-Aug-12 Fantix King + * Ported from amd64. + */ + +#define STACK_REFPLUS 1 + +#ifdef SLP_EVAL + +#define STACK_MAGIC 0 + +#define REGS_TO_SAVE "r12", "r13", "r14", "r15" + + +static int +slp_switch(void) +{ + void* ebp; + void* ebx; + unsigned int csr; + unsigned short cw; + int err; + int *stackref, stsizediff; + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("fstcw %0" : "=m" (cw)); + __asm__ volatile ("stmxcsr %0" : "=m" (csr)); + __asm__ volatile ("movl %%ebp, %0" : "=m" (ebp)); + __asm__ volatile ("movl %%ebx, %0" : "=m" (ebx)); + __asm__ ("movl %%esp, %0" : "=g" (stackref)); + { + SLP_SAVE_STATE(stackref, stsizediff); + __asm__ volatile ( + "addl %0, %%esp\n" + "addl %0, %%ebp\n" + : + : "r" (stsizediff) + ); + SLP_RESTORE_STATE(); + } + __asm__ volatile ("movl %0, %%ebx" : : "m" (ebx)); + __asm__ volatile ("movl %0, %%ebp" : : "m" (ebp)); + __asm__ volatile ("ldmxcsr %0" : : "m" (csr)); + __asm__ volatile ("fldcw %0" : : "m" (cw)); + __asm__ volatile ("" : : : REGS_TO_SAVE); + __asm__ volatile ("xorl %%eax, %%eax" : "=a" (err)); + return err; +} + +#endif + +/* + * further self-processing support + */ + +/* + * if you want to add self-inspection tools, place them + * here. See the x86_msvc for the necessary defines. + * These features are highly experimental und not + * essential yet. 
+ */
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x64_masm.asm b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x64_masm.asm
new file mode 100644
index 000000000..f5c72a27d
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x64_masm.asm
@@ -0,0 +1,111 @@
+;
+; stack switching code for MASM on x64
+; Kristjan Valur Jonsson, sept 2005
+;
+
+
+;prototypes for our calls
+slp_save_state_asm PROTO
+slp_restore_state_asm PROTO
+
+
+pushxmm MACRO reg
+    sub rsp, 16
+    .allocstack 16
+    movaps [rsp], reg ; faster than movups, but we must be aligned
+    ; .savexmm128 reg, offset (don't know what offset is, no documentation)
+ENDM
+popxmm MACRO reg
+    movaps reg, [rsp] ; faster than movups, but we must be aligned
+    add rsp, 16
+ENDM
+
+pushreg MACRO reg
+    push reg
+    .pushreg reg
+ENDM
+popreg MACRO reg
+    pop reg
+ENDM
+
+
+.code
+slp_switch PROC FRAME
+    ;realign stack to 16 bytes after return address push, makes the following faster
+    sub rsp,8
+    .allocstack 8
+
+    pushxmm xmm15
+    pushxmm xmm14
+    pushxmm xmm13
+    pushxmm xmm12
+    pushxmm xmm11
+    pushxmm xmm10
+    pushxmm xmm9
+    pushxmm xmm8
+    pushxmm xmm7
+    pushxmm xmm6
+
+    pushreg r15
+    pushreg r14
+    pushreg r13
+    pushreg r12
+
+    pushreg rbp
+    pushreg rbx
+    pushreg rdi
+    pushreg rsi
+
+    sub rsp, 10h ;allocate the singlefunction argument (must be multiple of 16)
+    .allocstack 10h
+.endprolog
+
+    lea rcx, [rsp+10h] ;load stack base that we are saving
+    call slp_save_state_asm ;pass stackpointer, return offset in eax
+    cmp rax, 1
+    je EXIT1
+    cmp rax, -1
+    je EXIT2
+    ;actual stack switch:
+    add rsp, rax
+    call slp_restore_state_asm
+    xor rax, rax ;return 0
+
+EXIT:
+
+    add rsp, 10h
+    popreg rsi
+    popreg rdi
+    popreg rbx
+    popreg rbp
+
+    popreg r12
+    popreg r13
+    popreg r14
+    popreg r15
+
+    popxmm xmm6
+    popxmm xmm7
+    popxmm xmm8
+    popxmm xmm9
+    popxmm xmm10
+    popxmm xmm11
+    popxmm xmm12
+    popxmm xmm13
+    popxmm xmm14
+    popxmm xmm15
+
+    add rsp, 8
+    ret
+
+EXIT1:
+    mov rax, 1
+    jmp EXIT
+
+EXIT2:
+    sar rax, 1
+    jmp EXIT
+
+slp_switch ENDP
+
+END
\ No newline at end of file
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x64_masm.obj b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x64_masm.obj
new file mode 100644
index 000000000..64e3e6b89
Binary files /dev/null and b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x64_masm.obj differ
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x64_msvc.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x64_msvc.h
new file mode 100644
index 000000000..601ea5605
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x64_msvc.h
@@ -0,0 +1,60 @@
+/*
+ * this is the internal transfer function.
+ *
+ * HISTORY
+ * 24-Nov-02 Christian Tismer
+ * needed to add another magic constant to insure
+ * that f in slp_eval_frame(PyFrameObject *f)
+ * STACK_REFPLUS will probably be 1 in most cases.
+ * gets included into the saved stack area.
+ * 26-Sep-02 Christian Tismer
+ * again as a result of virtualized stack access,
+ * the compiler used less registers. Needed to
+ * explicit mention registers in order to get them saved.
+ * Thanks to Jeff Senn for pointing this out and help.
+ * 17-Sep-02 Christian Tismer
+ * after virtualizing stack save/restore, the
+ * stack size shrunk a bit. Needed to introduce
+ * an adjustment STACK_MAGIC per platform.
+ * 15-Sep-02 Gerd Woetzel
+ * slightly changed framework for sparc
+ * 01-Mar-02 Christian Tismer
+ * Initial final version after lots of iterations for i386.
+ */
+
+/* Avoid alloca redefined warning on mingw64 */
+#ifndef alloca
+#define alloca _alloca
+#endif
+
+#define STACK_REFPLUS 1
+#define STACK_MAGIC 0
+
+/* Use the generic support for an external assembly language slp_switch function. */
+#define EXTERNAL_ASM
+
+#ifdef SLP_EVAL
+/* This always uses the external masm assembly file. */
+#endif
+
+/*
+ * further self-processing support
+ */
+
+/* we have IsBadReadPtr available, so we can peek at objects */
+/*
+#define STACKLESS_SPY
+
+#ifdef IMPLEMENT_STACKLESSMODULE
+#include "Windows.h"
+#define CANNOT_READ_MEM(p, bytes) IsBadReadPtr(p, bytes)
+
+static int IS_ON_STACK(void*p)
+{
+    int stackref;
+    intptr_t stackbase = ((intptr_t)&stackref) & 0xfffff000;
+    return (intptr_t)p >= stackbase && (intptr_t)p < stackbase + 0x00100000;
+}
+
+#endif
+*/
\ No newline at end of file
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x86_msvc.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x86_msvc.h
new file mode 100644
index 000000000..0f3a59f52
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x86_msvc.h
@@ -0,0 +1,326 @@
+/*
+ * this is the internal transfer function.
+ *
+ * HISTORY
+ * 24-Nov-02 Christian Tismer
+ * needed to add another magic constant to insure
+ * that f in slp_eval_frame(PyFrameObject *f)
+ * STACK_REFPLUS will probably be 1 in most cases.
+ * gets included into the saved stack area.
+ * 26-Sep-02 Christian Tismer
+ * again as a result of virtualized stack access,
+ * the compiler used less registers. Needed to
+ * explicit mention registers in order to get them saved.
+ * Thanks to Jeff Senn for pointing this out and help.
+ * 17-Sep-02 Christian Tismer
+ * after virtualizing stack save/restore, the
+ * stack size shrunk a bit. Needed to introduce
+ * an adjustment STACK_MAGIC per platform.
+ * 15-Sep-02 Gerd Woetzel
+ * slightly changed framework for sparc
+ * 01-Mar-02 Christian Tismer
+ * Initial final version after lots of iterations for i386.
+ */
+
+#define alloca _alloca
+
+#define STACK_REFPLUS 1
+
+#ifdef SLP_EVAL
+
+#define STACK_MAGIC 0
+
+/* Some magic to quell warnings and keep slp_switch() from crashing when built
+   with VC90. Disable global optimizations, and the warning: frame pointer
+   register 'ebp' modified by inline assembly code.
+
+   We used to just disable global optimizations ("g") but upstream stackless
+   Python, as well as stackman, turn off all optimizations.
+
+References:
+https://github.com/stackless-dev/stackman/blob/dbc72fe5207a2055e658c819fdeab9731dee78b9/stackman/platforms/switch_x86_msvc.h
+https://github.com/stackless-dev/stackless/blob/main-slp/Stackless/platf/switch_x86_msvc.h
+*/
+#define WIN32_LEAN_AND_MEAN
+#include <windows.h>
+
+#pragma optimize("", off) /* so that autos are stored on the stack */
+#pragma warning(disable:4731)
+#pragma warning(disable:4733) /* disable warning about modifying FS[0] */
+
+/**
+ * Most modern compilers and environments handle C++ exceptions without any
+ * special help from us. MSVC on 32-bit windows is an exception. There, C++
+ * exceptions are dealt with using Windows' Structured Exception Handling
+ * (SEH).
+ *
+ * SEH is implemented as a singly linked list of nodes.
+ * The head of this list is stored in the Thread Information Block, which itself
+ * is pointed to from the FS register. It's the first field in the structure,
+ * or offset 0, so we can access it using assembly FS:[0], or the compiler
+ * intrinsics and field offset information from the headers (as we do below).
+ * Somewhat unusually, the tail of the list doesn't have prev == NULL, it has
+ * prev == 0xFFFFFFFF.
+ *
+ * SEH was designed for C, and traditionally uses the MSVC compiler
+ * intrinsics __try{}/__except{}. It is also utilized for C++ exceptions by
+ * MSVC; there, every throw of a C++ exception raises a SEH error with the
+ * ExceptionCode 0xE06D7363; the SEH handler list is then traversed to
+ * deal with the exception.
+ *
+ * If the SEH list is corrupt, then when a C++ exception is thrown the program
+ * will abruptly exit with exit code 1. This does not use std::terminate(), so
+ * std::set_terminate() is useless to debug this.
+ *
+ * The SEH list is closely tied to the call stack; entering a function that
+ * uses __try{} or most C++ functions will push a new handler onto the front
+ * of the list. Returning from the function will remove the handler. Saving
+ * and restoring the head node of the SEH list (FS:[0]) per-greenlet is NOT
+ * ENOUGH to make SEH or exceptions work.
+ *
+ * Stack switching breaks SEH because the call stack no longer necessarily
+ * matches the SEH list. For example, given greenlet A that switches to
+ * greenlet B, at the moment of entering greenlet B, we will have any SEH
+ * handlers from greenlet A on the SEH list; greenlet B can then add its own
+ * handlers to the SEH list. When greenlet B switches back to greenlet A,
+ * greenlet B's handlers would still be on the SEH stack, but when switch()
+ * returns control to greenlet A, we have replaced the contents of the stack
+ * in memory, so all the addresses that greenlet B added to the SEH list are now
+ * invalid: part of the call stack has been unwound, but the SEH list was out
+ * of sync with the call stack. The net effect is that exception handling
+ * stops working.
+ *
+ * Thus, when switching greenlets, we need to be sure that the SEH list
+ * matches the effective call stack, "cutting out" any handlers that were
+ * pushed by the greenlet that switched out and which are no longer valid.
+ *
+ * The easiest way to do this is to capture the SEH list at the time the main
+ * greenlet for a thread is created, and, when initially starting a greenlet,
+ * start a new SEH list for it, which contains nothing but the handler
+ * established for the new greenlet itself, with the tail being the handlers
+ * for the main greenlet. If we then save and restore the SEH per-greenlet,
+ * they won't interfere with each other's SEH lists. (No greenlet can unwind
+ * the call stack past the handlers established by the main greenlet).
+ *
+ * By observation, a new thread starts with three SEH handlers on the list. By
+ * the time we get around to creating the main greenlet, though, there can be
+ * many more, established by transient calls that lead to the creation of the
+ * main greenlet. Therefore, 3 is a magic constant telling us when to perform
+ * the initial slice.
+ *
+ * All of this can be debugged using a vectored exception handler, which
+ * operates independently of the SEH handler list, and is called first.
+ * Walking the SEH list at key points can also be helpful.
+ *
+ * References:
+ * https://en.wikipedia.org/wiki/Win32_Thread_Information_Block
+ * https://devblogs.microsoft.com/oldnewthing/20100730-00/?p=13273
+ * https://docs.microsoft.com/en-us/cpp/cpp/try-except-statement?view=msvc-160
+ * https://docs.microsoft.com/en-us/cpp/cpp/structured-exception-handling-c-cpp?view=msvc-160
+ * https://docs.microsoft.com/en-us/windows/win32/debug/structured-exception-handling
+ * https://docs.microsoft.com/en-us/windows/win32/debug/using-a-vectored-exception-handler
+ * https://bytepointer.com/resources/pietrek_crash_course_depths_of_win32_seh.htm
+ */
+#define GREENLET_NEEDS_EXCEPTION_STATE_SAVED
+
+
+typedef struct _GExceptionRegistration {
+    struct _GExceptionRegistration* prev;
+    void* handler_f;
+} GExceptionRegistration;
+
+static void
+slp_set_exception_state(const void *const seh_state)
+{
+    // Because the stack from which we do this is ALSO a handler, and
+    // that one we want to keep, we need to relink the current SEH handler
+    // frame to point to this one, cutting out the middle men, as it were.
+    //
+    // Entering a try block doesn't change the SEH frame, but entering a
+    // function containing a try block does.
+    GExceptionRegistration* current_seh_state = (GExceptionRegistration*)__readfsdword(FIELD_OFFSET(NT_TIB, ExceptionList));
+    current_seh_state->prev = (GExceptionRegistration*)seh_state;
+}
+
+
+static GExceptionRegistration*
+x86_slp_get_third_oldest_handler()
+{
+    GExceptionRegistration* a = NULL; /* Closest to the top */
+    GExceptionRegistration* b = NULL; /* second */
+    GExceptionRegistration* c = NULL; /* third */
+    GExceptionRegistration* seh_state = (GExceptionRegistration*)__readfsdword(FIELD_OFFSET(NT_TIB, ExceptionList));
+    a = b = c = seh_state;
+
+    while (seh_state && seh_state != (GExceptionRegistration*)0xFFFFFFFF) {
+        if ((void*)seh_state->prev < (void*)100) {
+            fprintf(stderr, "\tERROR: Broken SEH chain.\n");
+            return NULL;
+        }
+        a = b;
+        b = c;
+        c = seh_state;
+
+        seh_state = seh_state->prev;
+    }
+    return a ? a : (b ? b : c);
+}
+
+
+static void*
+slp_get_exception_state()
+{
+    // XXX: There appear to be three SEH handlers on the stack already at the
+    // start of the thread. Is that a guarantee? Almost certainly not. Yet in
+    // all observed cases it has been three. This is consistent with
+    // faulthandler off or on, and optimizations off or on. It may not be
+    // consistent with other operating system versions, though: we only have
+    // CI on one or two versions (don't ask which they are).
+    // In theory we could capture the number of handlers on the chain when
+    // PyInit__greenlet is called: there are probably only the default
+    // handlers at that point (unless we're embedded and people have used
+    // __try/__except or a C++ handler)?
+    return x86_slp_get_third_oldest_handler();
+}
+
+static int
+slp_switch(void)
+{
+    /* MASM syntax is typically reversed from other assemblers.
+       It is usually <instruction> <destination>, <source>.
+    */
+    int *stackref, stsizediff;
+    /* store the structured exception state for this stack */
+    DWORD seh_state = __readfsdword(FIELD_OFFSET(NT_TIB, ExceptionList));
+    __asm mov stackref, esp;
+    /* modify EBX, ESI and EDI in order to get them preserved */
+    __asm mov ebx, ebx;
+    __asm xchg esi, edi;
+    {
+        SLP_SAVE_STATE(stackref, stsizediff);
+        __asm {
+            mov eax, stsizediff
+            add esp, eax
+            add ebp, eax
+        }
+        SLP_RESTORE_STATE();
+    }
+    __writefsdword(FIELD_OFFSET(NT_TIB, ExceptionList), seh_state);
+    return 0;
+}
+
+/* re-enable ebp warning and global optimizations.
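+   Everything between the optimize("", off) pragma above and the
+   optimize("", on) pragma below must stay unoptimized so that locals
+   really are stored on the stack while the stack contents are replaced.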
*/ +#pragma optimize("", on) +#pragma warning(default:4731) +#pragma warning(default:4733) /* disable warning about modifying FS[0] */ + + +#endif + +/* + * further self-processing support + */ + +/* we have IsBadReadPtr available, so we can peek at objects */ +#define STACKLESS_SPY + +#ifdef GREENLET_DEBUG + +#define CANNOT_READ_MEM(p, bytes) IsBadReadPtr(p, bytes) + +static int IS_ON_STACK(void*p) +{ + int stackref; + int stackbase = ((int)&stackref) & 0xfffff000; + return (int)p >= stackbase && (int)p < stackbase + 0x00100000; +} + +static void +x86_slp_show_seh_chain() +{ + GExceptionRegistration* seh_state = (GExceptionRegistration*)__readfsdword(FIELD_OFFSET(NT_TIB, ExceptionList)); + fprintf(stderr, "====== SEH Chain ======\n"); + while (seh_state && seh_state != (GExceptionRegistration*)0xFFFFFFFF) { + fprintf(stderr, "\tSEH_chain addr: %p handler: %p prev: %p\n", + seh_state, + seh_state->handler_f, seh_state->prev); + if ((void*)seh_state->prev < (void*)100) { + fprintf(stderr, "\tERROR: Broken chain.\n"); + break; + } + seh_state = seh_state->prev; + } + fprintf(stderr, "====== End SEH Chain ======\n"); + fflush(NULL); + return; +} + +//addVectoredExceptionHandler constants: +//CALL_FIRST means call this exception handler first; +//CALL_LAST means call this exception handler last +#define CALL_FIRST 1 +#define CALL_LAST 0 + +LONG WINAPI +GreenletVectorHandler(PEXCEPTION_POINTERS ExceptionInfo) +{ + // We get one of these for every C++ exception, with code + // E06D7363 + // This is a special value that means "C++ exception from MSVC" + // https://devblogs.microsoft.com/oldnewthing/20100730-00/?p=13273 + // + // Install in the module init function with: + // AddVectoredExceptionHandler(CALL_FIRST, GreenletVectorHandler); + PEXCEPTION_RECORD ExceptionRecord = ExceptionInfo->ExceptionRecord; + + fprintf(stderr, + "GOT VECTORED EXCEPTION:\n" + "\tExceptionCode : %p\n" + "\tExceptionFlags : %p\n" + "\tExceptionAddr : %p\n" + "\tNumberparams : %ld\n", + ExceptionRecord->ExceptionCode, + ExceptionRecord->ExceptionFlags, + ExceptionRecord->ExceptionAddress, + ExceptionRecord->NumberParameters + ); + if (ExceptionRecord->ExceptionFlags & 1) { + fprintf(stderr, "\t\tEH_NONCONTINUABLE\n" ); + } + if (ExceptionRecord->ExceptionFlags & 2) { + fprintf(stderr, "\t\tEH_UNWINDING\n" ); + } + if (ExceptionRecord->ExceptionFlags & 4) { + fprintf(stderr, "\t\tEH_EXIT_UNWIND\n" ); + } + if (ExceptionRecord->ExceptionFlags & 8) { + fprintf(stderr, "\t\tEH_STACK_INVALID\n" ); + } + if (ExceptionRecord->ExceptionFlags & 0x10) { + fprintf(stderr, "\t\tEH_NESTED_CALL\n" ); + } + if (ExceptionRecord->ExceptionFlags & 0x20) { + fprintf(stderr, "\t\tEH_TARGET_UNWIND\n" ); + } + if (ExceptionRecord->ExceptionFlags & 0x40) { + fprintf(stderr, "\t\tEH_COLLIDED_UNWIND\n" ); + } + fprintf(stderr, "\n"); + fflush(NULL); + for(DWORD i = 0; i < ExceptionRecord->NumberParameters; i++) { + fprintf(stderr, "\t\t\tParam %ld: %lX\n", i, ExceptionRecord->ExceptionInformation[i]); + } + + if (ExceptionRecord->NumberParameters == 3) { + fprintf(stderr, "\tAbout to traverse SEH chain\n"); + // C++ Exception records have 3 params. 
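+        // For an MSVC C++ throw those three parameters are, by convention,
+        // the EH magic number (0x19930520), a pointer to the thrown object,
+        // and a pointer to its ThrowInfo; only the count is used here, so
+        // this check is informational.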
+        x86_slp_show_seh_chain();
+    }
+
+    return EXCEPTION_CONTINUE_SEARCH;
+}
+
+
+
+
+#endif
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x86_unix.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x86_unix.h
new file mode 100644
index 000000000..493fa6baa
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/platform/switch_x86_unix.h
@@ -0,0 +1,105 @@
+/*
+ * this is the internal transfer function.
+ *
+ * HISTORY
+ * 3-May-13 Ralf Schmitt
+ * Add support for strange GCC caller-save decisions
+ * (ported from switch_aarch64_gcc.h)
+ * 19-Aug-11 Alexey Borzenkov
+ * Correctly save ebp, ebx and cw
+ * 07-Sep-05 (py-dev mailing list discussion)
+ * removed 'ebx' from the saved registers. !!!! WARNING !!!!
+ * It means that this file can no longer be compiled statically!
+ * It is now only suitable as part of a dynamic library!
+ * 24-Nov-02 Christian Tismer
+ * needed to add another magic constant to ensure
+ * that f in slp_eval_frame(PyFrameObject *f)
+ * gets included into the saved stack area.
+ * STACK_REFPLUS will probably be 1 in most cases.
+ * 17-Sep-02 Christian Tismer
+ * after virtualizing stack save/restore, the
+ * stack size shrunk a bit. Needed to introduce
+ * an adjustment STACK_MAGIC per platform.
+ * 15-Sep-02 Gerd Woetzel
+ * slightly changed framework for sparc
+ * 31-Avr-02 Armin Rigo
+ * Added ebx, esi and edi register-saves.
+ * 01-Mar-02 Samual M. Rushing
+ * Ported from i386.
+ */
+
+#define STACK_REFPLUS 1
+
+#ifdef SLP_EVAL
+
+/* #define STACK_MAGIC 3 */
+/* the above works fine with gcc 2.96, but 2.95.3 wants this */
+#define STACK_MAGIC 0
+
+#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5)
+# define ATTR_NOCLONE __attribute__((noclone))
+#else
+# define ATTR_NOCLONE
+#endif
+
+static int
+slp_switch(void)
+{
+    int err;
+#ifdef _WIN32
+    void *seh;
+#endif
+    void *ebp, *ebx;
+    unsigned short cw;
+    int *stackref, stsizediff;
+    __asm__ volatile ("" : : : "esi", "edi");
+    __asm__ volatile ("fstcw %0" : "=m" (cw));
+    __asm__ volatile ("movl %%ebp, %0" : "=m" (ebp));
+    __asm__ volatile ("movl %%ebx, %0" : "=m" (ebx));
+#ifdef _WIN32
+    __asm__ volatile (
+        "movl %%fs:0x0, %%eax\n"
+        "movl %%eax, %0\n"
+        : "=m" (seh)
+        :
+        : "eax");
+#endif
+    __asm__ ("movl %%esp, %0" : "=g" (stackref));
+    {
+        SLP_SAVE_STATE(stackref, stsizediff);
+        __asm__ volatile (
+            "addl %0, %%esp\n"
+            "addl %0, %%ebp\n"
+            :
+            : "r" (stsizediff)
+            );
+        SLP_RESTORE_STATE();
+        __asm__ volatile ("xorl %%eax, %%eax" : "=a" (err));
+    }
+#ifdef _WIN32
+    __asm__ volatile (
+        "movl %0, %%eax\n"
+        "movl %%eax, %%fs:0x0\n"
+        :
+        : "m" (seh)
+        : "eax");
+#endif
+    __asm__ volatile ("movl %0, %%ebx" : : "m" (ebx));
+    __asm__ volatile ("movl %0, %%ebp" : : "m" (ebp));
+    __asm__ volatile ("fldcw %0" : : "m" (cw));
+    __asm__ volatile ("" : : : "esi", "edi");
+    return err;
+}
+
+#endif
+
+/*
+ * further self-processing support
+ */
+
+/*
+ * if you want to add self-inspection tools, place them
+ * here. See the x86_msvc for the necessary defines.
+ * These features are highly experimental and not essential yet.
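+ * (x86_slp_show_seh_chain() and GreenletVectorHandler() in the
+ * switch_x86_msvc.h file above show what such tools can look like.)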
+ */
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/slp_platformselect.h b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/slp_platformselect.h
new file mode 100644
index 000000000..d9b7d0a71
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/slp_platformselect.h
@@ -0,0 +1,77 @@
+/*
+ * Platform Selection for Stackless Python
+ */
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#if defined(MS_WIN32) && !defined(MS_WIN64) && defined(_M_IX86) && defined(_MSC_VER)
+# include "platform/switch_x86_msvc.h" /* MS Visual Studio on X86 */
+#elif defined(MS_WIN64) && defined(_M_X64) && defined(_MSC_VER) || defined(__MINGW64__)
+# include "platform/switch_x64_msvc.h" /* MS Visual Studio on X64 */
+#elif defined(MS_WIN64) && defined(_M_ARM64)
+# include "platform/switch_arm64_msvc.h" /* MS Visual Studio on ARM64 */
+#elif defined(__GNUC__) && defined(__amd64__) && defined(__ILP32__)
+# include "platform/switch_x32_unix.h" /* gcc on amd64 with x32 ABI */
+#elif defined(__GNUC__) && defined(__amd64__)
+# include "platform/switch_amd64_unix.h" /* gcc on amd64 */
+#elif defined(__GNUC__) && defined(__i386__)
+# include "platform/switch_x86_unix.h" /* gcc on X86 */
+#elif defined(__GNUC__) && defined(__powerpc64__) && (defined(__linux__) || defined(__FreeBSD__))
+# include "platform/switch_ppc64_linux.h" /* gcc on PowerPC 64-bit */
+#elif defined(__GNUC__) && defined(__PPC__) && (defined(__linux__) || defined(__FreeBSD__))
+# include "platform/switch_ppc_linux.h" /* gcc on PowerPC */
+#elif defined(__GNUC__) && defined(__POWERPC__) && defined(__APPLE__)
+# include "platform/switch_ppc_macosx.h" /* Apple MacOS X on 32-bit PowerPC */
+#elif defined(__GNUC__) && defined(__powerpc64__) && defined(_AIX)
+# include "platform/switch_ppc64_aix.h" /* gcc on AIX/PowerPC 64-bit */
+#elif defined(__GNUC__) && defined(_ARCH_PPC) && defined(_AIX)
+# include "platform/switch_ppc_aix.h" /* gcc on AIX/PowerPC */
+#elif defined(__GNUC__) && defined(__powerpc__) && defined(__NetBSD__)
+# include "platform/switch_ppc_unix.h" /* gcc on NetBSD/powerpc */
+#elif defined(__GNUC__) && defined(sparc)
+# include "platform/switch_sparc_sun_gcc.h" /* SunOS sparc with gcc */
+#elif defined(__GNUC__) && defined(__sparc__)
+# include "platform/switch_sparc_sun_gcc.h" /* NetBSD sparc with gcc */
+#elif defined(__SUNPRO_C) && defined(sparc) && defined(sun)
+# include "platform/switch_sparc_sun_gcc.h" /* SunStudio on sparc */
+#elif defined(__SUNPRO_C) && defined(__amd64__) && defined(sun)
+# include "platform/switch_amd64_unix.h" /* SunStudio on amd64 */
+#elif defined(__SUNPRO_C) && defined(__i386__) && defined(sun)
+# include "platform/switch_x86_unix.h" /* SunStudio on x86 */
+#elif defined(__GNUC__) && defined(__s390__) && defined(__linux__)
+# include "platform/switch_s390_unix.h" /* Linux/S390 */
+#elif defined(__GNUC__) && defined(__s390x__) && defined(__linux__)
+# include "platform/switch_s390_unix.h" /* Linux/S390 zSeries (64-bit) */
+#elif defined(__GNUC__) && defined(__arm__)
+# ifdef __APPLE__
+#  include <TargetConditionals.h>
+# endif
+# if TARGET_OS_IPHONE
+#  include "platform/switch_arm32_ios.h" /* iPhone OS on arm32 */
+# else
+#  include "platform/switch_arm32_gcc.h" /* gcc using arm32 */
+# endif
+#elif defined(__GNUC__) && defined(__mips__) && defined(__linux__)
+# include "platform/switch_mips_unix.h" /* Linux/MIPS */
+#elif defined(__GNUC__) && defined(__aarch64__)
+# include "platform/switch_aarch64_gcc.h" /* Aarch64 ABI */
+#elif defined(__GNUC__) && defined(__mc68000__)
+#
include "platform/switch_m68k_gcc.h" /* gcc on m68k */ +#elif defined(__GNUC__) && defined(__csky__) +#include "platform/switch_csky_gcc.h" /* gcc on csky */ +# elif defined(__GNUC__) && defined(__riscv) +# include "platform/switch_riscv_unix.h" /* gcc on RISC-V */ +#elif defined(__GNUC__) && defined(__alpha__) +# include "platform/switch_alpha_unix.h" /* gcc on DEC Alpha */ +#elif defined(MS_WIN32) && defined(__llvm__) && defined(__aarch64__) +# include "platform/switch_aarch64_gcc.h" /* LLVM Aarch64 ABI for Windows */ +#elif defined(__GNUC__) && defined(__loongarch64) && defined(__linux__) +# include "platform/switch_loongarch64_linux.h" /* LoongArch64 */ +#elif defined(__GNUC__) && defined(__sh__) +# include "platform/switch_sh_gcc.h" /* SuperH */ +#endif + +#ifdef __cplusplus +}; +#endif diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/__init__.py new file mode 100644 index 000000000..5929f2a77 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/__init__.py @@ -0,0 +1,240 @@ +# -*- coding: utf-8 -*- +""" +Tests for greenlet. + +""" +import os +import sys +import unittest + +from gc import collect +from gc import get_objects +from threading import active_count as active_thread_count +from time import sleep +from time import time + +import psutil + +from greenlet import greenlet as RawGreenlet +from greenlet import getcurrent + +from greenlet._greenlet import get_pending_cleanup_count +from greenlet._greenlet import get_total_main_greenlets + +from . import leakcheck + +PY312 = sys.version_info[:2] >= (3, 12) +PY313 = sys.version_info[:2] >= (3, 13) +# XXX: First tested on 3.14a7. Revisit all uses of this on later versions to ensure they +# are still valid. +PY314 = sys.version_info[:2] >= (3, 14) + +WIN = sys.platform.startswith("win") +RUNNING_ON_GITHUB_ACTIONS = os.environ.get('GITHUB_ACTIONS') +RUNNING_ON_TRAVIS = os.environ.get('TRAVIS') or RUNNING_ON_GITHUB_ACTIONS +RUNNING_ON_APPVEYOR = os.environ.get('APPVEYOR') +RUNNING_ON_CI = RUNNING_ON_TRAVIS or RUNNING_ON_APPVEYOR +RUNNING_ON_MANYLINUX = os.environ.get('GREENLET_MANYLINUX') + +class TestCaseMetaClass(type): + # wrap each test method with + # a) leak checks + def __new__(cls, classname, bases, classDict): + # pylint and pep8 fight over what this should be called (mcs or cls). + # pylint gets it right, but we can't scope disable pep8, so we go with + # its convention. + # pylint: disable=bad-mcs-classmethod-argument + check_totalrefcount = True + + # Python 3: must copy, we mutate the classDict. Interestingly enough, + # it doesn't actually error out, but under 3.6 we wind up wrapping + # and re-wrapping the same items over and over and over. + for key, value in list(classDict.items()): + if key.startswith('test') and callable(value): + classDict.pop(key) + if check_totalrefcount: + value = leakcheck.wrap_refcount(value) + classDict[key] = value + return type.__new__(cls, classname, bases, classDict) + + +class TestCase(unittest.TestCase, metaclass=TestCaseMetaClass): + + cleanup_attempt_sleep_duration = 0.001 + cleanup_max_sleep_seconds = 1 + + def wait_for_pending_cleanups(self, + initial_active_threads=None, + initial_main_greenlets=None): + initial_active_threads = initial_active_threads or self.threads_before_test + initial_main_greenlets = initial_main_greenlets or self.main_greenlets_before_test + sleep_time = self.cleanup_attempt_sleep_duration + # NOTE: This is racy! 
A Python-level thread object may be dead + # and gone, but the C thread may not yet have fired its + # destructors and added to the queue. There's no particular + # way to know that's about to happen. We try to watch the + # Python threads to make sure they, at least, have gone away. + # Counting the main greenlets, which we can easily do deterministically, + # also helps. + + # Always sleep at least once to let other threads run + sleep(sleep_time) + quit_after = time() + self.cleanup_max_sleep_seconds + # TODO: We could add an API that calls us back when a particular main greenlet is deleted? + # It would have to drop the GIL + while ( + get_pending_cleanup_count() + or active_thread_count() > initial_active_threads + or (not self.expect_greenlet_leak + and get_total_main_greenlets() > initial_main_greenlets)): + sleep(sleep_time) + if time() > quit_after: + print("Time limit exceeded.") + print("Threads: Waiting for only", initial_active_threads, + "-->", active_thread_count()) + print("MGlets : Waiting for only", initial_main_greenlets, + "-->", get_total_main_greenlets()) + break + collect() + + def count_objects(self, kind=list, exact_kind=True): + # pylint:disable=unidiomatic-typecheck + # Collect the garbage. + for _ in range(3): + collect() + if exact_kind: + return sum( + 1 + for x in get_objects() + if type(x) is kind + ) + # instances + return sum( + 1 + for x in get_objects() + if isinstance(x, kind) + ) + + greenlets_before_test = 0 + threads_before_test = 0 + main_greenlets_before_test = 0 + expect_greenlet_leak = False + + def count_greenlets(self): + """ + Find all the greenlets and subclasses tracked by the GC. + """ + return self.count_objects(RawGreenlet, False) + + def setUp(self): + # Ensure the main greenlet exists, otherwise the first test + # gets a false positive leak + super().setUp() + getcurrent() + self.threads_before_test = active_thread_count() + self.main_greenlets_before_test = get_total_main_greenlets() + self.wait_for_pending_cleanups(self.threads_before_test, self.main_greenlets_before_test) + self.greenlets_before_test = self.count_greenlets() + + def tearDown(self): + if getattr(self, 'skipTearDown', False): + return + + self.wait_for_pending_cleanups(self.threads_before_test, self.main_greenlets_before_test) + super().tearDown() + + def get_expected_returncodes_for_aborted_process(self): + import signal + # The child should be aborted in an unusual way. On POSIX + # platforms, this is done with abort() and signal.SIGABRT, + # which is reflected in a negative return value; however, on + # Windows, even though we observe the child print "Fatal + # Python error: Aborted" and in older versions of the C + # runtime "This application has requested the Runtime to + # terminate it in an unusual way," it always has an exit code + # of 3. This is interesting because 3 is the error code for + # ERROR_PATH_NOT_FOUND; BUT: the C runtime abort() function + # also uses this code. + # + # If we link to the static C library on Windows, the error + # code changes to '0xc0000409' (hex(3221226505)), which + # apparently is STATUS_STACK_BUFFER_OVERRUN; but "What this + # means is that nowadays when you get a + # STATUS_STACK_BUFFER_OVERRUN, it doesn’t actually mean that + # there is a stack buffer overrun. It just means that the + # application decided to terminate itself with great haste." + # + # + # On windows, we've also seen '0xc0000005' (hex(3221225477)). 
+ # That's "Access Violation" + # + # See + # https://devblogs.microsoft.com/oldnewthing/20110519-00/?p=10623 + # and + # https://docs.microsoft.com/en-us/previous-versions/k089yyh0(v=vs.140)?redirectedfrom=MSDN + # and + # https://devblogs.microsoft.com/oldnewthing/20190108-00/?p=100655 + expected_exit = ( + -signal.SIGABRT, + # But beginning on Python 3.11, the faulthandler + # that prints the C backtraces sometimes segfaults after + # reporting the exception but before printing the stack. + # This has only been seen on linux/gcc. + -signal.SIGSEGV, + ) if not WIN else ( + 3, + 0xc0000409, + 0xc0000005, + ) + return expected_exit + + def get_process_uss(self): + """ + Return the current process's USS in bytes. + + uss is available on Linux, macOS, Windows. Also known as + "Unique Set Size", this is the memory which is unique to a + process and which would be freed if the process was terminated + right now. + + If this is not supported by ``psutil``, this raises the + :exc:`unittest.SkipTest` exception. + """ + try: + return psutil.Process().memory_full_info().uss + except AttributeError as e: + raise unittest.SkipTest("uss not supported") from e + + def run_script(self, script_name, show_output=True): + import subprocess + script = os.path.join( + os.path.dirname(__file__), + script_name, + ) + + try: + return subprocess.check_output([sys.executable, script], + encoding='utf-8', + stderr=subprocess.STDOUT) + except subprocess.CalledProcessError as ex: + if show_output: + print('-----') + print('Failed to run script', script) + print('~~~~~') + print(ex.output) + print('------') + raise + + + def assertScriptRaises(self, script_name, exitcodes=None): + import subprocess + with self.assertRaises(subprocess.CalledProcessError) as exc: + output = self.run_script(script_name, show_output=False) + __traceback_info__ = output + # We're going to fail the assertion if we get here, at least + # preserve the output in the traceback. + + if exitcodes is None: + exitcodes = self.get_expected_returncodes_for_aborted_process() + self.assertIn(exc.exception.returncode, exitcodes) + return exc.exception diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension.c b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension.c new file mode 100644 index 000000000..05e81c03a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension.c @@ -0,0 +1,231 @@ +/* This is a set of functions used by test_extension_interface.py to test the + * Greenlet C API. 
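+ *
+ * From Python these are driven by test_extension_interface.py; a rough
+ * (hypothetical) sketch of the usage looks like:
+ *
+ *   from greenlet.tests import _test_extension
+ *   _test_extension.test_new_greenlet(lambda: 42)  # exercises PyGreenlet_New + Switch
+ *   _test_extension.test_getcurrent()              # exercises PyGreenlet_GetCurrent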
+ */ + +#include "../greenlet.h" + +#ifndef Py_RETURN_NONE +# define Py_RETURN_NONE return Py_INCREF(Py_None), Py_None +#endif + +#define TEST_MODULE_NAME "_test_extension" + +static PyObject* +test_switch(PyObject* self, PyObject* greenlet) +{ + PyObject* result = NULL; + + if (greenlet == NULL || !PyGreenlet_Check(greenlet)) { + PyErr_BadArgument(); + return NULL; + } + + result = PyGreenlet_Switch((PyGreenlet*)greenlet, NULL, NULL); + if (result == NULL) { + if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_AssertionError, + "greenlet.switch() failed for some reason."); + } + return NULL; + } + Py_INCREF(result); + return result; +} + +static PyObject* +test_switch_kwargs(PyObject* self, PyObject* args, PyObject* kwargs) +{ + PyGreenlet* g = NULL; + PyObject* result = NULL; + + PyArg_ParseTuple(args, "O!", &PyGreenlet_Type, &g); + + if (g == NULL || !PyGreenlet_Check(g)) { + PyErr_BadArgument(); + return NULL; + } + + result = PyGreenlet_Switch(g, NULL, kwargs); + if (result == NULL) { + if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_AssertionError, + "greenlet.switch() failed for some reason."); + } + return NULL; + } + Py_XINCREF(result); + return result; +} + +static PyObject* +test_getcurrent(PyObject* self) +{ + PyGreenlet* g = PyGreenlet_GetCurrent(); + if (g == NULL || !PyGreenlet_Check(g) || !PyGreenlet_ACTIVE(g)) { + PyErr_SetString(PyExc_AssertionError, + "getcurrent() returned an invalid greenlet"); + Py_XDECREF(g); + return NULL; + } + Py_DECREF(g); + Py_RETURN_NONE; +} + +static PyObject* +test_setparent(PyObject* self, PyObject* arg) +{ + PyGreenlet* current; + PyGreenlet* greenlet = NULL; + + if (arg == NULL || !PyGreenlet_Check(arg)) { + PyErr_BadArgument(); + return NULL; + } + if ((current = PyGreenlet_GetCurrent()) == NULL) { + return NULL; + } + greenlet = (PyGreenlet*)arg; + if (PyGreenlet_SetParent(greenlet, current)) { + Py_DECREF(current); + return NULL; + } + Py_DECREF(current); + if (PyGreenlet_Switch(greenlet, NULL, NULL) == NULL) { + return NULL; + } + Py_RETURN_NONE; +} + +static PyObject* +test_new_greenlet(PyObject* self, PyObject* callable) +{ + PyObject* result = NULL; + PyGreenlet* greenlet = PyGreenlet_New(callable, NULL); + + if (!greenlet) { + return NULL; + } + + result = PyGreenlet_Switch(greenlet, NULL, NULL); + Py_CLEAR(greenlet); + if (result == NULL) { + return NULL; + } + + Py_INCREF(result); + return result; +} + +static PyObject* +test_raise_dead_greenlet(PyObject* self) +{ + PyErr_SetString(PyExc_GreenletExit, "test GreenletExit exception."); + return NULL; +} + +static PyObject* +test_raise_greenlet_error(PyObject* self) +{ + PyErr_SetString(PyExc_GreenletError, "test greenlet.error exception"); + return NULL; +} + +static PyObject* +test_throw(PyObject* self, PyGreenlet* g) +{ + const char msg[] = "take that sucka!"; + PyObject* msg_obj = Py_BuildValue("s", msg); + PyGreenlet_Throw(g, PyExc_ValueError, msg_obj, NULL); + Py_DECREF(msg_obj); + if (PyErr_Occurred()) { + return NULL; + } + Py_RETURN_NONE; +} + +static PyObject* +test_throw_exact(PyObject* self, PyObject* args) +{ + PyGreenlet* g = NULL; + PyObject* typ = NULL; + PyObject* val = NULL; + PyObject* tb = NULL; + + if (!PyArg_ParseTuple(args, "OOOO:throw", &g, &typ, &val, &tb)) { + return NULL; + } + + PyGreenlet_Throw(g, typ, val, tb); + if (PyErr_Occurred()) { + return NULL; + } + Py_RETURN_NONE; +} + +static PyMethodDef test_methods[] = { + {"test_switch", + (PyCFunction)test_switch, + METH_O, + "Switch to the provided greenlet sending provided arguments, and \n" + "return the 
results."}, + {"test_switch_kwargs", + (PyCFunction)test_switch_kwargs, + METH_VARARGS | METH_KEYWORDS, + "Switch to the provided greenlet sending the provided keyword args."}, + {"test_getcurrent", + (PyCFunction)test_getcurrent, + METH_NOARGS, + "Test PyGreenlet_GetCurrent()"}, + {"test_setparent", + (PyCFunction)test_setparent, + METH_O, + "Se the parent of the provided greenlet and switch to it."}, + {"test_new_greenlet", + (PyCFunction)test_new_greenlet, + METH_O, + "Test PyGreenlet_New()"}, + {"test_raise_dead_greenlet", + (PyCFunction)test_raise_dead_greenlet, + METH_NOARGS, + "Just raise greenlet.GreenletExit"}, + {"test_raise_greenlet_error", + (PyCFunction)test_raise_greenlet_error, + METH_NOARGS, + "Just raise greenlet.error"}, + {"test_throw", + (PyCFunction)test_throw, + METH_O, + "Throw a ValueError at the provided greenlet"}, + {"test_throw_exact", + (PyCFunction)test_throw_exact, + METH_VARARGS, + "Throw exactly the arguments given at the provided greenlet"}, + {NULL, NULL, 0, NULL} +}; + + +#define INITERROR return NULL + +static struct PyModuleDef moduledef = {PyModuleDef_HEAD_INIT, + TEST_MODULE_NAME, + NULL, + 0, + test_methods, + NULL, + NULL, + NULL, + NULL}; + +PyMODINIT_FUNC +PyInit__test_extension(void) +{ + PyObject* module = NULL; + module = PyModule_Create(&moduledef); + + if (module == NULL) { + return NULL; + } + + PyGreenlet_Import(); + return module; +} diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension.cpython-310-x86_64-linux-gnu.so b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension.cpython-310-x86_64-linux-gnu.so new file mode 100755 index 000000000..902c76ade Binary files /dev/null and b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension.cpython-310-x86_64-linux-gnu.so differ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension_cpp.cpp b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension_cpp.cpp new file mode 100644 index 000000000..5cbe6a769 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension_cpp.cpp @@ -0,0 +1,226 @@ +/* This is a set of functions used to test C++ exceptions are not + * broken during greenlet switches + */ + +#include "../greenlet.h" +#include "../greenlet_compiler_compat.hpp" +#include +#include + +struct exception_t { + int depth; + exception_t(int depth) : depth(depth) {} +}; + +/* Functions are called via pointers to prevent inlining */ +static void (*p_test_exception_throw_nonstd)(int depth); +static void (*p_test_exception_throw_std)(); +static PyObject* (*p_test_exception_switch_recurse)(int depth, int left); + +static void +test_exception_throw_nonstd(int depth) +{ + throw exception_t(depth); +} + +static void +test_exception_throw_std() +{ + throw std::runtime_error("Thrown from an extension."); +} + +static PyObject* +test_exception_switch_recurse(int depth, int left) +{ + if (left > 0) { + return p_test_exception_switch_recurse(depth, left - 1); + } + + PyObject* result = NULL; + PyGreenlet* self = PyGreenlet_GetCurrent(); + if (self == NULL) + return NULL; + + try { + if (PyGreenlet_Switch(PyGreenlet_GET_PARENT(self), NULL, NULL) == NULL) { + Py_DECREF(self); + return NULL; + } + p_test_exception_throw_nonstd(depth); + PyErr_SetString(PyExc_RuntimeError, + "throwing C++ exception didn't work"); + } + catch (const exception_t& e) { + if (e.depth != depth) + PyErr_SetString(PyExc_AssertionError, 
"depth mismatch"); + else + result = PyLong_FromLong(depth); + } + catch (...) { + PyErr_SetString(PyExc_RuntimeError, "unexpected C++ exception"); + } + + Py_DECREF(self); + return result; +} + +/* test_exception_switch(int depth) + * - recurses depth times + * - switches to parent inside try/catch block + * - throws an exception that (expected to be caught in the same function) + * - verifies depth matches (exceptions shouldn't be caught in other greenlets) + */ +static PyObject* +test_exception_switch(PyObject* UNUSED(self), PyObject* args) +{ + int depth; + if (!PyArg_ParseTuple(args, "i", &depth)) + return NULL; + return p_test_exception_switch_recurse(depth, depth); +} + + +static PyObject* +py_test_exception_throw_nonstd(PyObject* self, PyObject* args) +{ + if (!PyArg_ParseTuple(args, "")) + return NULL; + p_test_exception_throw_nonstd(0); + PyErr_SetString(PyExc_AssertionError, "unreachable code running after throw"); + return NULL; +} + +static PyObject* +py_test_exception_throw_std(PyObject* self, PyObject* args) +{ + if (!PyArg_ParseTuple(args, "")) + return NULL; + p_test_exception_throw_std(); + PyErr_SetString(PyExc_AssertionError, "unreachable code running after throw"); + return NULL; +} + +static PyObject* +py_test_call(PyObject* self, PyObject* arg) +{ + PyObject* noargs = PyTuple_New(0); + PyObject* ret = PyObject_Call(arg, noargs, nullptr); + Py_DECREF(noargs); + return ret; +} + + + +/* test_exception_switch_and_do_in_g2(g2func) + * - creates new greenlet g2 to run g2func + * - switches to g2 inside try/catch block + * - verifies that no exception has been caught + * + * it is used together with test_exception_throw to verify that unhandled + * exceptions thrown in one greenlet do not propagate to other greenlet nor + * segfault the process. + */ +static PyObject* +test_exception_switch_and_do_in_g2(PyObject* self, PyObject* args) +{ + PyObject* g2func = NULL; + PyObject* result = NULL; + + if (!PyArg_ParseTuple(args, "O", &g2func)) + return NULL; + PyGreenlet* g2 = PyGreenlet_New(g2func, NULL); + if (!g2) { + return NULL; + } + + try { + result = PyGreenlet_Switch(g2, NULL, NULL); + if (!result) { + return NULL; + } + } + catch (const exception_t& e) { + /* if we are here the memory can be already corrupted and the program + * might crash before below py-level exception might become printed. + * -> print something to stderr to make it clear that we had entered + * this catch block. + * See comments in inner_bootstrap() + */ +#if defined(WIN32) || defined(_WIN32) + fprintf(stderr, "C++ exception unexpectedly caught in g1\n"); + PyErr_SetString(PyExc_AssertionError, "C++ exception unexpectedly caught in g1"); + Py_XDECREF(result); + return NULL; +#else + throw; +#endif + } + + Py_XDECREF(result); + Py_RETURN_NONE; +} + +static PyMethodDef test_methods[] = { + {"test_exception_switch", + (PyCFunction)&test_exception_switch, + METH_VARARGS, + "Switches to parent twice, to test exception handling and greenlet " + "switching."}, + {"test_exception_switch_and_do_in_g2", + (PyCFunction)&test_exception_switch_and_do_in_g2, + METH_VARARGS, + "Creates new greenlet g2 to run g2func and switches to it inside try/catch " + "block. Used together with test_exception_throw to verify that unhandled " + "C++ exceptions thrown in a greenlet doe not corrupt memory."}, + {"test_exception_throw_nonstd", + (PyCFunction)&py_test_exception_throw_nonstd, + METH_VARARGS, + "Throws non-standard C++ exception. Calling this function directly should abort the process." 
+ }, + {"test_exception_throw_std", + (PyCFunction)&py_test_exception_throw_std, + METH_VARARGS, + "Throws standard C++ exception. Calling this function directly should abort the process." + }, + {"test_call", + (PyCFunction)&py_test_call, + METH_O, + "Call the given callable. Unlike calling it directly, this creates a " + "new C-level stack frame, which may be helpful in testing." + }, + {NULL, NULL, 0, NULL} +}; + + +static struct PyModuleDef moduledef = {PyModuleDef_HEAD_INIT, + "greenlet.tests._test_extension_cpp", + NULL, + 0, + test_methods, + NULL, + NULL, + NULL, + NULL}; + +PyMODINIT_FUNC +PyInit__test_extension_cpp(void) +{ + PyObject* module = NULL; + + module = PyModule_Create(&moduledef); + + if (module == NULL) { + return NULL; + } + + PyGreenlet_Import(); + if (_PyGreenlet_API == NULL) { + return NULL; + } + + p_test_exception_throw_nonstd = test_exception_throw_nonstd; + p_test_exception_throw_std = test_exception_throw_std; + p_test_exception_switch_recurse = test_exception_switch_recurse; + + return module; +} diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension_cpp.cpython-310-x86_64-linux-gnu.so b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension_cpp.cpython-310-x86_64-linux-gnu.so new file mode 100755 index 000000000..ad3e02931 Binary files /dev/null and b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/_test_extension_cpp.cpython-310-x86_64-linux-gnu.so differ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_clearing_run_switches.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_clearing_run_switches.py new file mode 100644 index 000000000..6dd1492ff --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_clearing_run_switches.py @@ -0,0 +1,47 @@ +# -*- coding: utf-8 -*- +""" +If we have a run callable passed to the constructor or set as an +attribute, but we don't actually use that (because ``__getattribute__`` +or the like interferes), then when we clear callable before beginning +to run, there's an opportunity for Python code to run. + +""" +import greenlet + +g = None +main = greenlet.getcurrent() + +results = [] + +class RunCallable: + + def __del__(self): + results.append(('RunCallable', '__del__')) + main.switch('from RunCallable') + + +class G(greenlet.greenlet): + + def __getattribute__(self, name): + if name == 'run': + results.append(('G.__getattribute__', 'run')) + return run_func + return object.__getattribute__(self, name) + + +def run_func(): + results.append(('run_func', 'enter')) + + +g = G(RunCallable()) +# Try to start G. It will get to the point where it deletes +# its run callable C++ variable in inner_bootstrap. That triggers +# the __del__ method, which switches back to main before g +# actually even starts running. +x = g.switch() +results.append(('main: g.switch()', x)) +# In the C++ code, this results in g->g_switch() appearing to return, even though +# it has yet to run. 
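+# (Per the comments above: x here should be the string 'from RunCallable'
+# that __del__ passed to main.switch(), not a value produced by g's run
+# function, which has not actually run yet.)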
+print('In main with', x, flush=True) +g.switch() +print('RESULTS', results) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_cpp_exception.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_cpp_exception.py new file mode 100644 index 000000000..fa4dc2eb3 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_cpp_exception.py @@ -0,0 +1,33 @@ +# -*- coding: utf-8 -*- +""" +Helper for testing a C++ exception throw aborts the process. + +Takes one argument, the name of the function in :mod:`_test_extension_cpp` to call. +""" +import sys +import greenlet +from greenlet.tests import _test_extension_cpp +print('fail_cpp_exception is running') + +def run_unhandled_exception_in_greenlet_aborts(): + def _(): + _test_extension_cpp.test_exception_switch_and_do_in_g2( + _test_extension_cpp.test_exception_throw_nonstd + ) + g1 = greenlet.greenlet(_) + g1.switch() + + +func_name = sys.argv[1] +try: + func = getattr(_test_extension_cpp, func_name) +except AttributeError: + if func_name == run_unhandled_exception_in_greenlet_aborts.__name__: + func = run_unhandled_exception_in_greenlet_aborts + elif func_name == 'run_as_greenlet_target': + g = greenlet.greenlet(_test_extension_cpp.test_exception_throw_std) + func = g.switch + else: + raise +print('raising', func, flush=True) +func() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_initialstub_already_started.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_initialstub_already_started.py new file mode 100644 index 000000000..c1a44efd2 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_initialstub_already_started.py @@ -0,0 +1,78 @@ +""" +Testing initialstub throwing an already started exception. +""" + +import greenlet + +a = None +b = None +c = None +main = greenlet.getcurrent() + +# If we switch into a dead greenlet, +# we go looking for its parents. +# if a parent is not yet started, we start it. + +results = [] + +def a_run(*args): + #results.append('A') + results.append(('Begin A', args)) + + +def c_run(): + results.append('Begin C') + b.switch('From C') + results.append('C done') + +class A(greenlet.greenlet): pass + +class B(greenlet.greenlet): + doing_it = False + def __getattribute__(self, name): + if name == 'run' and not self.doing_it: + assert greenlet.getcurrent() is c + self.doing_it = True + results.append('Switch to b from B.__getattribute__ in ' + + type(greenlet.getcurrent()).__name__) + b.switch() + results.append('B.__getattribute__ back from main in ' + + type(greenlet.getcurrent()).__name__) + if name == 'run': + name = '_B_run' + return object.__getattribute__(self, name) + + def _B_run(self, *arg): + results.append(('Begin B', arg)) + results.append('_B_run switching to main') + main.switch('From B') + +class C(greenlet.greenlet): + pass +a = A(a_run) +b = B(parent=a) +c = C(c_run, b) + +# Start a child; while running, it will start B, +# but starting B will ALSO start B. +result = c.switch() +results.append(('main from c', result)) + +# Switch back to C, which was in the middle of switching +# already. This will throw the ``GreenletStartedWhileInPython`` +# exception, which results in parent A getting started (B is finished) +c.switch() + +results.append(('A dead?', a.dead, 'B dead?', b.dead, 'C dead?', c.dead)) + +# A and B should both be dead now. 
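+# (C, by contrast, is still suspended inside its b.switch('From C') call in
+# c_run, so it is not dead until the final switch below.)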
+assert a.dead +assert b.dead +assert not c.dead + +result = c.switch() +results.append(('main from c.2', result)) +# Now C is dead +assert c.dead + +print("RESULTS:", results) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_slp_switch.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_slp_switch.py new file mode 100644 index 000000000..09905269f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_slp_switch.py @@ -0,0 +1,29 @@ +# -*- coding: utf-8 -*- +""" +A test helper for seeing what happens when slp_switch() +fails. +""" +# pragma: no cover + +import greenlet + + +print('fail_slp_switch is running', flush=True) + +runs = [] +def func(): + runs.append(1) + greenlet.getcurrent().parent.switch() + runs.append(2) + greenlet.getcurrent().parent.switch() + runs.append(3) + +g = greenlet._greenlet.UnswitchableGreenlet(func) +g.switch() +assert runs == [1] +g.switch() +assert runs == [1, 2] +g.force_slp_switch_error = True + +# This should crash. +g.switch() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_switch_three_greenlets.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_switch_three_greenlets.py new file mode 100644 index 000000000..e151b19a6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_switch_three_greenlets.py @@ -0,0 +1,44 @@ +""" +Uses a trace function to switch greenlets at unexpected times. + +In the trace function, we switch from the current greenlet to another +greenlet, which switches +""" +import greenlet + +g1 = None +g2 = None + +switch_to_g2 = False + +def tracefunc(*args): + print('TRACE', *args) + global switch_to_g2 + if switch_to_g2: + switch_to_g2 = False + g2.switch() + print('\tLEAVE TRACE', *args) + +def g1_run(): + print('In g1_run') + global switch_to_g2 + switch_to_g2 = True + from_parent = greenlet.getcurrent().parent.switch() + print('Return to g1_run') + print('From parent', from_parent) + +def g2_run(): + #g1.switch() + greenlet.getcurrent().parent.switch() + +greenlet.settrace(tracefunc) + +g1 = greenlet.greenlet(g1_run) +g2 = greenlet.greenlet(g2_run) + +# This switch didn't actually finish! +# And if it did, it would raise TypeError +# because g1_run() doesn't take any arguments. +g1.switch(1) +print('Back in main') +g1.switch(2) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_switch_three_greenlets2.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_switch_three_greenlets2.py new file mode 100644 index 000000000..1f6b66bc6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_switch_three_greenlets2.py @@ -0,0 +1,55 @@ +""" +Like fail_switch_three_greenlets, but the call into g1_run would actually be +valid. 
+""" +import greenlet + +g1 = None +g2 = None + +switch_to_g2 = True + +results = [] + +def tracefunc(*args): + results.append(('trace', args[0])) + print('TRACE', *args) + global switch_to_g2 + if switch_to_g2: + switch_to_g2 = False + g2.switch('g2 from tracefunc') + print('\tLEAVE TRACE', *args) + +def g1_run(arg): + results.append(('g1 arg', arg)) + print('In g1_run') + from_parent = greenlet.getcurrent().parent.switch('from g1_run') + results.append(('g1 from parent', from_parent)) + return 'g1 done' + +def g2_run(arg): + #g1.switch() + results.append(('g2 arg', arg)) + parent = greenlet.getcurrent().parent.switch('from g2_run') + global switch_to_g2 + switch_to_g2 = False + results.append(('g2 from parent', parent)) + return 'g2 done' + + +greenlet.settrace(tracefunc) + +g1 = greenlet.greenlet(g1_run) +g2 = greenlet.greenlet(g2_run) + +x = g1.switch('g1 from main') +results.append(('main g1', x)) +print('Back in main', x) +x = g1.switch('g2 from main') +results.append(('main g2', x)) +print('back in amain again', x) +x = g1.switch('g1 from main 2') +results.append(('main g1.2', x)) +x = g2.switch() +results.append(('main g2.2', x)) +print("RESULTS:", results) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_switch_two_greenlets.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_switch_two_greenlets.py new file mode 100644 index 000000000..3e52345a6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/fail_switch_two_greenlets.py @@ -0,0 +1,41 @@ +""" +Uses a trace function to switch greenlets at unexpected times. + +In the trace function, we switch from the current greenlet to another +greenlet, which switches +""" +import greenlet + +g1 = None +g2 = None + +switch_to_g2 = False + +def tracefunc(*args): + print('TRACE', *args) + global switch_to_g2 + if switch_to_g2: + switch_to_g2 = False + g2.switch() + print('\tLEAVE TRACE', *args) + +def g1_run(): + print('In g1_run') + global switch_to_g2 + switch_to_g2 = True + greenlet.getcurrent().parent.switch() + print('Return to g1_run') + print('Falling off end of g1_run') + +def g2_run(): + g1.switch() + print('Falling off end of g2') + +greenlet.settrace(tracefunc) + +g1 = greenlet.greenlet(g1_run) +g2 = greenlet.greenlet(g2_run) + +g1.switch() +print('Falling off end of main') +g2.switch() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/leakcheck.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/leakcheck.py new file mode 100644 index 000000000..a5152fb26 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/leakcheck.py @@ -0,0 +1,319 @@ +# Copyright (c) 2018 gevent community +# Copyright (c) 2021 greenlet community +# +# This was originally part of gevent's test suite. The main author +# (Jason Madden) vendored a copy of it into greenlet. +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in +# all copies or substantial portions of the Software. 
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +# THE SOFTWARE. +from __future__ import print_function + +import os +import sys +import gc + +from functools import wraps +import unittest + + +import objgraph + +# graphviz 0.18 (Nov 7 2021), available only on Python 3.6 and newer, +# has added type hints (sigh). It wants to use ``typing.Literal`` for +# some stuff, but that's only available on Python 3.9+. If that's not +# found, it creates a ``unittest.mock.MagicMock`` object and annotates +# with that. These are GC'able objects, and doing almost *anything* +# with them results in an explosion of objects. For example, trying to +# compare them for equality creates new objects. This causes our +# leakchecks to fail, with reports like: +# +# greenlet.tests.leakcheck.LeakCheckError: refcount increased by [337, 1333, 343, 430, 530, 643, 769] +# _Call 1820 +546 +# dict 4094 +76 +# MagicProxy 585 +73 +# tuple 2693 +66 +# _CallList 24 +3 +# weakref 1441 +1 +# function 5996 +1 +# type 736 +1 +# cell 592 +1 +# MagicMock 8 +1 +# +# To avoid this, we *could* filter this type of object out early. In +# principle it could leak, but we don't use mocks in greenlet, so it +# doesn't leak from us. However, a further issue is that ``MagicMock`` +# objects have subobjects that are also GC'able, like ``_Call``, and +# those create new mocks of their own too. So we'd have to filter them +# as well, and they're not public. That's OK, we can workaround the +# problem by being very careful to never compare by equality or other +# user-defined operators, only using object identity or other builtin +# functions. + +RUNNING_ON_GITHUB_ACTIONS = os.environ.get('GITHUB_ACTIONS') +RUNNING_ON_TRAVIS = os.environ.get('TRAVIS') or RUNNING_ON_GITHUB_ACTIONS +RUNNING_ON_APPVEYOR = os.environ.get('APPVEYOR') +RUNNING_ON_CI = RUNNING_ON_TRAVIS or RUNNING_ON_APPVEYOR +RUNNING_ON_MANYLINUX = os.environ.get('GREENLET_MANYLINUX') +SKIP_LEAKCHECKS = RUNNING_ON_MANYLINUX or os.environ.get('GREENLET_SKIP_LEAKCHECKS') +SKIP_FAILING_LEAKCHECKS = os.environ.get('GREENLET_SKIP_FAILING_LEAKCHECKS') +ONLY_FAILING_LEAKCHECKS = os.environ.get('GREENLET_ONLY_FAILING_LEAKCHECKS') + +def ignores_leakcheck(func): + """ + Ignore the given object during leakchecks. + + Can be applied to a method, in which case the method will run, but + will not be subject to leak checks. + + If applied to a class, the entire class will be skipped during leakchecks. This + is intended to be used for classes that are very slow and cause problems such as + test timeouts; typically it will be used for classes that are subclasses of a base + class and specify variants of behaviour (such as pool sizes). + """ + func.ignore_leakcheck = True + return func + +def fails_leakcheck(func): + """ + Mark that the function is known to leak. 
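+
+    Usage::
+
+        @fails_leakcheck
+        def test_something_known_to_leak(self):
+            ...
+
+    If ``GREENLET_SKIP_FAILING_LEAKCHECKS`` is set in the environment, such
+    tests are skipped outright; otherwise the leak checker expects the leak
+    and fails the test if the leak does not occur.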
+ """ + func.fails_leakcheck = True + if SKIP_FAILING_LEAKCHECKS: + func = unittest.skip("Skipping known failures")(func) + return func + +class LeakCheckError(AssertionError): + pass + +if hasattr(sys, 'getobjects'): + # In a Python build with ``--with-trace-refs``, make objgraph + # trace *all* the objects, not just those that are tracked by the + # GC + class _MockGC(object): + def get_objects(self): + return sys.getobjects(0) # pylint:disable=no-member + def __getattr__(self, name): + return getattr(gc, name) + objgraph.gc = _MockGC() + fails_strict_leakcheck = fails_leakcheck +else: + def fails_strict_leakcheck(func): + """ + Decorator for a function that is known to fail when running + strict (``sys.getobjects()``) leakchecks. + + This type of leakcheck finds all objects, even those, such as + strings, which are not tracked by the garbage collector. + """ + return func + +class ignores_types_in_strict_leakcheck(object): + def __init__(self, types): + self.types = types + def __call__(self, func): + func.leakcheck_ignore_types = self.types + return func + +class _RefCountChecker(object): + + # Some builtin things that we ignore + # XXX: Those things were ignored by gevent, but they're important here, + # presumably. + IGNORED_TYPES = () #(tuple, dict, types.FrameType, types.TracebackType) + + def __init__(self, testcase, function): + self.testcase = testcase + self.function = function + self.deltas = [] + self.peak_stats = {} + self.ignored_types = () + + # The very first time we are called, we have already been + # self.setUp() by the test runner, so we don't need to do it again. + self.needs_setUp = False + + def _include_object_p(self, obj): + # pylint:disable=too-many-return-statements + # + # See the comment block at the top. We must be careful to + # avoid invoking user-defined operations. + if obj is self: + return False + kind = type(obj) + # ``self._include_object_p == obj`` returns NotImplemented + # for non-function objects, which causes the interpreter + # to try to reverse the order of arguments...which leads + # to the explosion of mock objects. We don't want that, so we implement + # the check manually. + if kind == type(self._include_object_p): + try: + # pylint:disable=not-callable + exact_method_equals = self._include_object_p.__eq__(obj) + except AttributeError: + # Python 2.7 methods may only have __cmp__, and that raises a + # TypeError for non-method arguments + # pylint:disable=no-member + exact_method_equals = self._include_object_p.__cmp__(obj) == 0 + + if exact_method_equals is not NotImplemented and exact_method_equals: + return False + + # Similarly, we need to check identity in our __dict__ to avoid mock explosions. 
+ for x in self.__dict__.values(): + if obj is x: + return False + + + if kind in self.ignored_types or kind in self.IGNORED_TYPES: + return False + + return True + + def _growth(self): + return objgraph.growth(limit=None, peak_stats=self.peak_stats, + filter=self._include_object_p) + + def _report_diff(self, growth): + if not growth: + return "" + + lines = [] + width = max(len(name) for name, _, _ in growth) + for name, count, delta in growth: + lines.append('%-*s%9d %+9d' % (width, name, count, delta)) + + diff = '\n'.join(lines) + return diff + + + def _run_test(self, args, kwargs): + gc_enabled = gc.isenabled() + gc.disable() + + if self.needs_setUp: + self.testcase.setUp() + self.testcase.skipTearDown = False + try: + self.function(self.testcase, *args, **kwargs) + finally: + self.testcase.tearDown() + self.testcase.doCleanups() + self.testcase.skipTearDown = True + self.needs_setUp = True + if gc_enabled: + gc.enable() + + def _growth_after(self): + # Grab post snapshot + # pylint:disable=no-member + if 'urlparse' in sys.modules: + sys.modules['urlparse'].clear_cache() + if 'urllib.parse' in sys.modules: + sys.modules['urllib.parse'].clear_cache() + + return self._growth() + + def _check_deltas(self, growth): + # Return false when we have decided there is no leak, + # true if we should keep looping, raises an assertion + # if we have decided there is a leak. + + deltas = self.deltas + if not deltas: + # We haven't run yet, no data, keep looping + return True + + if gc.garbage: + raise LeakCheckError("Generated uncollectable garbage %r" % (gc.garbage,)) + + + # the following configurations are classified as "no leak" + # [0, 0] + # [x, 0, 0] + # [... a, b, c, d] where a+b+c+d = 0 + # + # the following configurations are classified as "leak" + # [... z, z, z] where z > 0 + + if deltas[-2:] == [0, 0] and len(deltas) in (2, 3): + return False + + if deltas[-3:] == [0, 0, 0]: + return False + + if len(deltas) >= 4 and sum(deltas[-4:]) == 0: + return False + + if len(deltas) >= 3 and deltas[-1] > 0 and deltas[-1] == deltas[-2] and deltas[-2] == deltas[-3]: + diff = self._report_diff(growth) + raise LeakCheckError('refcount increased by %r\n%s' % (deltas, diff)) + + # OK, we don't know for sure yet. Let's search for more + if sum(deltas[-3:]) <= 0 or sum(deltas[-4:]) <= 0 or deltas[-4:].count(0) >= 2: + # this is suspicious, so give a few more runs + limit = 11 + else: + limit = 7 + if len(deltas) >= limit: + raise LeakCheckError('refcount increased by %r\n%s' + % (deltas, + self._report_diff(growth))) + + # We couldn't decide yet, keep going + return True + + def __call__(self, args, kwargs): + for _ in range(3): + gc.collect() + + expect_failure = getattr(self.function, 'fails_leakcheck', False) + if expect_failure: + self.testcase.expect_greenlet_leak = True + self.ignored_types = getattr(self.function, "leakcheck_ignore_types", ()) + + # Capture state before; the incremental will be + # updated by each call to _growth_after + growth = self._growth() + + try: + while self._check_deltas(growth): + self._run_test(args, kwargs) + + growth = self._growth_after() + + self.deltas.append(sum((stat[2] for stat in growth))) + except LeakCheckError: + if not expect_failure: + raise + else: + if expect_failure: + raise LeakCheckError("Expected %s to leak but it did not." 
% (self.function,)) + +def wrap_refcount(method): + if getattr(method, 'ignore_leakcheck', False) or SKIP_LEAKCHECKS: + return method + + @wraps(method) + def wrapper(self, *args, **kwargs): # pylint:disable=too-many-branches + if getattr(self, 'ignore_leakcheck', False): + raise unittest.SkipTest("This class ignored during leakchecks") + if ONLY_FAILING_LEAKCHECKS and not getattr(method, 'fails_leakcheck', False): + raise unittest.SkipTest("Only running tests that fail leakchecks.") + return _RefCountChecker(self, method)(args, kwargs) + + return wrapper diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_contextvars.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_contextvars.py new file mode 100644 index 000000000..b0d1ccf34 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_contextvars.py @@ -0,0 +1,312 @@ +from __future__ import print_function + +import gc +import sys +import unittest + +from functools import partial +from unittest import skipUnless +from unittest import skipIf + +from greenlet import greenlet +from greenlet import getcurrent +from . import TestCase +from . import PY314 + +try: + from contextvars import Context + from contextvars import ContextVar + from contextvars import copy_context + # From the documentation: + # + # Important: Context Variables should be created at the top module + # level and never in closures. Context objects hold strong + # references to context variables which prevents context variables + # from being properly garbage collected. + ID_VAR = ContextVar("id", default=None) + VAR_VAR = ContextVar("var", default=None) + ContextVar = None +except ImportError: + Context = ContextVar = copy_context = None + +# We don't support testing if greenlet's built-in context var support is disabled. +@skipUnless(Context is not None, "ContextVar not supported") +class ContextVarsTests(TestCase): + def _new_ctx_run(self, *args, **kwargs): + return copy_context().run(*args, **kwargs) + + def _increment(self, greenlet_id, callback, counts, expect): + ctx_var = ID_VAR + if expect is None: + self.assertIsNone(ctx_var.get()) + else: + self.assertEqual(ctx_var.get(), expect) + ctx_var.set(greenlet_id) + for _ in range(2): + counts[ctx_var.get()] += 1 + callback() + + def _test_context(self, propagate_by): + # pylint:disable=too-many-branches + ID_VAR.set(0) + + callback = getcurrent().switch + counts = dict((i, 0) for i in range(5)) + + lets = [ + greenlet(partial( + partial( + copy_context().run, + self._increment + ) if propagate_by == "run" else self._increment, + greenlet_id=i, + callback=callback, + counts=counts, + expect=( + i - 1 if propagate_by == "share" else + 0 if propagate_by in ("set", "run") else None + ) + )) + for i in range(1, 5) + ] + + for let in lets: + if propagate_by == "set": + let.gr_context = copy_context() + elif propagate_by == "share": + let.gr_context = getcurrent().gr_context + + for i in range(2): + counts[ID_VAR.get()] += 1 + for let in lets: + let.switch() + + if propagate_by == "run": + # Must leave each context.run() in reverse order of entry + for let in reversed(lets): + let.switch() + else: + # No context.run(), so fine to exit in any order. + for let in lets: + let.switch() + + for let in lets: + self.assertTrue(let.dead) + # When using run(), we leave the run() as the greenlet dies, + # and there's no context "underneath". When not using run(), + # gr_context still reflects the context the greenlet was + # running in. 
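+            # Summarizing the assertions below:
+            #   propagate_by == 'run'  -> let.gr_context is None after death
+            #   anything else          -> let.gr_context survives the greenlet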
+ if propagate_by == 'run': + self.assertIsNone(let.gr_context) + else: + self.assertIsNotNone(let.gr_context) + + + if propagate_by == "share": + self.assertEqual(counts, {0: 1, 1: 1, 2: 1, 3: 1, 4: 6}) + else: + self.assertEqual(set(counts.values()), set([2])) + + def test_context_propagated_by_context_run(self): + self._new_ctx_run(self._test_context, "run") + + def test_context_propagated_by_setting_attribute(self): + self._new_ctx_run(self._test_context, "set") + + def test_context_not_propagated(self): + self._new_ctx_run(self._test_context, None) + + def test_context_shared(self): + self._new_ctx_run(self._test_context, "share") + + def test_break_ctxvars(self): + let1 = greenlet(copy_context().run) + let2 = greenlet(copy_context().run) + let1.switch(getcurrent().switch) + let2.switch(getcurrent().switch) + # Since let2 entered the current context and let1 exits its own, the + # interpreter emits: + # RuntimeError: cannot exit context: thread state references a different context object + let1.switch() + + def test_not_broken_if_using_attribute_instead_of_context_run(self): + let1 = greenlet(getcurrent().switch) + let2 = greenlet(getcurrent().switch) + let1.gr_context = copy_context() + let2.gr_context = copy_context() + let1.switch() + let2.switch() + let1.switch() + let2.switch() + + def test_context_assignment_while_running(self): + # pylint:disable=too-many-statements + ID_VAR.set(None) + + def target(): + self.assertIsNone(ID_VAR.get()) + self.assertIsNone(gr.gr_context) + + # Context is created on first use + ID_VAR.set(1) + self.assertIsInstance(gr.gr_context, Context) + self.assertEqual(ID_VAR.get(), 1) + self.assertEqual(gr.gr_context[ID_VAR], 1) + + # Clearing the context makes it get re-created as another + # empty context when next used + old_context = gr.gr_context + gr.gr_context = None # assign None while running + self.assertIsNone(ID_VAR.get()) + self.assertIsNone(gr.gr_context) + ID_VAR.set(2) + self.assertIsInstance(gr.gr_context, Context) + self.assertEqual(ID_VAR.get(), 2) + self.assertEqual(gr.gr_context[ID_VAR], 2) + + new_context = gr.gr_context + getcurrent().parent.switch((old_context, new_context)) + # parent switches us back to old_context + + self.assertEqual(ID_VAR.get(), 1) + gr.gr_context = new_context # assign non-None while running + self.assertEqual(ID_VAR.get(), 2) + + getcurrent().parent.switch() + # parent switches us back to no context + self.assertIsNone(ID_VAR.get()) + self.assertIsNone(gr.gr_context) + gr.gr_context = old_context + self.assertEqual(ID_VAR.get(), 1) + + getcurrent().parent.switch() + # parent switches us back to no context + self.assertIsNone(ID_VAR.get()) + self.assertIsNone(gr.gr_context) + + gr = greenlet(target) + + with self.assertRaisesRegex(AttributeError, "can't delete context attribute"): + del gr.gr_context + + self.assertIsNone(gr.gr_context) + old_context, new_context = gr.switch() + self.assertIs(new_context, gr.gr_context) + self.assertEqual(old_context[ID_VAR], 1) + self.assertEqual(new_context[ID_VAR], 2) + self.assertEqual(new_context.run(ID_VAR.get), 2) + gr.gr_context = old_context # assign non-None while suspended + gr.switch() + self.assertIs(gr.gr_context, new_context) + gr.gr_context = None # assign None while suspended + gr.switch() + self.assertIs(gr.gr_context, old_context) + gr.gr_context = None + gr.switch() + self.assertIsNone(gr.gr_context) + + # Make sure there are no reference leaks + gr = None + gc.collect() + # Python 3.14 elides reference counting operations + # in some cases. 
See https://github.com/python/cpython/pull/130708 + self.assertEqual(sys.getrefcount(old_context), 2 if not PY314 else 1) + self.assertEqual(sys.getrefcount(new_context), 2 if not PY314 else 1) + + def test_context_assignment_different_thread(self): + import threading + VAR_VAR.set(None) + ctx = Context() + + is_running = threading.Event() + should_suspend = threading.Event() + did_suspend = threading.Event() + should_exit = threading.Event() + holder = [] + + def greenlet_in_thread_fn(): + VAR_VAR.set(1) + is_running.set() + should_suspend.wait(10) + VAR_VAR.set(2) + getcurrent().parent.switch() + holder.append(VAR_VAR.get()) + + def thread_fn(): + gr = greenlet(greenlet_in_thread_fn) + gr.gr_context = ctx + holder.append(gr) + gr.switch() + did_suspend.set() + should_exit.wait(10) + gr.switch() + del gr + greenlet() # trigger cleanup + + thread = threading.Thread(target=thread_fn, daemon=True) + thread.start() + is_running.wait(10) + gr = holder[0] + + # Can't access or modify context if the greenlet is running + # in a different thread + with self.assertRaisesRegex(ValueError, "running in a different"): + getattr(gr, 'gr_context') + with self.assertRaisesRegex(ValueError, "running in a different"): + gr.gr_context = None + + should_suspend.set() + did_suspend.wait(10) + + # OK to access and modify context if greenlet is suspended + self.assertIs(gr.gr_context, ctx) + self.assertEqual(gr.gr_context[VAR_VAR], 2) + gr.gr_context = None + + should_exit.set() + thread.join(10) + + self.assertEqual(holder, [gr, None]) + + # Context can still be accessed/modified when greenlet is dead: + self.assertIsNone(gr.gr_context) + gr.gr_context = ctx + self.assertIs(gr.gr_context, ctx) + + # Otherwise we leak greenlets on some platforms. + # XXX: Should be able to do this automatically + del holder[:] + gr = None + thread = None + + def test_context_assignment_wrong_type(self): + g = greenlet() + with self.assertRaisesRegex(TypeError, + "greenlet context must be a contextvars.Context or None"): + g.gr_context = self + + +@skipIf(Context is not None, "ContextVar supported") +class NoContextVarsTests(TestCase): + def test_contextvars_errors(self): + let1 = greenlet(getcurrent().switch) + self.assertFalse(hasattr(let1, 'gr_context')) + with self.assertRaises(AttributeError): + getattr(let1, 'gr_context') + + with self.assertRaises(AttributeError): + let1.gr_context = None + + let1.switch() + + with self.assertRaises(AttributeError): + getattr(let1, 'gr_context') + + with self.assertRaises(AttributeError): + let1.gr_context = None + + del let1 + + +if __name__ == '__main__': + unittest.main() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_cpp.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_cpp.py new file mode 100644 index 000000000..2d0cc9c97 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_cpp.py @@ -0,0 +1,73 @@ +from __future__ import print_function +from __future__ import absolute_import + +import subprocess +import unittest + +import greenlet +from . import _test_extension_cpp +from . import TestCase +from . 
import WIN + +class CPPTests(TestCase): + def test_exception_switch(self): + greenlets = [] + for i in range(4): + g = greenlet.greenlet(_test_extension_cpp.test_exception_switch) + g.switch(i) + greenlets.append(g) + for i, g in enumerate(greenlets): + self.assertEqual(g.switch(), i) + + def _do_test_unhandled_exception(self, target): + import os + import sys + script = os.path.join( + os.path.dirname(__file__), + 'fail_cpp_exception.py', + ) + args = [sys.executable, script, target.__name__ if not isinstance(target, str) else target] + __traceback_info__ = args + with self.assertRaises(subprocess.CalledProcessError) as exc: + subprocess.check_output( + args, + encoding='utf-8', + stderr=subprocess.STDOUT + ) + + ex = exc.exception + expected_exit = self.get_expected_returncodes_for_aborted_process() + self.assertIn(ex.returncode, expected_exit) + self.assertIn('fail_cpp_exception is running', ex.output) + return ex.output + + + def test_unhandled_nonstd_exception_aborts(self): + # verify that plain unhandled throw aborts + self._do_test_unhandled_exception(_test_extension_cpp.test_exception_throw_nonstd) + + def test_unhandled_std_exception_aborts(self): + # verify that plain unhandled throw aborts + self._do_test_unhandled_exception(_test_extension_cpp.test_exception_throw_std) + + @unittest.skipIf(WIN, "XXX: This does not crash on Windows") + # Meaning the exception is getting lost somewhere... + def test_unhandled_std_exception_as_greenlet_function_aborts(self): + # verify that plain unhandled throw aborts + output = self._do_test_unhandled_exception('run_as_greenlet_target') + self.assertIn( + # We really expect this to be prefixed with "greenlet: Unhandled C++ exception:" + # as added by our handler for std::exception (see TUserGreenlet.cpp), but + # that's not correct everywhere --- our handler never runs before std::terminate + # gets called (for example, on arm32). + 'Thrown from an extension.', + output + ) + + def test_unhandled_exception_in_greenlet_aborts(self): + # verify that unhandled throw called in greenlet aborts too + self._do_test_unhandled_exception('run_unhandled_exception_in_greenlet_aborts') + + +if __name__ == '__main__': + unittest.main() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_extension_interface.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_extension_interface.py new file mode 100644 index 000000000..34b66567f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_extension_interface.py @@ -0,0 +1,115 @@ +from __future__ import print_function +from __future__ import absolute_import + +import sys + +import greenlet +from . import _test_extension +from . import TestCase + +# pylint:disable=c-extension-no-member + +class CAPITests(TestCase): + def test_switch(self): + self.assertEqual( + 50, _test_extension.test_switch(greenlet.greenlet(lambda: 50))) + + def test_switch_kwargs(self): + def adder(x, y): + return x * y + g = greenlet.greenlet(adder) + self.assertEqual(6, _test_extension.test_switch_kwargs(g, x=3, y=2)) + + def test_setparent(self): + # pylint:disable=disallowed-name + def foo(): + def bar(): + greenlet.getcurrent().parent.switch() + + # This final switch should go back to the main greenlet, since + # the test_setparent() function in the C extension should have + # reparented this greenlet. 
+ greenlet.getcurrent().parent.switch() + raise AssertionError("Should never have reached this code") + child = greenlet.greenlet(bar) + child.switch() + greenlet.getcurrent().parent.switch(child) + greenlet.getcurrent().parent.throw( + AssertionError("Should never reach this code")) + foo_child = greenlet.greenlet(foo).switch() + self.assertEqual(None, _test_extension.test_setparent(foo_child)) + + def test_getcurrent(self): + _test_extension.test_getcurrent() + + def test_new_greenlet(self): + self.assertEqual(-15, _test_extension.test_new_greenlet(lambda: -15)) + + def test_raise_greenlet_dead(self): + self.assertRaises( + greenlet.GreenletExit, _test_extension.test_raise_dead_greenlet) + + def test_raise_greenlet_error(self): + self.assertRaises( + greenlet.error, _test_extension.test_raise_greenlet_error) + + def test_throw(self): + seen = [] + + def foo(): # pylint:disable=disallowed-name + try: + greenlet.getcurrent().parent.switch() + except ValueError: + seen.append(sys.exc_info()[1]) + except greenlet.GreenletExit: + raise AssertionError + g = greenlet.greenlet(foo) + g.switch() + _test_extension.test_throw(g) + self.assertEqual(len(seen), 1) + self.assertTrue( + isinstance(seen[0], ValueError), + "ValueError was not raised in foo()") + self.assertEqual( + str(seen[0]), + 'take that sucka!', + "message doesn't match") + + def test_non_traceback_param(self): + with self.assertRaises(TypeError) as exc: + _test_extension.test_throw_exact( + greenlet.getcurrent(), + Exception, + Exception(), + self + ) + self.assertEqual(str(exc.exception), + "throw() third argument must be a traceback object") + + def test_instance_of_wrong_type(self): + with self.assertRaises(TypeError) as exc: + _test_extension.test_throw_exact( + greenlet.getcurrent(), + Exception(), + BaseException(), + None, + ) + + self.assertEqual(str(exc.exception), + "instance exception may not have a separate value") + + def test_not_throwable(self): + with self.assertRaises(TypeError) as exc: + _test_extension.test_throw_exact( + greenlet.getcurrent(), + "abc", + None, + None, + ) + self.assertEqual(str(exc.exception), + "exceptions must be classes, or instances, not str") + + +if __name__ == '__main__': + import unittest + unittest.main() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_gc.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_gc.py new file mode 100644 index 000000000..994addb9b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_gc.py @@ -0,0 +1,86 @@ +import gc + +import weakref + +import greenlet + + +from . import TestCase +from .leakcheck import fails_leakcheck +# These only work with greenlet gc support +# which is no longer optional. 
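# A minimal sketch of the property the GCTests below rely on, using only
# the public greenlet API (illustrative; this snippet is not part of the
# vendored file): once a greenlet has run to completion it keeps no frames
# alive, so dropping the last strong reference makes it collectable.
import gc
import weakref
import greenlet

g = greenlet.greenlet(lambda: None)
g.switch()                # runs to completion; ``g`` is now dead
ref = weakref.ref(g)
del g
gc.collect()              # mirrors the tests; refcounting alone suffices here
assert ref() is None      # nothing kept the dead greenlet alive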
+assert greenlet.GREENLET_USE_GC + +class GCTests(TestCase): + def test_dead_circular_ref(self): + o = weakref.ref(greenlet.greenlet(greenlet.getcurrent).switch()) + gc.collect() + if o() is not None: + import sys + print("O IS NOT NONE.", sys.getrefcount(o())) + self.assertIsNone(o()) + self.assertFalse(gc.garbage, gc.garbage) + + def test_circular_greenlet(self): + class circular_greenlet(greenlet.greenlet): + self = None + o = circular_greenlet() + o.self = o + o = weakref.ref(o) + gc.collect() + self.assertIsNone(o()) + self.assertFalse(gc.garbage, gc.garbage) + + def test_inactive_ref(self): + class inactive_greenlet(greenlet.greenlet): + def __init__(self): + greenlet.greenlet.__init__(self, run=self.run) + + def run(self): + pass + o = inactive_greenlet() + o = weakref.ref(o) + gc.collect() + self.assertIsNone(o()) + self.assertFalse(gc.garbage, gc.garbage) + + @fails_leakcheck + def test_finalizer_crash(self): + # This test is designed to crash when active greenlets + # are made garbage collectable, until the underlying + # problem is resolved. How does it work: + # - order of object creation is important + # - array is created first, so it is moved to unreachable first + # - we create a cycle between a greenlet and this array + # - we create an object that participates in gc, is only + # referenced by a greenlet, and would corrupt gc lists + # on destruction, the easiest is to use an object with + # a finalizer + # - because array is the first object in unreachable it is + # cleared first, which causes all references to greenlet + # to disappear and causes greenlet to be destroyed, but since + # it is still live it causes a switch during gc, which causes + # an object with finalizer to be destroyed, which causes stack + # corruption and then a crash + + class object_with_finalizer(object): + def __del__(self): + pass + array = [] + parent = greenlet.getcurrent() + def greenlet_body(): + greenlet.getcurrent().object = object_with_finalizer() + try: + parent.switch() + except greenlet.GreenletExit: + print("Got greenlet exit!") + finally: + del greenlet.getcurrent().object + g = greenlet.greenlet(greenlet_body) + g.array = array + array.append(g) + g.switch() + del array + del g + greenlet.getcurrent() + gc.collect() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_generator.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_generator.py new file mode 100644 index 000000000..ca4a644b6 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_generator.py @@ -0,0 +1,59 @@ + +from greenlet import greenlet + +from . 
import TestCase + +class genlet(greenlet): + parent = None + def __init__(self, *args, **kwds): + self.args = args + self.kwds = kwds + + def run(self): + fn, = self.fn + fn(*self.args, **self.kwds) + + def __iter__(self): + return self + + def __next__(self): + self.parent = greenlet.getcurrent() + result = self.switch() + if self: + return result + + raise StopIteration + + next = __next__ + + +def Yield(value): + g = greenlet.getcurrent() + while not isinstance(g, genlet): + if g is None: + raise RuntimeError('yield outside a genlet') + g = g.parent + g.parent.switch(value) + + +def generator(func): + class Generator(genlet): + fn = (func,) + return Generator + +# ____________________________________________________________ + + +class GeneratorTests(TestCase): + def test_generator(self): + seen = [] + + def g(n): + for i in range(n): + seen.append(i) + Yield(i) + g = generator(g) + for _ in range(3): + for j in g(5): + seen.append(j) + self.assertEqual(seen, 3 * [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_generator_nested.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_generator_nested.py new file mode 100644 index 000000000..8d752a63d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_generator_nested.py @@ -0,0 +1,168 @@ + +from greenlet import greenlet +from . import TestCase +from .leakcheck import fails_leakcheck + +class genlet(greenlet): + parent = None + def __init__(self, *args, **kwds): + self.args = args + self.kwds = kwds + self.child = None + + def run(self): + # Note the function is packed in a tuple + # to avoid creating a bound method for it. + fn, = self.fn + fn(*self.args, **self.kwds) + + def __iter__(self): + return self + + def set_child(self, child): + self.child = child + + def __next__(self): + if self.child: + child = self.child + while child.child: + tmp = child + child = child.child + tmp.child = None + + result = child.switch() + else: + self.parent = greenlet.getcurrent() + result = self.switch() + + if self: + return result + + raise StopIteration + + next = __next__ + +def Yield(value, level=1): + g = greenlet.getcurrent() + + while level != 0: + if not isinstance(g, genlet): + raise RuntimeError('yield outside a genlet') + if level > 1: + g.parent.set_child(g) + g = g.parent + level -= 1 + + g.switch(value) + + +def Genlet(func): + class TheGenlet(genlet): + fn = (func,) + return TheGenlet + +# ____________________________________________________________ + + +def g1(n, seen): + for i in range(n): + seen.append(i + 1) + yield i + + +def g2(n, seen): + for i in range(n): + seen.append(i + 1) + Yield(i) + +g2 = Genlet(g2) + + +def nested(i): + Yield(i) + + +def g3(n, seen): + for i in range(n): + seen.append(i + 1) + nested(i) +g3 = Genlet(g3) + + +def a(n): + if n == 0: + return + for ii in ax(n - 1): + Yield(ii) + Yield(n) +ax = Genlet(a) + + +def perms(l): + if len(l) > 1: + for e in l: + # No syntactical sugar for generator expressions + x = [Yield([e] + p) for p in perms([x for x in l if x != e])] + assert x + else: + Yield(l) +perms = Genlet(perms) + + +def gr1(n): + for ii in range(1, n): + Yield(ii) + Yield(ii * ii, 2) + +gr1 = Genlet(gr1) + + +def gr2(n, seen): + for ii in gr1(n): + seen.append(ii) + +gr2 = Genlet(gr2) + + +class NestedGeneratorTests(TestCase): + def test_layered_genlets(self): + seen = [] + for ii in gr2(5, seen): + seen.append(ii) + self.assertEqual(seen, [1, 1, 2, 4, 3, 9, 4, 16]) + + 
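# The genlet/Yield machinery above generalizes a simple pattern. Here is a
# self-contained, single-level variant for orientation (illustrative; the
# names are invented and this snippet is not part of the vendored file):
# the producer is suspended between items instead of building a list.
from greenlet import greenlet as _greenlet

class SimpleGen:
    def __init__(self, producer):
        # The producer receives an ``emit`` callable in place of Yield().
        self._g = _greenlet(lambda: producer(self._emit))

    def _emit(self, value):
        # Hand ``value`` to the consumer and wait to be resumed. Assumes
        # the consumer iterates from the greenlet that created us.
        self._g.parent.switch(value)

    def __iter__(self):
        return self

    def __next__(self):
        value = self._g.switch()
        if self._g.dead:            # producer returned: iteration is over
            raise StopIteration
        return value

def _squares(emit):
    for i in range(3):
        emit(i * i)

assert list(SimpleGen(_squares)) == [0, 1, 4]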
@fails_leakcheck + def test_permutations(self): + gen_perms = perms(list(range(4))) + permutations = list(gen_perms) + self.assertEqual(len(permutations), 4 * 3 * 2 * 1) + self.assertIn([0, 1, 2, 3], permutations) + self.assertIn([3, 2, 1, 0], permutations) + res = [] + for ii in zip(perms(list(range(4))), perms(list(range(3)))): + res.append(ii) + self.assertEqual( + res, + [([0, 1, 2, 3], [0, 1, 2]), ([0, 1, 3, 2], [0, 2, 1]), + ([0, 2, 1, 3], [1, 0, 2]), ([0, 2, 3, 1], [1, 2, 0]), + ([0, 3, 1, 2], [2, 0, 1]), ([0, 3, 2, 1], [2, 1, 0])]) + # XXX Test to make sure we are working as a generator expression + + def test_genlet_simple(self): + for g in g1, g2, g3: + seen = [] + for _ in range(3): + for j in g(5, seen): + seen.append(j) + self.assertEqual(seen, 3 * [1, 0, 2, 1, 3, 2, 4, 3, 5, 4]) + + def test_genlet_bad(self): + try: + Yield(10) + except RuntimeError: + pass + + def test_nested_genlets(self): + seen = [] + for ii in ax(5): + seen.append(ii) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_greenlet.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_greenlet.py new file mode 100644 index 000000000..fd05c0db1 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_greenlet.py @@ -0,0 +1,1327 @@ +import gc +import sys +import time +import threading +import unittest + +from abc import ABCMeta +from abc import abstractmethod + +import greenlet +from greenlet import greenlet as RawGreenlet +from . import TestCase +from . import RUNNING_ON_MANYLINUX +from . import PY313 +from . import PY314 +from .leakcheck import fails_leakcheck + + +# We manually manage locks in many tests +# pylint:disable=consider-using-with +# pylint:disable=too-many-public-methods +# This module is quite large. +# TODO: Refactor into separate test files. For example, +# put all the regression tests that used to produce +# crashes in test_greenlet_no_crash; put tests that DO deliberately crash +# the interpreter into test_greenlet_crash. +# pylint:disable=too-many-lines + +class SomeError(Exception): + pass + + +def fmain(seen): + try: + greenlet.getcurrent().parent.switch() + except: + seen.append(sys.exc_info()[0]) + raise + raise SomeError + + +def send_exception(g, exc): + # note: send_exception(g, exc) can be now done with g.throw(exc). + # the purpose of this test is to explicitly check the propagation rules. 
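# The propagation rule being pinned down here, reduced to a standalone
# sketch (illustrative; not part of the vendored file): an exception that
# escapes a greenlet's run function is re-raised in its parent, surfacing
# from the switch() call that resumed it.
import greenlet

def _boom():
    raise ValueError("from child")

_child = greenlet.greenlet(_boom)   # parent defaults to the current greenlet
try:
    _child.switch()                 # the child raises; we are its parent
except ValueError as e:
    assert str(e) == "from child"
assert _child.dead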
+ def crasher(exc): + raise exc + g1 = RawGreenlet(crasher, parent=g) + g1.switch(exc) + + +class TestGreenlet(TestCase): + + def _do_simple_test(self): + lst = [] + + def f(): + lst.append(1) + greenlet.getcurrent().parent.switch() + lst.append(3) + g = RawGreenlet(f) + lst.append(0) + g.switch() + lst.append(2) + g.switch() + lst.append(4) + self.assertEqual(lst, list(range(5))) + + def test_simple(self): + self._do_simple_test() + + def test_switch_no_run_raises_AttributeError(self): + g = RawGreenlet() + with self.assertRaises(AttributeError) as exc: + g.switch() + + self.assertIn("run", str(exc.exception)) + + def test_throw_no_run_raises_AttributeError(self): + g = RawGreenlet() + with self.assertRaises(AttributeError) as exc: + g.throw(SomeError) + + self.assertIn("run", str(exc.exception)) + + def test_parent_equals_None(self): + g = RawGreenlet(parent=None) + self.assertIsNotNone(g) + self.assertIs(g.parent, greenlet.getcurrent()) + + def test_run_equals_None(self): + g = RawGreenlet(run=None) + self.assertIsNotNone(g) + self.assertIsNone(g.run) + + def test_two_children(self): + lst = [] + + def f(): + lst.append(1) + greenlet.getcurrent().parent.switch() + lst.extend([1, 1]) + g = RawGreenlet(f) + h = RawGreenlet(f) + g.switch() + self.assertEqual(len(lst), 1) + h.switch() + self.assertEqual(len(lst), 2) + h.switch() + self.assertEqual(len(lst), 4) + self.assertEqual(h.dead, True) + g.switch() + self.assertEqual(len(lst), 6) + self.assertEqual(g.dead, True) + + def test_two_recursive_children(self): + lst = [] + + def f(): + lst.append('b') + greenlet.getcurrent().parent.switch() + + def g(): + lst.append('a') + g = RawGreenlet(f) + g.switch() + lst.append('c') + self.assertEqual(sys.getrefcount(g), 2 if not PY314 else 1) + g = RawGreenlet(g) + # Python 3.14 elides reference counting operations + # in some cases. See https://github.com/python/cpython/pull/130708 + self.assertEqual(sys.getrefcount(g), 2 if not PY314 else 1) + g.switch() + self.assertEqual(lst, ['a', 'b', 'c']) + # Just the one in this frame, plus the one on the stack we pass to the function + self.assertEqual(sys.getrefcount(g), 2 if not PY314 else 1) + + def test_threads(self): + success = [] + + def f(): + self._do_simple_test() + success.append(True) + ths = [threading.Thread(target=f) for i in range(10)] + for th in ths: + th.start() + for th in ths: + th.join(10) + self.assertEqual(len(success), len(ths)) + + def test_exception(self): + seen = [] + g1 = RawGreenlet(fmain) + g2 = RawGreenlet(fmain) + g1.switch(seen) + g2.switch(seen) + g2.parent = g1 + + self.assertEqual(seen, []) + #with self.assertRaises(SomeError): + # p("***Switching back") + # g2.switch() + # Creating this as a bound method can reveal bugs that + # are hidden on newer versions of Python that avoid creating + # bound methods for direct expressions; IOW, don't use the `with` + # form! 
+ self.assertRaises(SomeError, g2.switch) + self.assertEqual(seen, [SomeError]) + + value = g2.switch() + self.assertEqual(value, ()) + self.assertEqual(seen, [SomeError]) + + value = g2.switch(25) + self.assertEqual(value, 25) + self.assertEqual(seen, [SomeError]) + + + def test_send_exception(self): + seen = [] + g1 = RawGreenlet(fmain) + g1.switch(seen) + self.assertRaises(KeyError, send_exception, g1, KeyError) + self.assertEqual(seen, [KeyError]) + + def test_dealloc(self): + seen = [] + g1 = RawGreenlet(fmain) + g2 = RawGreenlet(fmain) + g1.switch(seen) + g2.switch(seen) + self.assertEqual(seen, []) + del g1 + gc.collect() + self.assertEqual(seen, [greenlet.GreenletExit]) + del g2 + gc.collect() + self.assertEqual(seen, [greenlet.GreenletExit, greenlet.GreenletExit]) + + def test_dealloc_catches_GreenletExit_throws_other(self): + def run(): + try: + greenlet.getcurrent().parent.switch() + except greenlet.GreenletExit: + raise SomeError from None + + g = RawGreenlet(run) + g.switch() + # Destroying the only reference to the greenlet causes it + # to get GreenletExit; when it in turn raises, even though we're the parent + # we don't get the exception, it just gets printed. + # When we run on 3.8 only, we can use sys.unraisablehook + oldstderr = sys.stderr + from io import StringIO + stderr = sys.stderr = StringIO() + try: + del g + finally: + sys.stderr = oldstderr + + v = stderr.getvalue() + self.assertIn("Exception", v) + self.assertIn('ignored', v) + self.assertIn("SomeError", v) + + + @unittest.skipIf( + PY313 and RUNNING_ON_MANYLINUX, + "Sometimes flaky (getting one GreenletExit in the second list)" + # Probably due to funky timing interactions? + # TODO: FIXME Make that work. + ) + + def test_dealloc_other_thread(self): + seen = [] + someref = [] + + bg_glet_created_running_and_no_longer_ref_in_bg = threading.Event() + fg_ref_released = threading.Event() + bg_should_be_clear = threading.Event() + ok_to_exit_bg_thread = threading.Event() + + def f(): + g1 = RawGreenlet(fmain) + g1.switch(seen) + someref.append(g1) + del g1 + gc.collect() + + bg_glet_created_running_and_no_longer_ref_in_bg.set() + fg_ref_released.wait(3) + + RawGreenlet() # trigger release + bg_should_be_clear.set() + ok_to_exit_bg_thread.wait(3) + RawGreenlet() # One more time + + t = threading.Thread(target=f) + t.start() + bg_glet_created_running_and_no_longer_ref_in_bg.wait(10) + + self.assertEqual(seen, []) + self.assertEqual(len(someref), 1) + del someref[:] + gc.collect() + # g1 is not released immediately because it's from another thread + self.assertEqual(seen, []) + fg_ref_released.set() + bg_should_be_clear.wait(3) + try: + self.assertEqual(seen, [greenlet.GreenletExit]) + finally: + ok_to_exit_bg_thread.set() + t.join(10) + del seen[:] + del someref[:] + + def test_frame(self): + def f1(): + f = sys._getframe(0) # pylint:disable=protected-access + self.assertEqual(f.f_back, None) + greenlet.getcurrent().parent.switch(f) + return "meaning of life" + g = RawGreenlet(f1) + frame = g.switch() + self.assertTrue(frame is g.gr_frame) + self.assertTrue(g) + + from_g = g.switch() + self.assertFalse(g) + self.assertEqual(from_g, 'meaning of life') + self.assertEqual(g.gr_frame, None) + + def test_thread_bug(self): + def runner(x): + g = RawGreenlet(lambda: time.sleep(x)) + g.switch() + t1 = threading.Thread(target=runner, args=(0.2,)) + t2 = threading.Thread(target=runner, args=(0.3,)) + t1.start() + t2.start() + t1.join(10) + t2.join(10) + + def test_switch_kwargs(self): + def run(a, b): + self.assertEqual(a, 4) 
+ self.assertEqual(b, 2) + return 42 + x = RawGreenlet(run).switch(a=4, b=2) + self.assertEqual(x, 42) + + def test_switch_kwargs_to_parent(self): + def run(x): + greenlet.getcurrent().parent.switch(x=x) + greenlet.getcurrent().parent.switch(2, x=3) + return x, x ** 2 + g = RawGreenlet(run) + self.assertEqual({'x': 3}, g.switch(3)) + self.assertEqual(((2,), {'x': 3}), g.switch()) + self.assertEqual((3, 9), g.switch()) + + def test_switch_to_another_thread(self): + data = {} + created_event = threading.Event() + done_event = threading.Event() + + def run(): + data['g'] = RawGreenlet(lambda: None) + created_event.set() + done_event.wait(10) + thread = threading.Thread(target=run) + thread.start() + created_event.wait(10) + with self.assertRaises(greenlet.error): + data['g'].switch() + done_event.set() + thread.join(10) + # XXX: Should handle this automatically + data.clear() + + def test_exc_state(self): + def f(): + try: + raise ValueError('fun') + except: # pylint:disable=bare-except + exc_info = sys.exc_info() + RawGreenlet(h).switch() + self.assertEqual(exc_info, sys.exc_info()) + + def h(): + self.assertEqual(sys.exc_info(), (None, None, None)) + + RawGreenlet(f).switch() + + def test_instance_dict(self): + def f(): + greenlet.getcurrent().test = 42 + def deldict(g): + del g.__dict__ + def setdict(g, value): + g.__dict__ = value + g = RawGreenlet(f) + self.assertEqual(g.__dict__, {}) + g.switch() + self.assertEqual(g.test, 42) + self.assertEqual(g.__dict__, {'test': 42}) + g.__dict__ = g.__dict__ + self.assertEqual(g.__dict__, {'test': 42}) + self.assertRaises(TypeError, deldict, g) + self.assertRaises(TypeError, setdict, g, 42) + + def test_running_greenlet_has_no_run(self): + has_run = [] + def func(): + has_run.append( + hasattr(greenlet.getcurrent(), 'run') + ) + + g = RawGreenlet(func) + g.switch() + self.assertEqual(has_run, [False]) + + def test_deepcopy(self): + import copy + self.assertRaises(TypeError, copy.copy, RawGreenlet()) + self.assertRaises(TypeError, copy.deepcopy, RawGreenlet()) + + def test_parent_restored_on_kill(self): + hub = RawGreenlet(lambda: None) + main = greenlet.getcurrent() + result = [] + def worker(): + try: + # Wait to be killed by going back to the test. + main.switch() + except greenlet.GreenletExit: + # Resurrect and switch to parent + result.append(greenlet.getcurrent().parent) + result.append(greenlet.getcurrent()) + hub.switch() + g = RawGreenlet(worker, parent=hub) + g.switch() + # delete the only reference, thereby raising GreenletExit + del g + self.assertTrue(result) + self.assertIs(result[0], main) + self.assertIs(result[1].parent, hub) + # Delete them, thereby breaking the cycle between the greenlet + # and the frame, which otherwise would never be collectable + # XXX: We should be able to automatically fix this. 
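# What "killed by del" means in these tests, sketched with the public API
# (illustrative; not part of the vendored file): dropping the last
# reference to a suspended greenlet raises GreenletExit at its suspension
# point, synchronously, in the thread performing the del.
import greenlet

_log = []

def _waiter():
    try:
        greenlet.getcurrent().parent.switch()   # suspend until killed
    except greenlet.GreenletExit:
        _log.append("killed")

_g = greenlet.greenlet(_waiter)
_g.switch()        # start it; it is now suspended in parent.switch()
del _g             # last reference gone: GreenletExit is thrown into _waiter
assert _log == ["killed"]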
+ del result[:] + hub = None + main = None + + def test_parent_return_failure(self): + # No run causes AttributeError on switch + g1 = RawGreenlet() + # Greenlet that implicitly switches to parent + g2 = RawGreenlet(lambda: None, parent=g1) + # AttributeError should propagate to us, no fatal errors + with self.assertRaises(AttributeError): + g2.switch() + + def test_throw_exception_not_lost(self): + class mygreenlet(RawGreenlet): + def __getattribute__(self, name): + try: + raise Exception # pylint:disable=broad-exception-raised + except: # pylint:disable=bare-except + pass + return RawGreenlet.__getattribute__(self, name) + g = mygreenlet(lambda: None) + self.assertRaises(SomeError, g.throw, SomeError()) + + @fails_leakcheck + def _do_test_throw_to_dead_thread_doesnt_crash(self, wait_for_cleanup=False): + result = [] + def worker(): + greenlet.getcurrent().parent.switch() + + def creator(): + g = RawGreenlet(worker) + g.switch() + result.append(g) + if wait_for_cleanup: + # Let this greenlet eventually be cleaned up. + g.switch() + greenlet.getcurrent() + t = threading.Thread(target=creator) + t.start() + t.join(10) + del t + # But, depending on the operating system, the thread + # deallocator may not actually have run yet! So we can't be + # sure about the error message unless we wait. + if wait_for_cleanup: + self.wait_for_pending_cleanups() + with self.assertRaises(greenlet.error) as exc: + result[0].throw(SomeError) + + if not wait_for_cleanup: + s = str(exc.exception) + self.assertTrue( + s == "cannot switch to a different thread (which happens to have exited)" + or 'Cannot switch' in s + ) + else: + self.assertEqual( + str(exc.exception), + "cannot switch to a different thread (which happens to have exited)", + ) + + if hasattr(result[0].gr_frame, 'clear'): + # The frame is actually executing (it thinks), we can't clear it. + with self.assertRaises(RuntimeError): + result[0].gr_frame.clear() + # Unfortunately, this doesn't actually clear the references, they're in the + # fast local array. + if not wait_for_cleanup: + # f_locals has no clear method in Python 3.13 + if hasattr(result[0].gr_frame.f_locals, 'clear'): + result[0].gr_frame.f_locals.clear() + else: + self.assertIsNone(result[0].gr_frame) + + del creator + worker = None + del result[:] + # XXX: we ought to be able to automatically fix this. 
+ # See issue 252 + self.expect_greenlet_leak = True # direct us not to wait for it to go away + + @fails_leakcheck + def test_throw_to_dead_thread_doesnt_crash(self): + self._do_test_throw_to_dead_thread_doesnt_crash() + + def test_throw_to_dead_thread_doesnt_crash_wait(self): + self._do_test_throw_to_dead_thread_doesnt_crash(True) + + @fails_leakcheck + def test_recursive_startup(self): + class convoluted(RawGreenlet): + def __init__(self): + RawGreenlet.__init__(self) + self.count = 0 + def __getattribute__(self, name): + if name == 'run' and self.count == 0: + self.count = 1 + self.switch(43) + return RawGreenlet.__getattribute__(self, name) + def run(self, value): + while True: + self.parent.switch(value) + g = convoluted() + self.assertEqual(g.switch(42), 43) + # Exits the running greenlet, otherwise it leaks + # XXX: We should be able to automatically fix this + #g.throw(greenlet.GreenletExit) + #del g + self.expect_greenlet_leak = True + + def test_threaded_updatecurrent(self): + # released when main thread should execute + lock1 = threading.Lock() + lock1.acquire() + # released when another thread should execute + lock2 = threading.Lock() + lock2.acquire() + class finalized(object): + def __del__(self): + # happens while in green_updatecurrent() in main greenlet + # should be very careful not to accidentally call it again + # at the same time we must make sure another thread executes + lock2.release() + lock1.acquire() + # now ts_current belongs to another thread + def deallocator(): + greenlet.getcurrent().parent.switch() + def fthread(): + lock2.acquire() + greenlet.getcurrent() + del g[0] + lock1.release() + lock2.acquire() + greenlet.getcurrent() + lock1.release() + main = greenlet.getcurrent() + g = [RawGreenlet(deallocator)] + g[0].bomb = finalized() + g[0].switch() + t = threading.Thread(target=fthread) + t.start() + # let another thread grab ts_current and deallocate g[0] + lock2.release() + lock1.acquire() + # this is the corner stone + # getcurrent() will notice that ts_current belongs to another thread + # and start the update process, which would notice that g[0] should + # be deallocated, and that will execute an object's finalizer. Now, + # that object will let another thread run so it can grab ts_current + # again, which would likely crash the interpreter if there's no + # check for this case at the end of green_updatecurrent(). This test + # passes if getcurrent() returns correct result, but it's likely + # to randomly crash if it's not anyway. + self.assertEqual(greenlet.getcurrent(), main) + # wait for another thread to complete, just in case + t.join(10) + + def test_dealloc_switch_args_not_lost(self): + seen = [] + def worker(): + # wait for the value + value = greenlet.getcurrent().parent.switch() + # delete all references to ourself + del worker[0] + initiator.parent = greenlet.getcurrent().parent + # switch to main with the value, but because + # ts_current is the last reference to us we + # return here immediately, where we resurrect ourself. 
+ try: + greenlet.getcurrent().parent.switch(value) + finally: + seen.append(greenlet.getcurrent()) + def initiator(): + return 42 # implicitly falls thru to parent + + worker = [RawGreenlet(worker)] + + worker[0].switch() # prime worker + initiator = RawGreenlet(initiator, worker[0]) + value = initiator.switch() + self.assertTrue(seen) + self.assertEqual(value, 42) + + def test_tuple_subclass(self): + # The point of this test is to see what happens when a custom + # tuple subclass is used as an object passed directly to the C + # function ``green_switch``; part of ``green_switch`` checks + # the ``len()`` of the ``args`` tuple, and that can call back + # into Python. Here, when it calls back into Python, we + # recursively enter ``green_switch`` again. + + # This test is really only relevant on Python 2. The builtin + # `apply` function directly passes the given args tuple object + # to the underlying function, whereas the Python 3 version + # unpacks and repacks into an actual tuple. This could still + # happen using the C API on Python 3 though. We should write a + # builtin version of apply() ourself. + def _apply(func, a, k): + func(*a, **k) + + class mytuple(tuple): + def __len__(self): + greenlet.getcurrent().switch() + return tuple.__len__(self) + args = mytuple() + kwargs = dict(a=42) + def switchapply(): + _apply(greenlet.getcurrent().parent.switch, args, kwargs) + g = RawGreenlet(switchapply) + self.assertEqual(g.switch(), kwargs) + + def test_abstract_subclasses(self): + AbstractSubclass = ABCMeta( + 'AbstractSubclass', + (RawGreenlet,), + {'run': abstractmethod(lambda self: None)}) + + class BadSubclass(AbstractSubclass): + pass + + class GoodSubclass(AbstractSubclass): + def run(self): + pass + + GoodSubclass() # should not raise + self.assertRaises(TypeError, BadSubclass) + + def test_implicit_parent_with_threads(self): + if not gc.isenabled(): + return # cannot test with disabled gc + N = gc.get_threshold()[0] + if N < 50: + return # cannot test with such a small N + def attempt(): + lock1 = threading.Lock() + lock1.acquire() + lock2 = threading.Lock() + lock2.acquire() + recycled = [False] + def another_thread(): + lock1.acquire() # wait for gc + greenlet.getcurrent() # update ts_current + lock2.release() # release gc + t = threading.Thread(target=another_thread) + t.start() + class gc_callback(object): + def __del__(self): + lock1.release() + lock2.acquire() + recycled[0] = True + class garbage(object): + def __init__(self): + self.cycle = self + self.callback = gc_callback() + l = [] + x = range(N*2) + current = greenlet.getcurrent() + g = garbage() + for _ in x: + g = None # lose reference to garbage + if recycled[0]: + # gc callback called prematurely + t.join(10) + return False + last = RawGreenlet() + if recycled[0]: + break # yes! gc called in green_new + l.append(last) # increase allocation counter + else: + # gc callback not called when expected + gc.collect() + if recycled[0]: + t.join(10) + return False + self.assertEqual(last.parent, current) + for g in l: + self.assertEqual(g.parent, current) + return True + for _ in range(5): + if attempt(): + break + + def test_issue_245_reference_counting_subclass_no_threads(self): + # https://github.com/python-greenlet/greenlet/issues/245 + # Before the fix, this crashed pretty reliably on + # Python 3.10, at least on macOS; but much less reliably on other + # interpreters (memory layout must have changed). + # The threaded test crashed more reliably on more interpreters. 
+ from greenlet import getcurrent + from greenlet import GreenletExit + + class Greenlet(RawGreenlet): + pass + + initial_refs = sys.getrefcount(Greenlet) + # This has to be an instance variable because + # Python 2 raises a SyntaxError if we delete a local + # variable referenced in an inner scope. + self.glets = [] # pylint:disable=attribute-defined-outside-init + + def greenlet_main(): + try: + getcurrent().parent.switch() + except GreenletExit: + self.glets.append(getcurrent()) + + # Before the + for _ in range(10): + Greenlet(greenlet_main).switch() + + del self.glets + self.assertEqual(sys.getrefcount(Greenlet), initial_refs) + + @unittest.skipIf( + PY313 and RUNNING_ON_MANYLINUX, + "The manylinux images appear to hang on this test on 3.13rc2" + # Or perhaps I just got tired of waiting for the 450s timeout. + # Still, it shouldn't take anywhere near that long. Does not reproduce in + # Ubuntu images, on macOS or Windows. + ) + def test_issue_245_reference_counting_subclass_threads(self): + # https://github.com/python-greenlet/greenlet/issues/245 + from threading import Thread + from threading import Event + + from greenlet import getcurrent + + class MyGreenlet(RawGreenlet): + pass + + glets = [] + ref_cleared = Event() + + def greenlet_main(): + getcurrent().parent.switch() + + def thread_main(greenlet_running_event): + mine = MyGreenlet(greenlet_main) + glets.append(mine) + # The greenlets being deleted must be active + mine.switch() + # Don't keep any reference to it in this thread + del mine + # Let main know we published our greenlet. + greenlet_running_event.set() + # Wait for main to let us know the references are + # gone and the greenlet objects no longer reachable + ref_cleared.wait(10) + # The creating thread must call getcurrent() (or a few other + # greenlet APIs) because that's when the thread-local list of dead + # greenlets gets cleared. + getcurrent() + + # We start with 3 references to the subclass: + # - This module + # - Its __mro__ + # - The __subclassess__ attribute of greenlet + # - (If we call gc.get_referents(), we find four entries, including + # some other tuple ``(greenlet)`` that I'm not sure about but must be part + # of the machinery.) + # + # On Python 3.10 it's often enough to just run 3 threads; on Python 2.7, + # more threads are needed, and the results are still + # non-deterministic. Presumably the memory layouts are different + initial_refs = sys.getrefcount(MyGreenlet) + thread_ready_events = [] + for _ in range( + initial_refs + 45 + ): + event = Event() + thread = Thread(target=thread_main, args=(event,)) + thread_ready_events.append(event) + thread.start() + + + for done_event in thread_ready_events: + done_event.wait(10) + + + del glets[:] + ref_cleared.set() + # Let any other thread run; it will crash the interpreter + # if not fixed (or silently corrupt memory and we possibly crash + # later). 
+ self.wait_for_pending_cleanups() + self.assertEqual(sys.getrefcount(MyGreenlet), initial_refs) + + def test_falling_off_end_switches_to_unstarted_parent_raises_error(self): + def no_args(): + return 13 + + parent_never_started = RawGreenlet(no_args) + + def leaf(): + return 42 + + child = RawGreenlet(leaf, parent_never_started) + + # Because the run function takes to arguments + with self.assertRaises(TypeError): + child.switch() + + def test_falling_off_end_switches_to_unstarted_parent_works(self): + def one_arg(x): + return (x, 24) + + parent_never_started = RawGreenlet(one_arg) + + def leaf(): + return 42 + + child = RawGreenlet(leaf, parent_never_started) + + result = child.switch() + self.assertEqual(result, (42, 24)) + + def test_switch_to_dead_greenlet_with_unstarted_perverse_parent(self): + class Parent(RawGreenlet): + def __getattribute__(self, name): + if name == 'run': + raise SomeError + + + parent_never_started = Parent() + seen = [] + child = RawGreenlet(lambda: seen.append(42), parent_never_started) + # Because we automatically start the parent when the child is + # finished + with self.assertRaises(SomeError): + child.switch() + + self.assertEqual(seen, [42]) + + with self.assertRaises(SomeError): + child.switch() + self.assertEqual(seen, [42]) + + def test_switch_to_dead_greenlet_reparent(self): + seen = [] + parent_never_started = RawGreenlet(lambda: seen.append(24)) + child = RawGreenlet(lambda: seen.append(42)) + + child.switch() + self.assertEqual(seen, [42]) + + child.parent = parent_never_started + # This actually is the same as switching to the parent. + result = child.switch() + self.assertIsNone(result) + self.assertEqual(seen, [42, 24]) + + def test_can_access_f_back_of_suspended_greenlet(self): + # This tests our frame rewriting to work around Python 3.12+ having + # some interpreter frames on the C stack. It will crash in the absence + # of that logic. + main = greenlet.getcurrent() + + def outer(): + inner() + + def inner(): + main.switch(sys._getframe(0)) + + hub = RawGreenlet(outer) + # start it + hub.switch() + + # start another greenlet to make sure we aren't relying on + # anything in `hub` still being on the C stack + unrelated = RawGreenlet(lambda: None) + unrelated.switch() + + # now it is suspended + self.assertIsNotNone(hub.gr_frame) + self.assertEqual(hub.gr_frame.f_code.co_name, "inner") + self.assertIsNotNone(hub.gr_frame.f_back) + self.assertEqual(hub.gr_frame.f_back.f_code.co_name, "outer") + # The next line is what would crash + self.assertIsNone(hub.gr_frame.f_back.f_back) + + def test_get_stack_with_nested_c_calls(self): + from functools import partial + from . import _test_extension_cpp + + def recurse(v): + if v > 0: + return v * _test_extension_cpp.test_call(partial(recurse, v - 1)) + return greenlet.getcurrent().parent.switch() + + gr = RawGreenlet(recurse) + gr.switch(5) + frame = gr.gr_frame + for i in range(5): + self.assertEqual(frame.f_locals["v"], i) + frame = frame.f_back + self.assertEqual(frame.f_locals["v"], 5) + self.assertIsNone(frame.f_back) + self.assertEqual(gr.switch(10), 1200) # 1200 = 5! * 10 + + def test_frames_always_exposed(self): + # On Python 3.12 this will crash if we don't set the + # gr_frames_always_exposed attribute. 
More background: + # https://github.com/python-greenlet/greenlet/issues/388 + main = greenlet.getcurrent() + + def outer(): + inner(sys._getframe(0)) + + def inner(frame): + main.switch(frame) + + gr = RawGreenlet(outer) + frame = gr.switch() + + # Do something else to clobber the part of the C stack used by `gr`, + # so we can't skate by on "it just happened to still be there" + unrelated = RawGreenlet(lambda: None) + unrelated.switch() + + self.assertEqual(frame.f_code.co_name, "outer") + # The next line crashes on 3.12 if we haven't exposed the frames. + self.assertIsNone(frame.f_back) + + +class TestGreenletSetParentErrors(TestCase): + def test_threaded_reparent(self): + data = {} + created_event = threading.Event() + done_event = threading.Event() + + def run(): + data['g'] = RawGreenlet(lambda: None) + created_event.set() + done_event.wait(10) + + def blank(): + greenlet.getcurrent().parent.switch() + + thread = threading.Thread(target=run) + thread.start() + created_event.wait(10) + g = RawGreenlet(blank) + g.switch() + with self.assertRaises(ValueError) as exc: + g.parent = data['g'] + done_event.set() + thread.join(10) + + self.assertEqual(str(exc.exception), "parent cannot be on a different thread") + + def test_unexpected_reparenting(self): + another = [] + def worker(): + g = RawGreenlet(lambda: None) + another.append(g) + g.switch() + t = threading.Thread(target=worker) + t.start() + t.join(10) + # The first time we switch (running g_initialstub(), which is + # when we look up the run attribute) we attempt to change the + # parent to one from another thread (which also happens to be + # dead). ``g_initialstub()`` should detect this and raise a + # greenlet error. + # + # EXCEPT: With the fix for #252, this is actually detected + # sooner, when setting the parent itself. Prior to that fix, + # the main greenlet from the background thread kept a valid + # value for ``run_info``, and appeared to be a valid parent + # until we actually started the greenlet. But now that it's + # cleared, this test is catching whether ``green_setparent`` + # can detect the dead thread. + # + # Further refactoring once again changes this back to a greenlet.error + # + # We need to wait for the cleanup to happen, but we're + # deliberately leaking a main greenlet here. + self.wait_for_pending_cleanups(initial_main_greenlets=self.main_greenlets_before_test + 1) + + class convoluted(RawGreenlet): + def __getattribute__(self, name): + if name == 'run': + self.parent = another[0] # pylint:disable=attribute-defined-outside-init + return RawGreenlet.__getattribute__(self, name) + g = convoluted(lambda: None) + with self.assertRaises(greenlet.error) as exc: + g.switch() + self.assertEqual(str(exc.exception), + "cannot switch to a different thread (which happens to have exited)") + del another[:] + + def test_unexpected_reparenting_thread_running(self): + # Like ``test_unexpected_reparenting``, except the background thread is + # actually still alive. 
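# The thread-affinity rule this class probes, in isolation (illustrative;
# not part of the vendored file, and the exact message text is an
# implementation detail): a greenlet's parent must live on the same thread.
import threading
import greenlet

_holder = {}
_created = threading.Event()
_done = threading.Event()

def _other_thread():
    _holder['g'] = greenlet.greenlet(lambda: None)
    _created.set()
    _done.wait(10)     # keep this thread alive while we try the assignment

_t = threading.Thread(target=_other_thread)
_t.start()
_created.wait(10)

def _blank():
    greenlet.getcurrent().parent.switch()

_mine = greenlet.greenlet(_blank)
_mine.switch()         # started and suspended, as in test_threaded_reparent
_raised = False
try:
    _mine.parent = _holder['g']
except ValueError:     # "parent cannot be on a different thread"
    _raised = True
_done.set()
_t.join(10)
assert _raised
_mine.switch()         # let _blank() finish cleanly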
+ another = [] + switched_to_greenlet = threading.Event() + keep_main_alive = threading.Event() + def worker(): + g = RawGreenlet(lambda: None) + another.append(g) + g.switch() + switched_to_greenlet.set() + keep_main_alive.wait(10) + class convoluted(RawGreenlet): + def __getattribute__(self, name): + if name == 'run': + self.parent = another[0] # pylint:disable=attribute-defined-outside-init + return RawGreenlet.__getattribute__(self, name) + + t = threading.Thread(target=worker) + t.start() + + switched_to_greenlet.wait(10) + try: + g = convoluted(lambda: None) + + with self.assertRaises(greenlet.error) as exc: + g.switch() + self.assertIn("Cannot switch to a different thread", str(exc.exception)) + self.assertIn("Expected", str(exc.exception)) + self.assertIn("Current", str(exc.exception)) + finally: + keep_main_alive.set() + t.join(10) + # XXX: Should handle this automatically. + del another[:] + + def test_cannot_delete_parent(self): + worker = RawGreenlet(lambda: None) + self.assertIs(worker.parent, greenlet.getcurrent()) + + with self.assertRaises(AttributeError) as exc: + del worker.parent + self.assertEqual(str(exc.exception), "can't delete attribute") + + def test_cannot_delete_parent_of_main(self): + with self.assertRaises(AttributeError) as exc: + del greenlet.getcurrent().parent + self.assertEqual(str(exc.exception), "can't delete attribute") + + + def test_main_greenlet_parent_is_none(self): + # assuming we're in a main greenlet here. + self.assertIsNone(greenlet.getcurrent().parent) + + def test_set_parent_wrong_types(self): + def bg(): + # Go back to main. + greenlet.getcurrent().parent.switch() + + def check(glet): + for p in None, 1, self, "42": + with self.assertRaises(TypeError) as exc: + glet.parent = p + + self.assertEqual( + str(exc.exception), + "GreenletChecker: Expected any type of greenlet, not " + type(p).__name__) + + # First, not running + g = RawGreenlet(bg) + self.assertFalse(g) + check(g) + + # Then when running. + g.switch() + self.assertTrue(g) + check(g) + + # Let it finish + g.switch() + + + def test_trivial_cycle(self): + glet = RawGreenlet(lambda: None) + with self.assertRaises(ValueError) as exc: + glet.parent = glet + self.assertEqual(str(exc.exception), "cyclic parent chain") + + def test_trivial_cycle_main(self): + # This used to produce a ValueError, but we catch it earlier than that now. 
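# The cycle check exercised by test_trivial_cycle above, reduced to its
# core (illustrative; not part of the vendored file):
import greenlet

_g = greenlet.greenlet(lambda: None)
_err = None
try:
    _g.parent = _g              # would make the greenlet its own ancestor
except ValueError as e:
    _err = e
assert str(_err) == "cyclic parent chain"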
+ with self.assertRaises(AttributeError) as exc: + greenlet.getcurrent().parent = greenlet.getcurrent() + self.assertEqual(str(exc.exception), "cannot set the parent of a main greenlet") + + def test_deeper_cycle(self): + g1 = RawGreenlet(lambda: None) + g2 = RawGreenlet(lambda: None) + g3 = RawGreenlet(lambda: None) + + g1.parent = g2 + g2.parent = g3 + with self.assertRaises(ValueError) as exc: + g3.parent = g1 + self.assertEqual(str(exc.exception), "cyclic parent chain") + + +class TestRepr(TestCase): + + def assertEndsWith(self, got, suffix): + self.assertTrue(got.endswith(suffix), (got, suffix)) + + def test_main_while_running(self): + r = repr(greenlet.getcurrent()) + self.assertEndsWith(r, " current active started main>") + + def test_main_in_background(self): + main = greenlet.getcurrent() + def run(): + return repr(main) + + g = RawGreenlet(run) + r = g.switch() + self.assertEndsWith(r, ' suspended active started main>') + + def test_initial(self): + r = repr(RawGreenlet()) + self.assertEndsWith(r, ' pending>') + + def test_main_from_other_thread(self): + main = greenlet.getcurrent() + + class T(threading.Thread): + original_main = thread_main = None + main_glet = None + def run(self): + self.original_main = repr(main) + self.main_glet = greenlet.getcurrent() + self.thread_main = repr(self.main_glet) + + t = T() + t.start() + t.join(10) + + self.assertEndsWith(t.original_main, ' suspended active started main>') + self.assertEndsWith(t.thread_main, ' current active started main>') + # give the machinery time to notice the death of the thread, + # and clean it up. Note that we don't use + # ``expect_greenlet_leak`` or wait_for_pending_cleanups, + # because at this point we know we have an extra greenlet + # still reachable. + for _ in range(3): + time.sleep(0.001) + + # In the past, main greenlets, even from dead threads, never + # really appear dead. We have fixed that, and we also report + # that the thread is dead in the repr. (Do this multiple times + # to make sure that we don't self-modify and forget our state + # in the C++ code). + for _ in range(3): + self.assertTrue(t.main_glet.dead) + r = repr(t.main_glet) + self.assertEndsWith(r, ' (thread exited) dead>') + + def test_dead(self): + g = RawGreenlet(lambda: None) + g.switch() + self.assertEndsWith(repr(g), ' dead>') + self.assertNotIn('suspended', repr(g)) + self.assertNotIn('started', repr(g)) + self.assertNotIn('active', repr(g)) + + def test_formatting_produces_native_str(self): + # https://github.com/python-greenlet/greenlet/issues/218 + # %s formatting on Python 2 was producing unicode, not str. + + g_dead = RawGreenlet(lambda: None) + g_not_started = RawGreenlet(lambda: None) + g_cur = greenlet.getcurrent() + + for g in g_dead, g_not_started, g_cur: + + self.assertIsInstance( + '%s' % (g,), + str + ) + self.assertIsInstance( + '%r' % (g,), + str, + ) + + +class TestMainGreenlet(TestCase): + # Tests some implementation details, and relies on some + # implementation details. 
+ + def _check_current_is_main(self): + # implementation detail + assert 'main' in repr(greenlet.getcurrent()) + + t = type(greenlet.getcurrent()) + assert 'main' not in repr(t) + return t + + def test_main_greenlet_type_can_be_subclassed(self): + main_type = self._check_current_is_main() + subclass = type('subclass', (main_type,), {}) + self.assertIsNotNone(subclass) + + def test_main_greenlet_is_greenlet(self): + self._check_current_is_main() + self.assertIsInstance(greenlet.getcurrent(), RawGreenlet) + + + +class TestBrokenGreenlets(TestCase): + # Tests for things that used to, or still do, terminate the interpreter. + # This often means doing unsavory things. + + def test_failed_to_initialstub(self): + def func(): + raise AssertionError("Never get here") + + + g = greenlet._greenlet.UnswitchableGreenlet(func) + g.force_switch_error = True + + with self.assertRaisesRegex(SystemError, + "Failed to switch stacks into a greenlet for the first time."): + g.switch() + + def test_failed_to_switch_into_running(self): + runs = [] + def func(): + runs.append(1) + greenlet.getcurrent().parent.switch() + runs.append(2) + greenlet.getcurrent().parent.switch() + runs.append(3) # pragma: no cover + + g = greenlet._greenlet.UnswitchableGreenlet(func) + g.switch() + self.assertEqual(runs, [1]) + g.switch() + self.assertEqual(runs, [1, 2]) + g.force_switch_error = True + + with self.assertRaisesRegex(SystemError, + "Failed to switch stacks into a running greenlet."): + g.switch() + + # If we stopped here, we would fail the leakcheck, because we've left + # the ``inner_bootstrap()`` C frame and its descendents hanging around, + # which have a bunch of Python references. They'll never get cleaned up + # if we don't let the greenlet finish. + g.force_switch_error = False + g.switch() + self.assertEqual(runs, [1, 2, 3]) + + def test_failed_to_slp_switch_into_running(self): + ex = self.assertScriptRaises('fail_slp_switch.py') + + self.assertIn('fail_slp_switch is running', ex.output) + self.assertIn(ex.returncode, self.get_expected_returncodes_for_aborted_process()) + + def test_reentrant_switch_two_greenlets(self): + # Before we started capturing the arguments in g_switch_finish, this could crash. + output = self.run_script('fail_switch_two_greenlets.py') + self.assertIn('In g1_run', output) + self.assertIn('TRACE', output) + self.assertIn('LEAVE TRACE', output) + self.assertIn('Falling off end of main', output) + self.assertIn('Falling off end of g1_run', output) + self.assertIn('Falling off end of g2', output) + + def test_reentrant_switch_three_greenlets(self): + # On debug builds of greenlet, this used to crash with an assertion error; + # on non-debug versions, it ran fine (which it should not do!). + # Now it always crashes correctly with a TypeError + ex = self.assertScriptRaises('fail_switch_three_greenlets.py', exitcodes=(1,)) + + self.assertIn('TypeError', ex.output) + self.assertIn('positional arguments', ex.output) + + def test_reentrant_switch_three_greenlets2(self): + # This actually passed on debug and non-debug builds. It + # should probably have been triggering some debug assertions + # but it didn't. + # + # I think the fixes for the above test also kicked in here. 
+ output = self.run_script('fail_switch_three_greenlets2.py') + self.assertIn( + "RESULTS: [('trace', 'switch'), " + "('trace', 'switch'), ('g2 arg', 'g2 from tracefunc'), " + "('trace', 'switch'), ('main g1', 'from g2_run'), ('trace', 'switch'), " + "('g1 arg', 'g1 from main'), ('trace', 'switch'), ('main g2', 'from g1_run'), " + "('trace', 'switch'), ('g1 from parent', 'g1 from main 2'), ('trace', 'switch'), " + "('main g1.2', 'g1 done'), ('trace', 'switch'), ('g2 from parent', ()), " + "('trace', 'switch'), ('main g2.2', 'g2 done')]", + output + ) + + def test_reentrant_switch_GreenletAlreadyStartedInPython(self): + output = self.run_script('fail_initialstub_already_started.py') + + self.assertIn( + "RESULTS: ['Begin C', 'Switch to b from B.__getattribute__ in C', " + "('Begin B', ()), '_B_run switching to main', ('main from c', 'From B'), " + "'B.__getattribute__ back from main in C', ('Begin A', (None,)), " + "('A dead?', True, 'B dead?', True, 'C dead?', False), " + "'C done', ('main from c.2', None)]", + output + ) + + def test_reentrant_switch_run_callable_has_del(self): + output = self.run_script('fail_clearing_run_switches.py') + self.assertIn( + "RESULTS [" + "('G.__getattribute__', 'run'), ('RunCallable', '__del__'), " + "('main: g.switch()', 'from RunCallable'), ('run_func', 'enter')" + "]", + output + ) + +if __name__ == '__main__': + unittest.main() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_greenlet_trash.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_greenlet_trash.py new file mode 100644 index 000000000..c1fc1374c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_greenlet_trash.py @@ -0,0 +1,187 @@ +# -*- coding: utf-8 -*- +""" +Tests for greenlets interacting with the CPython trash can API. + +The CPython trash can API is not designed to be re-entered from a +single thread. But this can happen using greenlets, if something +during the object deallocation process switches greenlets, and this second +greenlet then causes the trash can to get entered again. Here, we do this +very explicitly, but in other cases (like gevent) it could be arbitrarily more +complicated: for example, a weakref callback might try to acquire a lock that's +already held by another greenlet; that would allow a greenlet switch to occur. + +See https://github.com/gevent/gevent/issues/1909 + +This test is fragile and relies on details of the CPython +implementation (like most of the rest of this package): + + - We enter the trashcan and deferred deallocation after + ``_PyTrash_UNWIND_LEVEL`` calls. This constant, defined in + CPython's object.c, is generally 50. That's basically how many objects are required to + get us into the deferred deallocation situation. + + - The test fails by hitting an ``assert()`` in object.c; if the + build didn't enable assert, then we don't catch this. + + - If the test fails in that way, the interpreter crashes. 
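+
+Illustrative sketch of the failure mode only (this is not the code the
+test below runs; ``other_greenlet`` is a stand-in name)::
+
+    class Switches:
+        def __del__(self):
+            # Runs while the trash can is unwinding in one greenlet...
+            other_greenlet.switch()
+            # ...and deallocations over there re-enter the trash can.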
+""" +from __future__ import print_function, absolute_import, division + +import unittest + + +class TestTrashCanReEnter(unittest.TestCase): + + def test_it(self): + try: + # pylint:disable-next=no-name-in-module + from greenlet._greenlet import get_tstate_trash_delete_nesting # pylint:disable=unused-import + except ImportError: + import sys + # Python 3.13 has not "trash delete nesting" anymore (but "delete later") + assert sys.version_info[:2] >= (3, 13) + self.skipTest("get_tstate_trash_delete_nesting is not available.") + + # Try several times to trigger it, because it isn't 100% + # reliable. + for _ in range(10): + self.check_it() + + def check_it(self): # pylint:disable=too-many-statements + import greenlet + from greenlet._greenlet import get_tstate_trash_delete_nesting # pylint:disable=no-name-in-module + main = greenlet.getcurrent() + + assert get_tstate_trash_delete_nesting() == 0 + + # We expect to be in deferred deallocation after this many + # deallocations have occurred. TODO: I wish we had a better way to do + # this --- that was before get_tstate_trash_delete_nesting; perhaps + # we can use that API to do better? + TRASH_UNWIND_LEVEL = 50 + # How many objects to put in a container; it's the container that + # queues objects for deferred deallocation. + OBJECTS_PER_CONTAINER = 500 + + class Dealloc: # define the class here because we alter class variables each time we run. + """ + An object with a ``__del__`` method. When it starts getting deallocated + from a deferred trash can run, it switches greenlets, allocates more objects + which then also go in the trash can. If we don't save state appropriately, + nesting gets out of order and we can crash the interpreter. + """ + + #: Has our deallocation actually run and switched greenlets? + #: When it does, this will be set to the current greenlet. This should + #: be happening in the main greenlet, so we check that down below. + SPAWNED = False + + #: Has the background greenlet run? + BG_RAN = False + + BG_GLET = None + + #: How many of these things have ever been allocated. + CREATED = 0 + + #: How many of these things have ever been deallocated. + DESTROYED = 0 + + #: How many were destroyed not in the main greenlet. There should always + #: be some. + #: If the test is broken or things change in the trashcan implementation, + #: this may not be correct. + DESTROYED_BG = 0 + + def __init__(self, sequence_number): + """ + :param sequence_number: The ordinal of this object during + one particular creation run. This is used to detect (guess, really) + when we have entered the trash can's deferred deallocation. + """ + self.i = sequence_number + Dealloc.CREATED += 1 + + def __del__(self): + if self.i == TRASH_UNWIND_LEVEL and not self.SPAWNED: + Dealloc.SPAWNED = greenlet.getcurrent() + other = Dealloc.BG_GLET = greenlet.greenlet(background_greenlet) + x = other.switch() + assert x == 42 + # It's important that we don't switch back to the greenlet, + # we leave it hanging there in an incomplete state. But we don't let it + # get collected, either. If we complete it now, while we're still + # in the scope of the initial trash can, things work out and we + # don't see the problem. We need this greenlet to complete + # at some point in the future, after we've exited this trash can invocation. 
+                    del other
+                elif self.i == 40 and greenlet.getcurrent() is not main:
+                    Dealloc.BG_RAN = True
+                    try:
+                        main.switch(42)
+                    except greenlet.GreenletExit as ex:
+                        # We expect this; all references to us go away
+                        # while we're still running, and we need to finish deleting
+                        # ourself.
+                        Dealloc.BG_RAN = type(ex)
+                        del ex
+
+                # Record the fact that we're dead last of all. This ensures that
+                # we actually get returned too.
+                Dealloc.DESTROYED += 1
+                if greenlet.getcurrent() is not main:
+                    Dealloc.DESTROYED_BG += 1
+
+
+        def background_greenlet():
+            # We direct through a second function, instead of
+            # directly calling ``make_some()``, so that we have complete
+            # control over when these objects are destroyed: we need them
+            # to be destroyed in the context of the background greenlet.
+            t = make_some()
+            del t # Trigger deletion.
+
+        def make_some():
+            t = ()
+            i = OBJECTS_PER_CONTAINER
+            while i:
+                # Nest the tuples; it's the recursion that gets us
+                # into trash.
+                t = (Dealloc(i), t)
+                i -= 1
+            return t
+
+
+        some = make_some()
+        self.assertEqual(Dealloc.CREATED, OBJECTS_PER_CONTAINER)
+        self.assertEqual(Dealloc.DESTROYED, 0)
+
+        # If we're going to crash, it should be on the following line.
+        # We only crash if ``assert()`` is enabled, of course.
+        del some
+
+        # For non-debug builds of CPython, we won't crash. The best we can do is check
+        # the nesting level explicitly.
+        self.assertEqual(0, get_tstate_trash_delete_nesting())
+
+        # Discard this, raising GreenletExit into where it is waiting.
+        Dealloc.BG_GLET = None
+        # The nesting level stays the same.
+        self.assertEqual(0, get_tstate_trash_delete_nesting())
+
+        # We definitely cleaned some up in the background.
+        self.assertGreater(Dealloc.DESTROYED_BG, 0)
+
+        # Make sure all the cleanups happened.
+        self.assertIs(Dealloc.SPAWNED, main)
+        self.assertTrue(Dealloc.BG_RAN)
+        self.assertEqual(Dealloc.BG_RAN, greenlet.GreenletExit)
+        self.assertEqual(Dealloc.CREATED, Dealloc.DESTROYED)
+        self.assertEqual(Dealloc.CREATED, OBJECTS_PER_CONTAINER * 2)
+
+        import gc
+        gc.collect()
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_leaks.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_leaks.py
new file mode 100644
index 000000000..99da4ebe6
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_leaks.py
@@ -0,0 +1,447 @@
+# -*- coding: utf-8 -*-
+"""
+Testing scenarios that may have leaked.
+"""
+from __future__ import print_function, absolute_import, division
+
+import sys
+import gc
+
+import time
+import weakref
+import threading
+
+
+import greenlet
+from . import TestCase
+from . import PY314
+from .leakcheck import fails_leakcheck
+from .leakcheck import ignores_leakcheck
+from .leakcheck import RUNNING_ON_MANYLINUX
+
+# pylint:disable=protected-access
+
+assert greenlet.GREENLET_USE_GC # Option to disable this was removed in 1.0
+
+class HasFinalizerTracksInstances(object):
+    EXTANT_INSTANCES = set()
+    def __init__(self, msg):
+        self.msg = sys.intern(msg)
+        self.EXTANT_INSTANCES.add(id(self))
+    def __del__(self):
+        self.EXTANT_INSTANCES.remove(id(self))
+    def __repr__(self):
+        return "<HasFinalizerTracksInstances at 0x%x %r>" % (
+            id(self), self.msg
+        )
+    @classmethod
+    def reset(cls):
+        cls.EXTANT_INSTANCES.clear()
+
+
+class TestLeaks(TestCase):
+
+    def test_arg_refs(self):
+        args = ('a', 'b', 'c')
+        refcount_before = sys.getrefcount(args)
+        # pylint:disable=unnecessary-lambda
+        g = greenlet.greenlet(
+            lambda *args: greenlet.getcurrent().parent.switch(*args))
+        for _ in range(100):
+            g.switch(*args)
+        self.assertEqual(sys.getrefcount(args), refcount_before)
+
+    def test_kwarg_refs(self):
+        kwargs = {}
+        self.assertEqual(sys.getrefcount(kwargs), 2 if not PY314 else 1)
+        # pylint:disable=unnecessary-lambda
+        g = greenlet.greenlet(
+            lambda **gkwargs: greenlet.getcurrent().parent.switch(**gkwargs))
+        for _ in range(100):
+            g.switch(**kwargs)
+        # Python 3.14 elides reference counting operations
+        # in some cases. See https://github.com/python/cpython/pull/130708
+        self.assertEqual(sys.getrefcount(kwargs), 2 if not PY314 else 1)
+
+
+    @staticmethod
+    def __recycle_threads():
+        # By introducing a thread that does sleep, we allow other threads
+        # that have triggered their __block condition, but did not have a
+        # chance to deallocate their thread state yet, to finally do so.
+        # The way it works is by requiring a GIL switch (different thread),
+        # which does a GIL release (sleep), which might do a GIL switch
+        # to finished threads and allow them to clean up.
+ def worker(): + time.sleep(0.001) + t = threading.Thread(target=worker) + t.start() + time.sleep(0.001) + t.join(10) + + def test_threaded_leak(self): + gg = [] + def worker(): + # only main greenlet present + gg.append(weakref.ref(greenlet.getcurrent())) + for _ in range(2): + t = threading.Thread(target=worker) + t.start() + t.join(10) + del t + greenlet.getcurrent() # update ts_current + self.__recycle_threads() + greenlet.getcurrent() # update ts_current + gc.collect() + greenlet.getcurrent() # update ts_current + for g in gg: + self.assertIsNone(g()) + + def test_threaded_adv_leak(self): + gg = [] + def worker(): + # main and additional *finished* greenlets + ll = greenlet.getcurrent().ll = [] + def additional(): + ll.append(greenlet.getcurrent()) + for _ in range(2): + greenlet.greenlet(additional).switch() + gg.append(weakref.ref(greenlet.getcurrent())) + for _ in range(2): + t = threading.Thread(target=worker) + t.start() + t.join(10) + del t + greenlet.getcurrent() # update ts_current + self.__recycle_threads() + greenlet.getcurrent() # update ts_current + gc.collect() + greenlet.getcurrent() # update ts_current + for g in gg: + self.assertIsNone(g()) + + def assertClocksUsed(self): + used = greenlet._greenlet.get_clocks_used_doing_optional_cleanup() + self.assertGreaterEqual(used, 0) + # we don't lose the value + greenlet._greenlet.enable_optional_cleanup(True) + used2 = greenlet._greenlet.get_clocks_used_doing_optional_cleanup() + self.assertEqual(used, used2) + self.assertGreater(greenlet._greenlet.CLOCKS_PER_SEC, 1) + + def _check_issue251(self, + manually_collect_background=True, + explicit_reference_to_switch=False): + # See https://github.com/python-greenlet/greenlet/issues/251 + # Killing a greenlet (probably not the main one) + # in one thread from another thread would + # result in leaking a list (the ts_delkey list). + # We no longer use lists to hold that stuff, though. + + # For the test to be valid, even empty lists have to be tracked by the + # GC + + assert gc.is_tracked([]) + HasFinalizerTracksInstances.reset() + greenlet.getcurrent() + greenlets_before = self.count_objects(greenlet.greenlet, exact_kind=False) + + background_glet_running = threading.Event() + background_glet_killed = threading.Event() + background_greenlets = [] + + # XXX: Switching this to a greenlet subclass that overrides + # run results in all callers failing the leaktest; that + # greenlet instance is leaked. There's a bound method for + # run() living on the stack of the greenlet in g_initialstub, + # and since we don't manually switch back to the background + # greenlet to let it "fall off the end" and exit the + # g_initialstub function, it never gets cleaned up. Making the + # garbage collector aware of this bound method (making it an + # attribute of the greenlet structure and traversing into it) + # doesn't help, for some reason. + def background_greenlet(): + # Throw control back to the main greenlet. 
+            jd = HasFinalizerTracksInstances("DELETING STACK OBJECT")
+            greenlet._greenlet.set_thread_local(
+                'test_leaks_key',
+                HasFinalizerTracksInstances("DELETING THREAD STATE"))
+            # Explicitly keeping 'switch' in a local variable
+            # breaks this test in all versions.
+            if explicit_reference_to_switch:
+                s = greenlet.getcurrent().parent.switch
+                s([jd])
+            else:
+                greenlet.getcurrent().parent.switch([jd])
+
+        bg_main_wrefs = []
+
+        def background_thread():
+            glet = greenlet.greenlet(background_greenlet)
+            bg_main_wrefs.append(weakref.ref(glet.parent))
+
+            background_greenlets.append(glet)
+            glet.switch() # Be sure it's active.
+            # Control is ours again.
+            del glet # Delete one reference from the thread it runs in.
+            background_glet_running.set()
+            background_glet_killed.wait(10)
+
+            # To trigger the background collection of the dead
+            # greenlet, thus clearing out the contents of the list, we
+            # need to run some APIs. See issue 252.
+            if manually_collect_background:
+                greenlet.getcurrent()
+
+
+        t = threading.Thread(target=background_thread)
+        t.start()
+        background_glet_running.wait(10)
+        greenlet.getcurrent()
+        lists_before = self.count_objects(list, exact_kind=True)
+
+        assert len(background_greenlets) == 1
+        self.assertFalse(background_greenlets[0].dead)
+        # Delete the last reference to the background greenlet
+        # from a different thread. This puts it in the background thread's
+        # ts_delkey list.
+        del background_greenlets[:]
+        background_glet_killed.set()
+
+        # Now wait for the background thread to die.
+        t.join(10)
+        del t
+        # As part of the fix for 252, we need to cycle the ceval.c
+        # interpreter loop to be sure it has had a chance to process
+        # the pending call.
+        self.wait_for_pending_cleanups()
+
+        lists_after = self.count_objects(list, exact_kind=True)
+        greenlets_after = self.count_objects(greenlet.greenlet, exact_kind=False)
+
+        # On 2.7, we observe that lists_after is smaller than
+        # lists_before. No idea what lists got cleaned up. All the
+        # Python 3 versions match exactly.
+        self.assertLessEqual(lists_after, lists_before)
+        # On versions after 3.6, we've successfully cleaned up the
+        # greenlet references thanks to the internal "vectorcall"
+        # protocol; prior to that, there is a reference path through
+        # the ``greenlet.switch`` method still on the stack that we
+        # can't reach to clean up. The C code goes to great lengths
+        # to clean that up.
+        if not explicit_reference_to_switch \
+           and greenlet._greenlet.get_clocks_used_doing_optional_cleanup() is not None:
+            # If cleanup was disabled, though, we may not find it.
+            self.assertEqual(greenlets_after, greenlets_before)
+        if manually_collect_background:
+            # TODO: Figure out how to make this work!
+            # The one on the stack is still leaking somehow
+            # in the non-manually-collect state.
+            self.assertEqual(HasFinalizerTracksInstances.EXTANT_INSTANCES, set())
+        else:
+            # The explicit reference prevents us from collecting it,
+            # and it isn't always found by the GC either, for some
+            # reason. The entire frame is leaked somehow, on some
+            # platforms (e.g., MacPorts builds of Python (all
+            # versions!)), but not on other platforms (the Linux and
+            # Windows builds on GitHub Actions and AppVeyor). So we'd
+            # like to write a test that proves that the main greenlet
+            # sticks around, and we can on my machine (macOS 11.6,
+            # MacPorts builds of everything) but we can't write that
+            # same test on other platforms. However, hopefully iteration
+            # done by leakcheck will find it.
+            pass
+
+        if greenlet._greenlet.get_clocks_used_doing_optional_cleanup() is not None:
+            self.assertClocksUsed()
+
+    def test_issue251_killing_cross_thread_leaks_list(self):
+        self._check_issue251()
+
+    def test_issue251_with_cleanup_disabled(self):
+        greenlet._greenlet.enable_optional_cleanup(False)
+        try:
+            self._check_issue251()
+        finally:
+            greenlet._greenlet.enable_optional_cleanup(True)
+
+    @fails_leakcheck
+    def test_issue251_issue252_need_to_collect_in_background(self):
+        # Between greenlet 1.1.2 and the next version, this was still
+        # failing because the leak of the list still exists when we
+        # don't call a greenlet API before exiting the thread. The
+        # proximate cause is that neither of the two greenlets from
+        # the background thread are actually being destroyed, even
+        # though the GC is in fact visiting both objects. It's not
+        # clear where that leak is. For some reason the thread-local
+        # dict holding it isn't being cleaned up.
+        #
+        # The leak, I think, is in the CPython internal function that
+        # calls into green_switch(). The argument tuple is still on
+        # the C stack somewhere and can't be reached? That doesn't
+        # make sense, because the tuple should be collectable when
+        # this object goes away.
+        #
+        # Note that this test sometimes spuriously passes on Linux,
+        # for some reason, but I've never seen it pass on macOS.
+        self._check_issue251(manually_collect_background=False)
+
+    @fails_leakcheck
+    def test_issue251_issue252_need_to_collect_in_background_cleanup_disabled(self):
+        self.expect_greenlet_leak = True
+        greenlet._greenlet.enable_optional_cleanup(False)
+        try:
+            self._check_issue251(manually_collect_background=False)
+        finally:
+            greenlet._greenlet.enable_optional_cleanup(True)
+
+    @fails_leakcheck
+    def test_issue251_issue252_explicit_reference_not_collectable(self):
+        self._check_issue251(
+            manually_collect_background=False,
+            explicit_reference_to_switch=True)
+
+    UNTRACK_ATTEMPTS = 100
+
+    def _only_test_some_versions(self):
+        # We're only looking for this problem specifically on 3.11,
+        # and this set of tests is relatively fragile, depending on
+        # OS and memory management details. So we want to run it on 3.11+
+        # (obviously) but not every older 3.x version, in order to reduce
+        # false negatives. At the moment, those false results seem to have
+        # been resolved, so we are actually running this on 3.8+.
+        assert sys.version_info[0] >= 3
+        if sys.version_info[:2] < (3, 8):
+            self.skipTest('Only observed on 3.11')
+        if RUNNING_ON_MANYLINUX:
+            self.skipTest("Slow and not worth repeating here")
+
+    @ignores_leakcheck
+    # Because we're just trying to track raw memory, not objects, and running
+    # the leakcheck makes an already slow test slower.
+    def test_untracked_memory_doesnt_increase(self):
+        # See https://github.com/gevent/gevent/issues/1924
+        # and https://github.com/python-greenlet/greenlet/issues/328
+        self._only_test_some_versions()
+        def f():
+            return 1
+
+        ITER = 10000
+        def run_it():
+            for _ in range(ITER):
+                greenlet.greenlet(f).switch()
+
+        # Establish baseline
+        for _ in range(3):
+            run_it()
+
+        # uss: (Linux, macOS, Windows): aka "Unique Set Size", this is
+        # the memory which is unique to a process and which would be
+        # freed if the process was terminated right now.
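+        # (On those platforms this is the figure psutil reports as
+        # ``Process().memory_full_info().uss``; presumably the
+        # get_process_uss() helper used below is backed by something similar.)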
+ uss_before = self.get_process_uss() + + for count in range(self.UNTRACK_ATTEMPTS): + uss_before = max(uss_before, self.get_process_uss()) + run_it() + + uss_after = self.get_process_uss() + if uss_after <= uss_before and count > 1: + break + + self.assertLessEqual(uss_after, uss_before) + + def _check_untracked_memory_thread(self, deallocate_in_thread=True): + self._only_test_some_versions() + # Like the above test, but what if there are a bunch of + # unfinished greenlets in a thread that dies? + # Does it matter if we deallocate in the thread or not? + EXIT_COUNT = [0] + + def f(): + try: + greenlet.getcurrent().parent.switch() + except greenlet.GreenletExit: + EXIT_COUNT[0] += 1 + raise + return 1 + + ITER = 10000 + def run_it(): + glets = [] + for _ in range(ITER): + # Greenlet starts, switches back to us. + # We keep a strong reference to the greenlet though so it doesn't + # get a GreenletExit exception. + g = greenlet.greenlet(f) + glets.append(g) + g.switch() + + return glets + + test = self + + class ThreadFunc: + uss_before = uss_after = 0 + glets = () + ITER = 2 + def __call__(self): + self.uss_before = test.get_process_uss() + + for _ in range(self.ITER): + self.glets += tuple(run_it()) + + for g in self.glets: + test.assertIn('suspended active', str(g)) + # Drop them. + if deallocate_in_thread: + self.glets = () + self.uss_after = test.get_process_uss() + + # Establish baseline + uss_before = uss_after = None + for count in range(self.UNTRACK_ATTEMPTS): + EXIT_COUNT[0] = 0 + thread_func = ThreadFunc() + t = threading.Thread(target=thread_func) + t.start() + t.join(30) + self.assertFalse(t.is_alive()) + + if uss_before is None: + uss_before = thread_func.uss_before + + uss_before = max(uss_before, thread_func.uss_before) + if deallocate_in_thread: + self.assertEqual(thread_func.glets, ()) + self.assertEqual(EXIT_COUNT[0], ITER * thread_func.ITER) + + del thread_func # Deallocate the greenlets; but this won't raise into them + del t + if not deallocate_in_thread: + self.assertEqual(EXIT_COUNT[0], 0) + if deallocate_in_thread: + self.wait_for_pending_cleanups() + + uss_after = self.get_process_uss() + # See if we achieve a non-growth state at some point. Break when we do. + if uss_after <= uss_before and count > 1: + break + + self.wait_for_pending_cleanups() + uss_after = self.get_process_uss() + self.assertLessEqual(uss_after, uss_before, "after attempts %d" % (count,)) + + @ignores_leakcheck + # Because we're just trying to track raw memory, not objects, and running + # the leakcheck makes an already slow test slower. + def test_untracked_memory_doesnt_increase_unfinished_thread_dealloc_in_thread(self): + self._check_untracked_memory_thread(deallocate_in_thread=True) + + @ignores_leakcheck + # Because the main greenlets from the background threads do not exit in a timely fashion, + # we fail the object-based leakchecks. + def test_untracked_memory_doesnt_increase_unfinished_thread_dealloc_in_main(self): + self._check_untracked_memory_thread(deallocate_in_thread=False) + +if __name__ == '__main__': + __import__('unittest').main() diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_stack_saved.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_stack_saved.py new file mode 100644 index 000000000..b362bf95a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_stack_saved.py @@ -0,0 +1,19 @@ +import greenlet +from . 
import TestCase + + +class Test(TestCase): + + def test_stack_saved(self): + main = greenlet.getcurrent() + self.assertEqual(main._stack_saved, 0) + + def func(): + main.switch(main._stack_saved) + + g = greenlet.greenlet(func) + x = g.switch() + self.assertGreater(x, 0) + self.assertGreater(g._stack_saved, 0) + g.switch() + self.assertEqual(g._stack_saved, 0) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_throw.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_throw.py new file mode 100644 index 000000000..f4f9a1402 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_throw.py @@ -0,0 +1,128 @@ +import sys + + +from greenlet import greenlet +from . import TestCase + +def switch(*args): + return greenlet.getcurrent().parent.switch(*args) + + +class ThrowTests(TestCase): + def test_class(self): + def f(): + try: + switch("ok") + except RuntimeError: + switch("ok") + return + switch("fail") + g = greenlet(f) + res = g.switch() + self.assertEqual(res, "ok") + res = g.throw(RuntimeError) + self.assertEqual(res, "ok") + + def test_val(self): + def f(): + try: + switch("ok") + except RuntimeError: + val = sys.exc_info()[1] + if str(val) == "ciao": + switch("ok") + return + switch("fail") + + g = greenlet(f) + res = g.switch() + self.assertEqual(res, "ok") + res = g.throw(RuntimeError("ciao")) + self.assertEqual(res, "ok") + + g = greenlet(f) + res = g.switch() + self.assertEqual(res, "ok") + res = g.throw(RuntimeError, "ciao") + self.assertEqual(res, "ok") + + def test_kill(self): + def f(): + switch("ok") + switch("fail") + g = greenlet(f) + res = g.switch() + self.assertEqual(res, "ok") + res = g.throw() + self.assertTrue(isinstance(res, greenlet.GreenletExit)) + self.assertTrue(g.dead) + res = g.throw() # immediately eaten by the already-dead greenlet + self.assertTrue(isinstance(res, greenlet.GreenletExit)) + + def test_throw_goes_to_original_parent(self): + main = greenlet.getcurrent() + + def f1(): + try: + main.switch("f1 ready to catch") + except IndexError: + return "caught" + return "normal exit" + + def f2(): + main.switch("from f2") + + g1 = greenlet(f1) + g2 = greenlet(f2, parent=g1) + with self.assertRaises(IndexError): + g2.throw(IndexError) + self.assertTrue(g2.dead) + self.assertTrue(g1.dead) + + g1 = greenlet(f1) + g2 = greenlet(f2, parent=g1) + res = g1.switch() + self.assertEqual(res, "f1 ready to catch") + res = g2.throw(IndexError) + self.assertEqual(res, "caught") + self.assertTrue(g2.dead) + self.assertTrue(g1.dead) + + g1 = greenlet(f1) + g2 = greenlet(f2, parent=g1) + res = g1.switch() + self.assertEqual(res, "f1 ready to catch") + res = g2.switch() + self.assertEqual(res, "from f2") + res = g2.throw(IndexError) + self.assertEqual(res, "caught") + self.assertTrue(g2.dead) + self.assertTrue(g1.dead) + + def test_non_traceback_param(self): + with self.assertRaises(TypeError) as exc: + greenlet.getcurrent().throw( + Exception, + Exception(), + self + ) + self.assertEqual(str(exc.exception), + "throw() third argument must be a traceback object") + + def test_instance_of_wrong_type(self): + with self.assertRaises(TypeError) as exc: + greenlet.getcurrent().throw( + Exception(), + BaseException() + ) + + self.assertEqual(str(exc.exception), + "instance exception may not have a separate value") + + def test_not_throwable(self): + with self.assertRaises(TypeError) as exc: + greenlet.getcurrent().throw( + "abc" + ) + self.assertEqual(str(exc.exception), + "exceptions must be 
classes, or instances, not str") diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_tracing.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_tracing.py new file mode 100644 index 000000000..c044d4b64 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_tracing.py @@ -0,0 +1,291 @@ +from __future__ import print_function +import sys +import greenlet +import unittest + +from . import TestCase +from . import PY312 + +# https://discuss.python.org/t/cpython-3-12-greenlet-and-tracing-profiling-how-to-not-crash-and-get-correct-results/33144/2 +DEBUG_BUILD_PY312 = ( + PY312 and hasattr(sys, 'gettotalrefcount'), + "Broken on debug builds of Python 3.12" +) + +class SomeError(Exception): + pass + +class GreenletTracer(object): + oldtrace = None + + def __init__(self, error_on_trace=False): + self.actions = [] + self.error_on_trace = error_on_trace + + def __call__(self, *args): + self.actions.append(args) + if self.error_on_trace: + raise SomeError + + def __enter__(self): + self.oldtrace = greenlet.settrace(self) + return self.actions + + def __exit__(self, *args): + greenlet.settrace(self.oldtrace) + + +class TestGreenletTracing(TestCase): + """ + Tests of ``greenlet.settrace()`` + """ + + def test_a_greenlet_tracing(self): + main = greenlet.getcurrent() + def dummy(): + pass + def dummyexc(): + raise SomeError() + + with GreenletTracer() as actions: + g1 = greenlet.greenlet(dummy) + g1.switch() + g2 = greenlet.greenlet(dummyexc) + self.assertRaises(SomeError, g2.switch) + + self.assertEqual(actions, [ + ('switch', (main, g1)), + ('switch', (g1, main)), + ('switch', (main, g2)), + ('throw', (g2, main)), + ]) + + def test_b_exception_disables_tracing(self): + main = greenlet.getcurrent() + def dummy(): + main.switch() + g = greenlet.greenlet(dummy) + g.switch() + with GreenletTracer(error_on_trace=True) as actions: + self.assertRaises(SomeError, g.switch) + self.assertEqual(greenlet.gettrace(), None) + + self.assertEqual(actions, [ + ('switch', (main, g)), + ]) + + def test_set_same_tracer_twice(self): + # https://github.com/python-greenlet/greenlet/issues/332 + # Our logic in asserting that the tracefunction should + # gain a reference was incorrect if the same tracefunction was set + # twice. + tracer = GreenletTracer() + with tracer: + greenlet.settrace(tracer) + + +class PythonTracer(object): + oldtrace = None + + def __init__(self): + self.actions = [] + + def __call__(self, frame, event, arg): + # Record the co_name so we have an idea what function we're in. + self.actions.append((event, frame.f_code.co_name)) + + def __enter__(self): + self.oldtrace = sys.setprofile(self) + return self.actions + + def __exit__(self, *args): + sys.setprofile(self.oldtrace) + +def tpt_callback(): + return 42 + +class TestPythonTracing(TestCase): + """ + Tests of the interaction of ``sys.settrace()`` + with greenlet facilities. + + NOTE: Most of this is probably CPython specific. 
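+    (For example, the ``c_call``/``c_return`` events asserted in these
+    tests are emitted by CPython's profiling hooks; other interpreters
+    may emit different events or none at all.)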
+ """ + + maxDiff = None + + def test_trace_events_trivial(self): + with PythonTracer() as actions: + tpt_callback() + # If we use the sys.settrace instead of setprofile, we get + # this: + + # self.assertEqual(actions, [ + # ('call', 'tpt_callback'), + # ('call', '__exit__'), + # ]) + + self.assertEqual(actions, [ + ('return', '__enter__'), + ('call', 'tpt_callback'), + ('return', 'tpt_callback'), + ('call', '__exit__'), + ('c_call', '__exit__'), + ]) + + def _trace_switch(self, glet): + with PythonTracer() as actions: + glet.switch() + return actions + + def _check_trace_events_func_already_set(self, glet): + actions = self._trace_switch(glet) + self.assertEqual(actions, [ + ('return', '__enter__'), + ('c_call', '_trace_switch'), + ('call', 'run'), + ('call', 'tpt_callback'), + ('return', 'tpt_callback'), + ('return', 'run'), + ('c_return', '_trace_switch'), + ('call', '__exit__'), + ('c_call', '__exit__'), + ]) + + def test_trace_events_into_greenlet_func_already_set(self): + def run(): + return tpt_callback() + + self._check_trace_events_func_already_set(greenlet.greenlet(run)) + + def test_trace_events_into_greenlet_subclass_already_set(self): + class X(greenlet.greenlet): + def run(self): + return tpt_callback() + self._check_trace_events_func_already_set(X()) + + def _check_trace_events_from_greenlet_sets_profiler(self, g, tracer): + g.switch() + tpt_callback() + tracer.__exit__() + self.assertEqual(tracer.actions, [ + ('return', '__enter__'), + ('call', 'tpt_callback'), + ('return', 'tpt_callback'), + ('return', 'run'), + ('call', 'tpt_callback'), + ('return', 'tpt_callback'), + ('call', '__exit__'), + ('c_call', '__exit__'), + ]) + + + def test_trace_events_from_greenlet_func_sets_profiler(self): + tracer = PythonTracer() + def run(): + tracer.__enter__() + return tpt_callback() + + self._check_trace_events_from_greenlet_sets_profiler(greenlet.greenlet(run), + tracer) + + def test_trace_events_from_greenlet_subclass_sets_profiler(self): + tracer = PythonTracer() + class X(greenlet.greenlet): + def run(self): + tracer.__enter__() + return tpt_callback() + + self._check_trace_events_from_greenlet_sets_profiler(X(), tracer) + + @unittest.skipIf(*DEBUG_BUILD_PY312) + def test_trace_events_multiple_greenlets_switching(self): + tracer = PythonTracer() + + g1 = None + g2 = None + + def g1_run(): + tracer.__enter__() + tpt_callback() + g2.switch() + tpt_callback() + return 42 + + def g2_run(): + tpt_callback() + tracer.__exit__() + tpt_callback() + g1.switch() + + g1 = greenlet.greenlet(g1_run) + g2 = greenlet.greenlet(g2_run) + + x = g1.switch() + self.assertEqual(x, 42) + tpt_callback() # ensure not in the trace + self.assertEqual(tracer.actions, [ + ('return', '__enter__'), + ('call', 'tpt_callback'), + ('return', 'tpt_callback'), + ('c_call', 'g1_run'), + ('call', 'g2_run'), + ('call', 'tpt_callback'), + ('return', 'tpt_callback'), + ('call', '__exit__'), + ('c_call', '__exit__'), + ]) + + @unittest.skipIf(*DEBUG_BUILD_PY312) + def test_trace_events_multiple_greenlets_switching_siblings(self): + # Like the first version, but get both greenlets running first + # as "siblings" and then establish the tracing. 
+        tracer = PythonTracer()
+
+        g1 = None
+        g2 = None
+
+        def g1_run():
+            greenlet.getcurrent().parent.switch()
+            tracer.__enter__()
+            tpt_callback()
+            g2.switch()
+            tpt_callback()
+            return 42
+
+        def g2_run():
+            greenlet.getcurrent().parent.switch()
+
+            tpt_callback()
+            tracer.__exit__()
+            tpt_callback()
+            g1.switch()
+
+        g1 = greenlet.greenlet(g1_run)
+        g2 = greenlet.greenlet(g2_run)
+
+        # Start g1
+        g1.switch()
+        # And it immediately returns control to us.
+        # Start g2
+        g2.switch()
+        # Which also returns. Now kick off the real part of the
+        # test.
+        x = g1.switch()
+        self.assertEqual(x, 42)
+
+        tpt_callback() # ensure not in the trace
+        self.assertEqual(tracer.actions, [
+            ('return', '__enter__'),
+            ('call', 'tpt_callback'),
+            ('return', 'tpt_callback'),
+            ('c_call', 'g1_run'),
+            ('call', 'tpt_callback'),
+            ('return', 'tpt_callback'),
+            ('call', '__exit__'),
+            ('c_call', '__exit__'),
+        ])
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_version.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_version.py
new file mode 100644
index 000000000..96c17cf17
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_version.py
@@ -0,0 +1,41 @@
+#! /usr/bin/env python
+from __future__ import absolute_import
+from __future__ import print_function
+
+import sys
+import os
+from unittest import TestCase as NonLeakingTestCase
+
+import greenlet
+
+# No reason to run this multiple times under leakchecks,
+# it doesn't do anything.
+class VersionTests(NonLeakingTestCase):
+    def test_version(self):
+        def find_dominating_file(name):
+            if os.path.exists(name):
+                return name
+
+            tried = []
+            here = os.path.abspath(os.path.dirname(__file__))
+            for i in range(10):
+                up = ['..'] * i
+                path = [here] + up + [name]
+                fname = os.path.join(*path)
+                fname = os.path.abspath(fname)
+                tried.append(fname)
+                if os.path.exists(fname):
+                    return fname
+            raise AssertionError("Could not find file " + name + "; checked " + str(tried))
+
+        try:
+            setup_py = find_dominating_file('setup.py')
+        except AssertionError as e:
+            self.skipTest("Unable to find setup.py; must be out of tree. " + str(e))
+
+
+        invoke_setup = "%s %s --version" % (sys.executable, setup_py)
+        with os.popen(invoke_setup) as f:
+            sversion = f.read().strip()
+
+        self.assertEqual(sversion, greenlet.__version__)
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_weakref.py b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_weakref.py
new file mode 100644
index 000000000..05a38a7f1
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/greenlet/tests/test_weakref.py
@@ -0,0 +1,35 @@
+import gc
+import weakref
+
+
+import greenlet
+from . 
import TestCase + +class WeakRefTests(TestCase): + def test_dead_weakref(self): + def _dead_greenlet(): + g = greenlet.greenlet(lambda: None) + g.switch() + return g + o = weakref.ref(_dead_greenlet()) + gc.collect() + self.assertEqual(o(), None) + + def test_inactive_weakref(self): + o = weakref.ref(greenlet.greenlet()) + gc.collect() + self.assertEqual(o(), None) + + def test_dealloc_weakref(self): + seen = [] + def worker(): + try: + greenlet.getcurrent().parent.switch() + finally: + seen.append(g()) + g = greenlet.greenlet(worker) + g.switch() + g2 = greenlet.greenlet(lambda: None, g) + g = weakref.ref(g2) + g2 = None + self.assertEqual(seen, [None]) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/LICENSE.txt b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/LICENSE.txt new file mode 100644 index 000000000..7b190ca67 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/LICENSE.txt @@ -0,0 +1,28 @@ +Copyright 2011 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/METADATA new file mode 100644 index 000000000..ddf546484 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/METADATA @@ -0,0 +1,60 @@ +Metadata-Version: 2.1 +Name: itsdangerous +Version: 2.2.0 +Summary: Safely pass data to untrusted environments and back. 
+Maintainer-email: Pallets +Requires-Python: >=3.8 +Description-Content-Type: text/markdown +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Typing :: Typed +Project-URL: Changes, https://itsdangerous.palletsprojects.com/changes/ +Project-URL: Chat, https://discord.gg/pallets +Project-URL: Documentation, https://itsdangerous.palletsprojects.com/ +Project-URL: Donate, https://palletsprojects.com/donate +Project-URL: Source, https://github.com/pallets/itsdangerous/ + +# ItsDangerous + +... so better sign this + +Various helpers to pass data to untrusted environments and to get it +back safe and sound. Data is cryptographically signed to ensure that a +token has not been tampered with. + +It's possible to customize how data is serialized. Data is compressed as +needed. A timestamp can be added and verified automatically while +loading a token. + + +## A Simple Example + +Here's how you could generate a token for transmitting a user's id and +name between web requests. + +```python +from itsdangerous import URLSafeSerializer +auth_s = URLSafeSerializer("secret key", "auth") +token = auth_s.dumps({"id": 5, "name": "itsdangerous"}) + +print(token) +# eyJpZCI6NSwibmFtZSI6Iml0c2Rhbmdlcm91cyJ9.6YP6T0BaO67XP--9UzTrmurXSmg + +data = auth_s.loads(token) +print(data["name"]) +# itsdangerous +``` + + +## Donate + +The Pallets organization develops and supports ItsDangerous and other +popular packages. In order to grow the community of contributors and +users, and allow the maintainers to devote more time to the projects, +[please donate today][]. + +[please donate today]: https://palletsprojects.com/donate + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/RECORD new file mode 100644 index 000000000..255db20b5 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/RECORD @@ -0,0 +1,22 @@ +itsdangerous-2.2.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +itsdangerous-2.2.0.dist-info/LICENSE.txt,sha256=Y68JiRtr6K0aQlLtQ68PTvun_JSOIoNnvtfzxa4LCdc,1475 +itsdangerous-2.2.0.dist-info/METADATA,sha256=0rk0-1ZwihuU5DnwJVwPWoEI4yWOyCexih3JyZHblhE,1924 +itsdangerous-2.2.0.dist-info/RECORD,, +itsdangerous-2.2.0.dist-info/WHEEL,sha256=EZbGkh7Ie4PoZfRQ8I0ZuP9VklN_TvcZ6DSE5Uar4z4,81 +itsdangerous/__init__.py,sha256=4SK75sCe29xbRgQE1ZQtMHnKUuZYAf3bSpZOrff1IAY,1427 +itsdangerous/__pycache__/__init__.cpython-310.pyc,, +itsdangerous/__pycache__/_json.cpython-310.pyc,, +itsdangerous/__pycache__/encoding.cpython-310.pyc,, +itsdangerous/__pycache__/exc.cpython-310.pyc,, +itsdangerous/__pycache__/serializer.cpython-310.pyc,, +itsdangerous/__pycache__/signer.cpython-310.pyc,, +itsdangerous/__pycache__/timed.cpython-310.pyc,, +itsdangerous/__pycache__/url_safe.cpython-310.pyc,, +itsdangerous/_json.py,sha256=wPQGmge2yZ9328EHKF6gadGeyGYCJQKxtU-iLKE6UnA,473 +itsdangerous/encoding.py,sha256=wwTz5q_3zLcaAdunk6_vSoStwGqYWe307Zl_U87aRFM,1409 +itsdangerous/exc.py,sha256=Rr3exo0MRFEcPZltwecyK16VV1bE2K9_F1-d-ljcUn4,3201 +itsdangerous/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +itsdangerous/serializer.py,sha256=PmdwADLqkSyQLZ0jOKAgDsAW4k_H0TlA71Ei3z0C5aI,15601 
+itsdangerous/signer.py,sha256=YO0CV7NBvHA6j549REHJFUjUojw2pHqwcUpQnU7yNYQ,9647 +itsdangerous/timed.py,sha256=6RvDMqNumGMxf0-HlpaZdN9PUQQmRvrQGplKhxuivUs,8083 +itsdangerous/url_safe.py,sha256=az4e5fXi_vs-YbWj8YZwn4wiVKfeD--GEKRT5Ueu4P4,2505 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/WHEEL new file mode 100644 index 000000000..3b5e64b5e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous-2.2.0.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: flit 3.9.0 +Root-Is-Purelib: true +Tag: py3-none-any diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/__init__.py new file mode 100644 index 000000000..ea55256eb --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/__init__.py @@ -0,0 +1,38 @@ +from __future__ import annotations + +import typing as t + +from .encoding import base64_decode as base64_decode +from .encoding import base64_encode as base64_encode +from .encoding import want_bytes as want_bytes +from .exc import BadData as BadData +from .exc import BadHeader as BadHeader +from .exc import BadPayload as BadPayload +from .exc import BadSignature as BadSignature +from .exc import BadTimeSignature as BadTimeSignature +from .exc import SignatureExpired as SignatureExpired +from .serializer import Serializer as Serializer +from .signer import HMACAlgorithm as HMACAlgorithm +from .signer import NoneAlgorithm as NoneAlgorithm +from .signer import Signer as Signer +from .timed import TimedSerializer as TimedSerializer +from .timed import TimestampSigner as TimestampSigner +from .url_safe import URLSafeSerializer as URLSafeSerializer +from .url_safe import URLSafeTimedSerializer as URLSafeTimedSerializer + + +def __getattr__(name: str) -> t.Any: + if name == "__version__": + import importlib.metadata + import warnings + + warnings.warn( + "The '__version__' attribute is deprecated and will be removed in" + " ItsDangerous 2.3. 
Use feature detection or" + " 'importlib.metadata.version(\"itsdangerous\")' instead.", + DeprecationWarning, + stacklevel=2, + ) + return importlib.metadata.version("itsdangerous") + + raise AttributeError(name) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/_json.py b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/_json.py new file mode 100644 index 000000000..fc23feaaf --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/_json.py @@ -0,0 +1,18 @@ +from __future__ import annotations + +import json as _json +import typing as t + + +class _CompactJSON: + """Wrapper around json module that strips whitespace.""" + + @staticmethod + def loads(payload: str | bytes) -> t.Any: + return _json.loads(payload) + + @staticmethod + def dumps(obj: t.Any, **kwargs: t.Any) -> str: + kwargs.setdefault("ensure_ascii", False) + kwargs.setdefault("separators", (",", ":")) + return _json.dumps(obj, **kwargs) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/encoding.py b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/encoding.py new file mode 100644 index 000000000..f5ca80f90 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/encoding.py @@ -0,0 +1,54 @@ +from __future__ import annotations + +import base64 +import string +import struct +import typing as t + +from .exc import BadData + + +def want_bytes( + s: str | bytes, encoding: str = "utf-8", errors: str = "strict" +) -> bytes: + if isinstance(s, str): + s = s.encode(encoding, errors) + + return s + + +def base64_encode(string: str | bytes) -> bytes: + """Base64 encode a string of bytes or text. The resulting bytes are + safe to use in URLs. + """ + string = want_bytes(string) + return base64.urlsafe_b64encode(string).rstrip(b"=") + + +def base64_decode(string: str | bytes) -> bytes: + """Base64 decode a URL-safe string of bytes or text. The result is + bytes. + """ + string = want_bytes(string, encoding="ascii", errors="ignore") + string += b"=" * (-len(string) % 4) + + try: + return base64.urlsafe_b64decode(string) + except (TypeError, ValueError) as e: + raise BadData("Invalid base64-encoded data") from e + + +# The alphabet used by base64.urlsafe_* +_base64_alphabet = f"{string.ascii_letters}{string.digits}-_=".encode("ascii") + +_int64_struct = struct.Struct(">Q") +_int_to_bytes = _int64_struct.pack +_bytes_to_int = t.cast("t.Callable[[bytes], tuple[int]]", _int64_struct.unpack) + + +def int_to_bytes(num: int) -> bytes: + return _int_to_bytes(num).lstrip(b"\x00") + + +def bytes_to_int(bytestr: bytes) -> int: + return _bytes_to_int(bytestr.rjust(8, b"\x00"))[0] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/exc.py b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/exc.py new file mode 100644 index 000000000..a75adcd52 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/exc.py @@ -0,0 +1,106 @@ +from __future__ import annotations + +import typing as t +from datetime import datetime + + +class BadData(Exception): + """Raised if bad data of any sort was encountered. This is the base + for all exceptions that ItsDangerous defines. + + .. 
versionadded:: 0.15
+    """
+
+    def __init__(self, message: str):
+        super().__init__(message)
+        self.message = message
+
+    def __str__(self) -> str:
+        return self.message
+
+
+class BadSignature(BadData):
+    """Raised if a signature does not match."""
+
+    def __init__(self, message: str, payload: t.Any | None = None):
+        super().__init__(message)
+
+        #: The payload that failed the signature test. In some
+        #: situations you might still want to inspect this, even if
+        #: you know it was tampered with.
+        #:
+        #: .. versionadded:: 0.14
+        self.payload: t.Any | None = payload
+
+
+class BadTimeSignature(BadSignature):
+    """Raised if a time-based signature is invalid. This is a subclass
+    of :class:`BadSignature`.
+    """
+
+    def __init__(
+        self,
+        message: str,
+        payload: t.Any | None = None,
+        date_signed: datetime | None = None,
+    ):
+        super().__init__(message, payload)
+
+        #: If the signature expired this exposes the date of when the
+        #: signature was created. This can be helpful in order to
+        #: tell the user how long ago a link went stale.
+        #:
+        #: .. versionchanged:: 2.0
+        #: The datetime value is timezone-aware rather than naive.
+        #:
+        #: .. versionadded:: 0.14
+        self.date_signed = date_signed
+
+
+class SignatureExpired(BadTimeSignature):
+    """Raised if a signature timestamp is older than ``max_age``. This
+    is a subclass of :exc:`BadTimeSignature`.
+    """
+
+
+class BadHeader(BadSignature):
+    """Raised if a signed header is invalid in some form. This only
+    happens for serializers that have a header that goes with the
+    signature.
+
+    .. versionadded:: 0.24
+    """
+
+    def __init__(
+        self,
+        message: str,
+        payload: t.Any | None = None,
+        header: t.Any | None = None,
+        original_error: Exception | None = None,
+    ):
+        super().__init__(message, payload)
+
+        #: If the header is actually available but just malformed it
+        #: might be stored here.
+        self.header: t.Any | None = header
+
+        #: If available, the error that indicates why the payload was
+        #: not valid. This might be ``None``.
+        self.original_error: Exception | None = original_error
+
+
+class BadPayload(BadData):
+    """Raised if a payload is invalid. This could happen if the payload
+    is loaded despite an invalid signature, or if there is a mismatch
+    between the serializer and deserializer. The original exception
+    that occurred during loading is stored on the exception as
+    :attr:`original_error`.
+
+    .. versionadded:: 0.15
+    """
+
+    def __init__(self, message: str, original_error: Exception | None = None):
+        super().__init__(message)
+
+        #: If available, the error that indicates why the payload was
+        #: not valid. This might be ``None``.
+        self.original_error: Exception | None = original_error
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/py.typed b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/py.typed
new file mode 100644
index 000000000..e69de29bb
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/serializer.py b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/serializer.py
new file mode 100644
index 000000000..5ddf3871d
--- /dev/null
+++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/serializer.py
@@ -0,0 +1,406 @@
+from __future__ import annotations
+
+import collections.abc as cabc
+import json
+import typing as t
+
+from .encoding import want_bytes
+from .exc import BadPayload
+from .exc import BadSignature
+from .signer import _make_keys_list
+from .signer import Signer
+
+if t.TYPE_CHECKING:
+    import typing_extensions as te
+
+    # This should be either str or bytes. To avoid having to specify the
+    # bound type, it falls back to a union if structural matching fails.
+    _TSerialized = te.TypeVar(
+        "_TSerialized", bound=t.Union[str, bytes], default=t.Union[str, bytes]
+    )
+else:
+    # Still available at runtime on Python < 3.13, but without the default.
+    _TSerialized = t.TypeVar("_TSerialized", bound=t.Union[str, bytes])
+
+
+class _PDataSerializer(t.Protocol[_TSerialized]):
+    def loads(self, payload: _TSerialized, /) -> t.Any: ...
+    # A signature with additional arguments is not handled correctly by type
+    # checkers right now, so an overload is used below for serializers that
+    # don't match this strict protocol.
+    def dumps(self, obj: t.Any, /) -> _TSerialized: ...
+
+
+# Use TypeIs once it's available in typing_extensions or 3.13.
+def is_text_serializer(
+    serializer: _PDataSerializer[t.Any],
+) -> te.TypeGuard[_PDataSerializer[str]]:
+    """Checks whether a serializer generates text or binary."""
+    return isinstance(serializer.dumps({}), str)
+
+
+class Serializer(t.Generic[_TSerialized]):
+    """A serializer wraps a :class:`~itsdangerous.signer.Signer` to
+    enable serializing and securely signing data other than bytes. It
+    can unsign to verify that the data hasn't been changed.
+
+    The serializer provides :meth:`dumps` and :meth:`loads`, similar to
+    :mod:`json`, and by default uses :mod:`json` internally to serialize
+    the data to bytes.
+
+    The secret key should be a random string of ``bytes`` and should not
+    be saved to code or version control. Different salts should be used
+    to distinguish signing in different contexts. See :doc:`/concepts`
+    for information about the security of the secret key and salt.
+
+    :param secret_key: The secret key to sign and verify with. Can be a
+        list of keys, oldest to newest, to support key rotation.
+    :param salt: Extra key to combine with ``secret_key`` to distinguish
+        signatures in different contexts.
+    :param serializer: An object that provides ``dumps`` and ``loads``
+        methods for serializing data to a string. Defaults to
+        :attr:`default_serializer`, which defaults to :mod:`json`.
+    :param serializer_kwargs: Keyword arguments to pass when calling
+        ``serializer.dumps``.
+    :param signer: A ``Signer`` class to instantiate when signing data.
+        Defaults to :attr:`default_signer`, which defaults to
+        :class:`~itsdangerous.signer.Signer`.
+    :param signer_kwargs: Keyword arguments to pass when instantiating
+        the ``Signer`` class.
+    :param fallback_signers: List of signer parameters to try when
+        unsigning with the default signer fails. 
Each item can be a dict + of ``signer_kwargs``, a ``Signer`` class, or a tuple of + ``(signer, signer_kwargs)``. Defaults to + :attr:`default_fallback_signers`. + + .. versionchanged:: 2.0 + Added support for key rotation by passing a list to + ``secret_key``. + + .. versionchanged:: 2.0 + Removed the default SHA-512 fallback signer from + ``default_fallback_signers``. + + .. versionchanged:: 1.1 + Added support for ``fallback_signers`` and configured a default + SHA-512 fallback. This fallback is for users who used the yanked + 1.0.0 release which defaulted to SHA-512. + + .. versionchanged:: 0.14 + The ``signer`` and ``signer_kwargs`` parameters were added to + the constructor. + """ + + #: The default serialization module to use to serialize data to a + #: string internally. The default is :mod:`json`, but can be changed + #: to any object that provides ``dumps`` and ``loads`` methods. + default_serializer: _PDataSerializer[t.Any] = json + + #: The default ``Signer`` class to instantiate when signing data. + #: The default is :class:`itsdangerous.signer.Signer`. + default_signer: type[Signer] = Signer + + #: The default fallback signers to try when unsigning fails. + default_fallback_signers: list[ + dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer] + ] = [] + + # Serializer[str] if no data serializer is provided, or if it returns str. + @t.overload + def __init__( + self: Serializer[str], + secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes], + salt: str | bytes | None = b"itsdangerous", + serializer: None | _PDataSerializer[str] = None, + serializer_kwargs: dict[str, t.Any] | None = None, + signer: type[Signer] | None = None, + signer_kwargs: dict[str, t.Any] | None = None, + fallback_signers: list[ + dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer] + ] + | None = None, + ): ... + + # Serializer[bytes] with a bytes data serializer positional argument. + @t.overload + def __init__( + self: Serializer[bytes], + secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes], + salt: str | bytes | None, + serializer: _PDataSerializer[bytes], + serializer_kwargs: dict[str, t.Any] | None = None, + signer: type[Signer] | None = None, + signer_kwargs: dict[str, t.Any] | None = None, + fallback_signers: list[ + dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer] + ] + | None = None, + ): ... + + # Serializer[bytes] with a bytes data serializer keyword argument. + @t.overload + def __init__( + self: Serializer[bytes], + secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes], + salt: str | bytes | None = b"itsdangerous", + *, + serializer: _PDataSerializer[bytes], + serializer_kwargs: dict[str, t.Any] | None = None, + signer: type[Signer] | None = None, + signer_kwargs: dict[str, t.Any] | None = None, + fallback_signers: list[ + dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer] + ] + | None = None, + ): ... + + # Fall back with a positional argument. If the strict signature of + # _PDataSerializer doesn't match, fall back to a union, requiring the user + # to specify the type. 
+ @t.overload + def __init__( + self, + secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes], + salt: str | bytes | None, + serializer: t.Any, + serializer_kwargs: dict[str, t.Any] | None = None, + signer: type[Signer] | None = None, + signer_kwargs: dict[str, t.Any] | None = None, + fallback_signers: list[ + dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer] + ] + | None = None, + ): ... + + # Fall back with a keyword argument. + @t.overload + def __init__( + self, + secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes], + salt: str | bytes | None = b"itsdangerous", + *, + serializer: t.Any, + serializer_kwargs: dict[str, t.Any] | None = None, + signer: type[Signer] | None = None, + signer_kwargs: dict[str, t.Any] | None = None, + fallback_signers: list[ + dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer] + ] + | None = None, + ): ... + + def __init__( + self, + secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes], + salt: str | bytes | None = b"itsdangerous", + serializer: t.Any | None = None, + serializer_kwargs: dict[str, t.Any] | None = None, + signer: type[Signer] | None = None, + signer_kwargs: dict[str, t.Any] | None = None, + fallback_signers: list[ + dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer] + ] + | None = None, + ): + #: The list of secret keys to try for verifying signatures, from + #: oldest to newest. The newest (last) key is used for signing. + #: + #: This allows a key rotation system to keep a list of allowed + #: keys and remove expired ones. + self.secret_keys: list[bytes] = _make_keys_list(secret_key) + + if salt is not None: + salt = want_bytes(salt) + # if salt is None then the signer's default is used + + self.salt = salt + + if serializer is None: + serializer = self.default_serializer + + self.serializer: _PDataSerializer[_TSerialized] = serializer + self.is_text_serializer: bool = is_text_serializer(serializer) + + if signer is None: + signer = self.default_signer + + self.signer: type[Signer] = signer + self.signer_kwargs: dict[str, t.Any] = signer_kwargs or {} + + if fallback_signers is None: + fallback_signers = list(self.default_fallback_signers) + + self.fallback_signers: list[ + dict[str, t.Any] | tuple[type[Signer], dict[str, t.Any]] | type[Signer] + ] = fallback_signers + self.serializer_kwargs: dict[str, t.Any] = serializer_kwargs or {} + + @property + def secret_key(self) -> bytes: + """The newest (last) entry in the :attr:`secret_keys` list. This + is for compatibility from before key rotation support was added. + """ + return self.secret_keys[-1] + + def load_payload( + self, payload: bytes, serializer: _PDataSerializer[t.Any] | None = None + ) -> t.Any: + """Loads the encoded object. This function raises + :class:`.BadPayload` if the payload is not valid. The + ``serializer`` parameter can be used to override the serializer + stored on the class. The encoded ``payload`` should always be + bytes. 
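+
+        A small illustration, assuming the default JSON serializer and a
+        placeholder key (note that no signature check happens here)::
+
+            Serializer("secret-key").load_payload(b'{"id": 1}')  # {'id': 1}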
+ """ + if serializer is None: + use_serializer = self.serializer + is_text = self.is_text_serializer + else: + use_serializer = serializer + is_text = is_text_serializer(serializer) + + try: + if is_text: + return use_serializer.loads(payload.decode("utf-8")) # type: ignore[arg-type] + + return use_serializer.loads(payload) # type: ignore[arg-type] + except Exception as e: + raise BadPayload( + "Could not load the payload because an exception" + " occurred on unserializing the data.", + original_error=e, + ) from e + + def dump_payload(self, obj: t.Any) -> bytes: + """Dumps the encoded object. The return value is always bytes. + If the internal serializer returns text, the value will be + encoded as UTF-8. + """ + return want_bytes(self.serializer.dumps(obj, **self.serializer_kwargs)) + + def make_signer(self, salt: str | bytes | None = None) -> Signer: + """Creates a new instance of the signer to be used. The default + implementation uses the :class:`.Signer` base class. + """ + if salt is None: + salt = self.salt + + return self.signer(self.secret_keys, salt=salt, **self.signer_kwargs) + + def iter_unsigners(self, salt: str | bytes | None = None) -> cabc.Iterator[Signer]: + """Iterates over all signers to be tried for unsigning. Starts + with the configured signer, then constructs each signer + specified in ``fallback_signers``. + """ + if salt is None: + salt = self.salt + + yield self.make_signer(salt) + + for fallback in self.fallback_signers: + if isinstance(fallback, dict): + kwargs = fallback + fallback = self.signer + elif isinstance(fallback, tuple): + fallback, kwargs = fallback + else: + kwargs = self.signer_kwargs + + for secret_key in self.secret_keys: + yield fallback(secret_key, salt=salt, **kwargs) + + def dumps(self, obj: t.Any, salt: str | bytes | None = None) -> _TSerialized: + """Returns a signed string serialized with the internal + serializer. The return value can be either a byte or unicode + string depending on the format of the internal serializer. + """ + payload = want_bytes(self.dump_payload(obj)) + rv = self.make_signer(salt).sign(payload) + + if self.is_text_serializer: + return rv.decode("utf-8") # type: ignore[return-value] + + return rv # type: ignore[return-value] + + def dump(self, obj: t.Any, f: t.IO[t.Any], salt: str | bytes | None = None) -> None: + """Like :meth:`dumps` but dumps into a file. The file handle has + to be compatible with what the internal serializer expects. + """ + f.write(self.dumps(obj, salt)) + + def loads( + self, s: str | bytes, salt: str | bytes | None = None, **kwargs: t.Any + ) -> t.Any: + """Reverse of :meth:`dumps`. Raises :exc:`.BadSignature` if the + signature validation fails. + """ + s = want_bytes(s) + last_exception = None + + for signer in self.iter_unsigners(salt): + try: + return self.load_payload(signer.unsign(s)) + except BadSignature as err: + last_exception = err + + raise t.cast(BadSignature, last_exception) + + def load(self, f: t.IO[t.Any], salt: str | bytes | None = None) -> t.Any: + """Like :meth:`loads` but loads from a file.""" + return self.loads(f.read(), salt) + + def loads_unsafe( + self, s: str | bytes, salt: str | bytes | None = None + ) -> tuple[bool, t.Any]: + """Like :meth:`loads` but without verifying the signature. This + is potentially very dangerous to use depending on how your + serializer works. The return value is ``(signature_valid, + payload)`` instead of just the payload. The first item will be a + boolean that indicates if the signature is valid. This function + never fails. 
+ + Use it for debugging only and if you know that your serializer + module is not exploitable (for example, do not use it with a + pickle serializer). + + .. versionadded:: 0.15 + """ + return self._loads_unsafe_impl(s, salt) + + def _loads_unsafe_impl( + self, + s: str | bytes, + salt: str | bytes | None, + load_kwargs: dict[str, t.Any] | None = None, + load_payload_kwargs: dict[str, t.Any] | None = None, + ) -> tuple[bool, t.Any]: + """Low level helper function to implement :meth:`loads_unsafe` + in serializer subclasses. + """ + if load_kwargs is None: + load_kwargs = {} + + try: + return True, self.loads(s, salt=salt, **load_kwargs) + except BadSignature as e: + if e.payload is None: + return False, None + + if load_payload_kwargs is None: + load_payload_kwargs = {} + + try: + return ( + False, + self.load_payload(e.payload, **load_payload_kwargs), + ) + except BadPayload: + return False, None + + def load_unsafe( + self, f: t.IO[t.Any], salt: str | bytes | None = None + ) -> tuple[bool, t.Any]: + """Like :meth:`loads_unsafe` but loads from a file. + + .. versionadded:: 0.15 + """ + return self.loads_unsafe(f.read(), salt=salt) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/signer.py b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/signer.py new file mode 100644 index 000000000..e324dc03d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/signer.py @@ -0,0 +1,266 @@ +from __future__ import annotations + +import collections.abc as cabc +import hashlib +import hmac +import typing as t + +from .encoding import _base64_alphabet +from .encoding import base64_decode +from .encoding import base64_encode +from .encoding import want_bytes +from .exc import BadSignature + + +class SigningAlgorithm: + """Subclasses must implement :meth:`get_signature` to provide + signature generation functionality. + """ + + def get_signature(self, key: bytes, value: bytes) -> bytes: + """Returns the signature for the given key and value.""" + raise NotImplementedError() + + def verify_signature(self, key: bytes, value: bytes, sig: bytes) -> bool: + """Verifies the given signature matches the expected + signature. + """ + return hmac.compare_digest(sig, self.get_signature(key, value)) + + +class NoneAlgorithm(SigningAlgorithm): + """Provides an algorithm that does not perform any signing and + returns an empty signature. + """ + + def get_signature(self, key: bytes, value: bytes) -> bytes: + return b"" + + +def _lazy_sha1(string: bytes = b"") -> t.Any: + """Don't access ``hashlib.sha1`` until runtime. FIPS builds may not include + SHA-1, in which case the import and use as a default would fail before the + developer can configure something else. + """ + return hashlib.sha1(string) + + +class HMACAlgorithm(SigningAlgorithm): + """Provides signature generation using HMACs.""" + + #: The digest method to use with the MAC algorithm. This defaults to + #: SHA1, but can be changed to any other function in the hashlib + #: module. 
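+    #: For example, ``HMACAlgorithm(hashlib.sha256)`` would select SHA-256
+    #: instead; SHA-1 remains the default.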
+ default_digest_method: t.Any = staticmethod(_lazy_sha1) + + def __init__(self, digest_method: t.Any = None): + if digest_method is None: + digest_method = self.default_digest_method + + self.digest_method: t.Any = digest_method + + def get_signature(self, key: bytes, value: bytes) -> bytes: + mac = hmac.new(key, msg=value, digestmod=self.digest_method) + return mac.digest() + + +def _make_keys_list( + secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes], +) -> list[bytes]: + if isinstance(secret_key, (str, bytes)): + return [want_bytes(secret_key)] + + return [want_bytes(s) for s in secret_key] # pyright: ignore + + +class Signer: + """A signer securely signs bytes, then unsigns them to verify that + the value hasn't been changed. + + The secret key should be a random string of ``bytes`` and should not + be saved to code or version control. Different salts should be used + to distinguish signing in different contexts. See :doc:`/concepts` + for information about the security of the secret key and salt. + + :param secret_key: The secret key to sign and verify with. Can be a + list of keys, oldest to newest, to support key rotation. + :param salt: Extra key to combine with ``secret_key`` to distinguish + signatures in different contexts. + :param sep: Separator between the signature and value. + :param key_derivation: How to derive the signing key from the secret + key and salt. Possible values are ``concat``, ``django-concat``, + or ``hmac``. Defaults to :attr:`default_key_derivation`, which + defaults to ``django-concat``. + :param digest_method: Hash function to use when generating the HMAC + signature. Defaults to :attr:`default_digest_method`, which + defaults to :func:`hashlib.sha1`. Note that the security of the + hash alone doesn't apply when used intermediately in HMAC. + :param algorithm: A :class:`SigningAlgorithm` instance to use + instead of building a default :class:`HMACAlgorithm` with the + ``digest_method``. + + .. versionchanged:: 2.0 + Added support for key rotation by passing a list to + ``secret_key``. + + .. versionchanged:: 0.18 + ``algorithm`` was added as an argument to the class constructor. + + .. versionchanged:: 0.14 + ``key_derivation`` and ``digest_method`` were added as arguments + to the class constructor. + """ + + #: The default digest method to use for the signer. The default is + #: :func:`hashlib.sha1`, but can be changed to any :mod:`hashlib` or + #: compatible object. Note that the security of the hash alone + #: doesn't apply when used intermediately in HMAC. + #: + #: .. versionadded:: 0.14 + default_digest_method: t.Any = staticmethod(_lazy_sha1) + + #: The default scheme to use to derive the signing key from the + #: secret key and salt. The default is ``django-concat``. Possible + #: values are ``concat``, ``django-concat``, and ``hmac``. + #: + #: .. versionadded:: 0.14 + default_key_derivation: str = "django-concat" + + def __init__( + self, + secret_key: str | bytes | cabc.Iterable[str] | cabc.Iterable[bytes], + salt: str | bytes | None = b"itsdangerous.Signer", + sep: str | bytes = b".", + key_derivation: str | None = None, + digest_method: t.Any | None = None, + algorithm: SigningAlgorithm | None = None, + ): + #: The list of secret keys to try for verifying signatures, from + #: oldest to newest. The newest (last) key is used for signing. + #: + #: This allows a key rotation system to keep a list of allowed + #: keys and remove expired ones. 
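+        #: For example (illustrative), ``Signer([b"old", b"new"])`` signs
+        #: with ``b"new"`` but still verifies values signed with ``b"old"``.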
+ self.secret_keys: list[bytes] = _make_keys_list(secret_key) + self.sep: bytes = want_bytes(sep) + + if self.sep in _base64_alphabet: + raise ValueError( + "The given separator cannot be used because it may be" + " contained in the signature itself. ASCII letters," + " digits, and '-_=' must not be used." + ) + + if salt is not None: + salt = want_bytes(salt) + else: + salt = b"itsdangerous.Signer" + + self.salt = salt + + if key_derivation is None: + key_derivation = self.default_key_derivation + + self.key_derivation: str = key_derivation + + if digest_method is None: + digest_method = self.default_digest_method + + self.digest_method: t.Any = digest_method + + if algorithm is None: + algorithm = HMACAlgorithm(self.digest_method) + + self.algorithm: SigningAlgorithm = algorithm + + @property + def secret_key(self) -> bytes: + """The newest (last) entry in the :attr:`secret_keys` list. This + is for compatibility from before key rotation support was added. + """ + return self.secret_keys[-1] + + def derive_key(self, secret_key: str | bytes | None = None) -> bytes: + """This method is called to derive the key. The default key + derivation choices can be overridden here. Key derivation is not + intended to be used as a security method to make a complex key + out of a short password. Instead you should use large random + secret keys. + + :param secret_key: A specific secret key to derive from. + Defaults to the last item in :attr:`secret_keys`. + + .. versionchanged:: 2.0 + Added the ``secret_key`` parameter. + """ + if secret_key is None: + secret_key = self.secret_keys[-1] + else: + secret_key = want_bytes(secret_key) + + if self.key_derivation == "concat": + return t.cast(bytes, self.digest_method(self.salt + secret_key).digest()) + elif self.key_derivation == "django-concat": + return t.cast( + bytes, self.digest_method(self.salt + b"signer" + secret_key).digest() + ) + elif self.key_derivation == "hmac": + mac = hmac.new(secret_key, digestmod=self.digest_method) + mac.update(self.salt) + return mac.digest() + elif self.key_derivation == "none": + return secret_key + else: + raise TypeError("Unknown key derivation method") + + def get_signature(self, value: str | bytes) -> bytes: + """Returns the signature for the given value.""" + value = want_bytes(value) + key = self.derive_key() + sig = self.algorithm.get_signature(key, value) + return base64_encode(sig) + + def sign(self, value: str | bytes) -> bytes: + """Signs the given string.""" + value = want_bytes(value) + return value + self.sep + self.get_signature(value) + + def verify_signature(self, value: str | bytes, sig: str | bytes) -> bool: + """Verifies the signature for the given value.""" + try: + sig = base64_decode(sig) + except Exception: + return False + + value = want_bytes(value) + + for secret_key in reversed(self.secret_keys): + key = self.derive_key(secret_key) + + if self.algorithm.verify_signature(key, value, sig): + return True + + return False + + def unsign(self, signed_value: str | bytes) -> bytes: + """Unsigns the given string.""" + signed_value = want_bytes(signed_value) + + if self.sep not in signed_value: + raise BadSignature(f"No {self.sep!r} found in value") + + value, sig = signed_value.rsplit(self.sep, 1) + + if self.verify_signature(value, sig): + return value + + raise BadSignature(f"Signature {sig!r} does not match", payload=value) + + def validate(self, signed_value: str | bytes) -> bool: + """Only validates the given signed value. Returns ``True`` if + the signature exists and is valid. 
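+
+        A quick round trip, using a placeholder key::
+
+            signer = Signer("secret-key")
+            signer.validate(signer.sign(b"hello"))  # True
+            signer.validate(b"hello.forged-sig")    # False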
+ """ + try: + self.unsign(signed_value) + return True + except BadSignature: + return False diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/timed.py b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/timed.py new file mode 100644 index 000000000..73843755d --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/timed.py @@ -0,0 +1,228 @@ +from __future__ import annotations + +import collections.abc as cabc +import time +import typing as t +from datetime import datetime +from datetime import timezone + +from .encoding import base64_decode +from .encoding import base64_encode +from .encoding import bytes_to_int +from .encoding import int_to_bytes +from .encoding import want_bytes +from .exc import BadSignature +from .exc import BadTimeSignature +from .exc import SignatureExpired +from .serializer import _TSerialized +from .serializer import Serializer +from .signer import Signer + + +class TimestampSigner(Signer): + """Works like the regular :class:`.Signer` but also records the time + of the signing and can be used to expire signatures. The + :meth:`unsign` method can raise :exc:`.SignatureExpired` if the + unsigning failed because the signature is expired. + """ + + def get_timestamp(self) -> int: + """Returns the current timestamp. The function must return an + integer. + """ + return int(time.time()) + + def timestamp_to_datetime(self, ts: int) -> datetime: + """Convert the timestamp from :meth:`get_timestamp` into an + aware :class`datetime.datetime` in UTC. + + .. versionchanged:: 2.0 + The timestamp is returned as a timezone-aware ``datetime`` + in UTC rather than a naive ``datetime`` assumed to be UTC. + """ + return datetime.fromtimestamp(ts, tz=timezone.utc) + + def sign(self, value: str | bytes) -> bytes: + """Signs the given string and also attaches time information.""" + value = want_bytes(value) + timestamp = base64_encode(int_to_bytes(self.get_timestamp())) + sep = want_bytes(self.sep) + value = value + sep + timestamp + return value + sep + self.get_signature(value) + + # Ignore overlapping signatures check, return_timestamp is the only + # parameter that affects the return type. + + @t.overload + def unsign( # type: ignore[overload-overlap] + self, + signed_value: str | bytes, + max_age: int | None = None, + return_timestamp: t.Literal[False] = False, + ) -> bytes: ... + + @t.overload + def unsign( + self, + signed_value: str | bytes, + max_age: int | None = None, + return_timestamp: t.Literal[True] = True, + ) -> tuple[bytes, datetime]: ... + + def unsign( + self, + signed_value: str | bytes, + max_age: int | None = None, + return_timestamp: bool = False, + ) -> tuple[bytes, datetime] | bytes: + """Works like the regular :meth:`.Signer.unsign` but can also + validate the time. See the base docstring of the class for + the general behavior. If ``return_timestamp`` is ``True`` the + timestamp of the signature will be returned as an aware + :class:`datetime.datetime` object in UTC. + + .. versionchanged:: 2.0 + The timestamp is returned as a timezone-aware ``datetime`` + in UTC rather than a naive ``datetime`` assumed to be UTC. + """ + try: + result = super().unsign(signed_value) + sig_error = None + except BadSignature as e: + sig_error = e + result = e.payload or b"" + + sep = want_bytes(self.sep) + + # If there is no timestamp in the result there is something + # seriously wrong. 
In case there was a signature error, we raise + # that one directly, otherwise we have a weird situation in + # which we shouldn't have come except someone uses a time-based + # serializer on non-timestamp data, so catch that. + if sep not in result: + if sig_error: + raise sig_error + + raise BadTimeSignature("timestamp missing", payload=result) + + value, ts_bytes = result.rsplit(sep, 1) + ts_int: int | None = None + ts_dt: datetime | None = None + + try: + ts_int = bytes_to_int(base64_decode(ts_bytes)) + except Exception: + pass + + # Signature is *not* okay. Raise a proper error now that we have + # split the value and the timestamp. + if sig_error is not None: + if ts_int is not None: + try: + ts_dt = self.timestamp_to_datetime(ts_int) + except (ValueError, OSError, OverflowError) as exc: + # Windows raises OSError + # 32-bit raises OverflowError + raise BadTimeSignature( + "Malformed timestamp", payload=value + ) from exc + + raise BadTimeSignature(str(sig_error), payload=value, date_signed=ts_dt) + + # Signature was okay but the timestamp is actually not there or + # malformed. Should not happen, but we handle it anyway. + if ts_int is None: + raise BadTimeSignature("Malformed timestamp", payload=value) + + # Check timestamp is not older than max_age + if max_age is not None: + age = self.get_timestamp() - ts_int + + if age > max_age: + raise SignatureExpired( + f"Signature age {age} > {max_age} seconds", + payload=value, + date_signed=self.timestamp_to_datetime(ts_int), + ) + + if age < 0: + raise SignatureExpired( + f"Signature age {age} < 0 seconds", + payload=value, + date_signed=self.timestamp_to_datetime(ts_int), + ) + + if return_timestamp: + return value, self.timestamp_to_datetime(ts_int) + + return value + + def validate(self, signed_value: str | bytes, max_age: int | None = None) -> bool: + """Only validates the given signed value. Returns ``True`` if + the signature exists and is valid.""" + try: + self.unsign(signed_value, max_age=max_age) + return True + except BadSignature: + return False + + +class TimedSerializer(Serializer[_TSerialized]): + """Uses :class:`TimestampSigner` instead of the default + :class:`.Signer`. + """ + + default_signer: type[TimestampSigner] = TimestampSigner + + def iter_unsigners( + self, salt: str | bytes | None = None + ) -> cabc.Iterator[TimestampSigner]: + return t.cast("cabc.Iterator[TimestampSigner]", super().iter_unsigners(salt)) + + # TODO: Signature is incompatible because parameters were added + # before salt. + + def loads( # type: ignore[override] + self, + s: str | bytes, + max_age: int | None = None, + return_timestamp: bool = False, + salt: str | bytes | None = None, + ) -> t.Any: + """Reverse of :meth:`dumps`, raises :exc:`.BadSignature` if the + signature validation fails. If a ``max_age`` is provided it will + ensure the signature is not older than that time in seconds. In + case the signature is outdated, :exc:`.SignatureExpired` is + raised. All arguments are forwarded to the signer's + :meth:`~TimestampSigner.unsign` method. + """ + s = want_bytes(s) + last_exception = None + + for signer in self.iter_unsigners(salt): + try: + base64d, timestamp = signer.unsign( + s, max_age=max_age, return_timestamp=True + ) + payload = self.load_payload(base64d) + + if return_timestamp: + return payload, timestamp + + return payload + except SignatureExpired: + # The signature was unsigned successfully but was + # expired. Do not try the next signer. 
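+                # (For example, loads(token, max_age=60) raises
+                # SignatureExpired for a token signed over a minute ago.)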
+ raise + except BadSignature as err: + last_exception = err + + raise t.cast(BadSignature, last_exception) + + def loads_unsafe( # type: ignore[override] + self, + s: str | bytes, + max_age: int | None = None, + salt: str | bytes | None = None, + ) -> tuple[bool, t.Any]: + return self._loads_unsafe_impl(s, salt, load_kwargs={"max_age": max_age}) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/url_safe.py b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/url_safe.py new file mode 100644 index 000000000..56a079331 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/itsdangerous/url_safe.py @@ -0,0 +1,83 @@ +from __future__ import annotations + +import typing as t +import zlib + +from ._json import _CompactJSON +from .encoding import base64_decode +from .encoding import base64_encode +from .exc import BadPayload +from .serializer import _PDataSerializer +from .serializer import Serializer +from .timed import TimedSerializer + + +class URLSafeSerializerMixin(Serializer[str]): + """Mixed in with a regular serializer it will attempt to zlib + compress the string to make it shorter if necessary. It will also + base64 encode the string so that it can safely be placed in a URL. + """ + + default_serializer: _PDataSerializer[str] = _CompactJSON + + def load_payload( + self, + payload: bytes, + *args: t.Any, + serializer: t.Any | None = None, + **kwargs: t.Any, + ) -> t.Any: + decompress = False + + if payload.startswith(b"."): + payload = payload[1:] + decompress = True + + try: + json = base64_decode(payload) + except Exception as e: + raise BadPayload( + "Could not base64 decode the payload because of an exception", + original_error=e, + ) from e + + if decompress: + try: + json = zlib.decompress(json) + except Exception as e: + raise BadPayload( + "Could not zlib decompress the payload before decoding the payload", + original_error=e, + ) from e + + return super().load_payload(json, *args, **kwargs) + + def dump_payload(self, obj: t.Any) -> bytes: + json = super().dump_payload(obj) + is_compressed = False + compressed = zlib.compress(json) + + if len(compressed) < (len(json) - 1): + json = compressed + is_compressed = True + + base64d = base64_encode(json) + + if is_compressed: + base64d = b"." + base64d + + return base64d + + +class URLSafeSerializer(URLSafeSerializerMixin, Serializer[str]): + """Works like :class:`.Serializer` but dumps and loads into a URL + safe string consisting of the upper and lowercase character of the + alphabet as well as ``'_'``, ``'-'`` and ``'.'``. + """ + + +class URLSafeTimedSerializer(URLSafeSerializerMixin, TimedSerializer[str]): + """Works like :class:`.TimedSerializer` but dumps and loads into a + URL safe string consisting of the upper and lowercase character of + the alphabet as well as ``'_'``, ``'-'`` and ``'.'``. 
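+
+    A typical round trip, using a placeholder key::
+
+        s = URLSafeTimedSerializer("secret-key")
+        token = s.dumps({"user_id": 1})
+        s.loads(token, max_age=3600)  # {'user_id': 1} or SignatureExpired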
+ """ diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/INSTALLER b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/INSTALLER new file mode 100644 index 000000000..a1b589e38 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/METADATA b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/METADATA new file mode 100644 index 000000000..ffef2ff3b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/METADATA @@ -0,0 +1,84 @@ +Metadata-Version: 2.4 +Name: Jinja2 +Version: 3.1.6 +Summary: A very fast and expressive template engine. +Maintainer-email: Pallets +Requires-Python: >=3.7 +Description-Content-Type: text/markdown +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Web Environment +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content +Classifier: Topic :: Text Processing :: Markup :: HTML +Classifier: Typing :: Typed +License-File: LICENSE.txt +Requires-Dist: MarkupSafe>=2.0 +Requires-Dist: Babel>=2.7 ; extra == "i18n" +Project-URL: Changes, https://jinja.palletsprojects.com/changes/ +Project-URL: Chat, https://discord.gg/pallets +Project-URL: Documentation, https://jinja.palletsprojects.com/ +Project-URL: Donate, https://palletsprojects.com/donate +Project-URL: Source, https://github.com/pallets/jinja/ +Provides-Extra: i18n + +# Jinja + +Jinja is a fast, expressive, extensible templating engine. Special +placeholders in the template allow writing code similar to Python +syntax. Then the template is passed data to render the final document. + +It includes: + +- Template inheritance and inclusion. +- Define and import macros within templates. +- HTML templates can use autoescaping to prevent XSS from untrusted + user input. +- A sandboxed environment can safely render untrusted templates. +- AsyncIO support for generating templates and calling async + functions. +- I18N support with Babel. +- Templates are compiled to optimized Python code just-in-time and + cached, or can be compiled ahead-of-time. +- Exceptions point to the correct line in templates to make debugging + easier. +- Extensible filters, tests, functions, and even syntax. + +Jinja's philosophy is that while application logic belongs in Python if +possible, it shouldn't make the template designer's job difficult by +restricting functionality too much. + + +## In A Nutshell + +```jinja +{% extends "base.html" %} +{% block title %}Members{% endblock %} +{% block content %} + +{% endblock %} +``` + +## Donate + +The Pallets organization develops and supports Jinja and other popular +packages. In order to grow the community of contributors and users, and +allow the maintainers to devote more time to the projects, [please +donate today][]. + +[please donate today]: https://palletsprojects.com/donate + +## Contributing + +See our [detailed contributing documentation][contrib] for many ways to +contribute, including reporting issues, requesting features, asking or answering +questions, and making PRs. 
+ +[contrib]: https://palletsprojects.com/contributing/ + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/RECORD b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/RECORD new file mode 100644 index 000000000..20d4fee9c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/RECORD @@ -0,0 +1,57 @@ +jinja2-3.1.6.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +jinja2-3.1.6.dist-info/METADATA,sha256=aMVUj7Z8QTKhOJjZsx7FDGvqKr3ZFdkh8hQ1XDpkmcg,2871 +jinja2-3.1.6.dist-info/RECORD,, +jinja2-3.1.6.dist-info/WHEEL,sha256=_2ozNFCLWc93bK4WKHCO-eDUENDlo-dgc9cU3qokYO4,82 +jinja2-3.1.6.dist-info/entry_points.txt,sha256=OL85gYU1eD8cuPlikifFngXpeBjaxl6rIJ8KkC_3r-I,58 +jinja2-3.1.6.dist-info/licenses/LICENSE.txt,sha256=O0nc7kEF6ze6wQ-vG-JgQI_oXSUrjp3y4JefweCUQ3s,1475 +jinja2/__init__.py,sha256=xxepO9i7DHsqkQrgBEduLtfoz2QCuT6_gbL4XSN1hbU,1928 +jinja2/__pycache__/__init__.cpython-310.pyc,, +jinja2/__pycache__/_identifier.cpython-310.pyc,, +jinja2/__pycache__/async_utils.cpython-310.pyc,, +jinja2/__pycache__/bccache.cpython-310.pyc,, +jinja2/__pycache__/compiler.cpython-310.pyc,, +jinja2/__pycache__/constants.cpython-310.pyc,, +jinja2/__pycache__/debug.cpython-310.pyc,, +jinja2/__pycache__/defaults.cpython-310.pyc,, +jinja2/__pycache__/environment.cpython-310.pyc,, +jinja2/__pycache__/exceptions.cpython-310.pyc,, +jinja2/__pycache__/ext.cpython-310.pyc,, +jinja2/__pycache__/filters.cpython-310.pyc,, +jinja2/__pycache__/idtracking.cpython-310.pyc,, +jinja2/__pycache__/lexer.cpython-310.pyc,, +jinja2/__pycache__/loaders.cpython-310.pyc,, +jinja2/__pycache__/meta.cpython-310.pyc,, +jinja2/__pycache__/nativetypes.cpython-310.pyc,, +jinja2/__pycache__/nodes.cpython-310.pyc,, +jinja2/__pycache__/optimizer.cpython-310.pyc,, +jinja2/__pycache__/parser.cpython-310.pyc,, +jinja2/__pycache__/runtime.cpython-310.pyc,, +jinja2/__pycache__/sandbox.cpython-310.pyc,, +jinja2/__pycache__/tests.cpython-310.pyc,, +jinja2/__pycache__/utils.cpython-310.pyc,, +jinja2/__pycache__/visitor.cpython-310.pyc,, +jinja2/_identifier.py,sha256=_zYctNKzRqlk_murTNlzrju1FFJL7Va_Ijqqd7ii2lU,1958 +jinja2/async_utils.py,sha256=vK-PdsuorOMnWSnEkT3iUJRIkTnYgO2T6MnGxDgHI5o,2834 +jinja2/bccache.py,sha256=gh0qs9rulnXo0PhX5jTJy2UHzI8wFnQ63o_vw7nhzRg,14061 +jinja2/compiler.py,sha256=9RpCQl5X88BHllJiPsHPh295Hh0uApvwFJNQuutULeM,74131 +jinja2/constants.py,sha256=GMoFydBF_kdpaRKPoM5cl5MviquVRLVyZtfp5-16jg0,1433 +jinja2/debug.py,sha256=CnHqCDHd-BVGvti_8ZsTolnXNhA3ECsY-6n_2pwU8Hw,6297 +jinja2/defaults.py,sha256=boBcSw78h-lp20YbaXSJsqkAI2uN_mD_TtCydpeq5wU,1267 +jinja2/environment.py,sha256=9nhrP7Ch-NbGX00wvyr4yy-uhNHq2OCc60ggGrni_fk,61513 +jinja2/exceptions.py,sha256=ioHeHrWwCWNaXX1inHmHVblvc4haO7AXsjCp3GfWvx0,5071 +jinja2/ext.py,sha256=5PF5eHfh8mXAIxXHHRB2xXbXohi8pE3nHSOxa66uS7E,31875 +jinja2/filters.py,sha256=PQ_Egd9n9jSgtnGQYyF4K5j2nYwhUIulhPnyimkdr-k,55212 +jinja2/idtracking.py,sha256=-ll5lIp73pML3ErUYiIJj7tdmWxcH_IlDv3yA_hiZYo,10555 +jinja2/lexer.py,sha256=LYiYio6br-Tep9nPcupWXsPEtjluw3p1mU-lNBVRUfk,29786 +jinja2/loaders.py,sha256=wIrnxjvcbqh5VwW28NSkfotiDq8qNCxIOSFbGUiSLB4,24055 +jinja2/meta.py,sha256=OTDPkaFvU2Hgvx-6akz7154F8BIWaRmvJcBFvwopHww,4397 +jinja2/nativetypes.py,sha256=7GIGALVJgdyL80oZJdQUaUfwSt5q2lSSZbXt0dNf_M4,4210 +jinja2/nodes.py,sha256=m1Duzcr6qhZI8JQ6VyJgUNinjAf5bQzijSmDnMsvUx8,34579 +jinja2/optimizer.py,sha256=rJnCRlQ7pZsEEmMhsQDgC_pKyDHxP5TPS6zVPGsgcu8,1651 
+jinja2/parser.py,sha256=lLOFy3sEmHc5IaEHRiH1sQVnId2moUQzhyeJZTtdY30,40383 +jinja2/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +jinja2/runtime.py,sha256=gDk-GvdriJXqgsGbHgrcKTP0Yp6zPXzhzrIpCFH3jAU,34249 +jinja2/sandbox.py,sha256=Mw2aitlY2I8la7FYhcX2YG9BtUYcLnD0Gh3d29cDWrY,15009 +jinja2/tests.py,sha256=VLsBhVFnWg-PxSBz1MhRnNWgP1ovXk3neO1FLQMeC9Q,5926 +jinja2/utils.py,sha256=rRp3o9e7ZKS4fyrWRbELyLcpuGVTFcnooaOa1qx_FIk,24129 +jinja2/visitor.py,sha256=EcnL1PIwf_4RVCOMxsRNuR8AXHbS1qfAdMOE2ngKJz4,3557 diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/WHEEL b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/WHEEL new file mode 100644 index 000000000..23d2d7e9a --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: flit 3.11.0 +Root-Is-Purelib: true +Tag: py3-none-any diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/entry_points.txt b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/entry_points.txt new file mode 100644 index 000000000..abc3eae3b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[babel.extractors] +jinja2=jinja2.ext:babel_extract[i18n] + diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/licenses/LICENSE.txt b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/licenses/LICENSE.txt new file mode 100644 index 000000000..c37cae49e --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2-3.1.6.dist-info/licenses/LICENSE.txt @@ -0,0 +1,28 @@ +Copyright 2007 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/__init__.py b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/__init__.py new file mode 100644 index 000000000..1a423a3ea --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/__init__.py @@ -0,0 +1,38 @@ +"""Jinja is a template engine written in pure Python. It provides a +non-XML syntax that supports inline expressions and an optional +sandboxed environment. +""" + +from .bccache import BytecodeCache as BytecodeCache +from .bccache import FileSystemBytecodeCache as FileSystemBytecodeCache +from .bccache import MemcachedBytecodeCache as MemcachedBytecodeCache +from .environment import Environment as Environment +from .environment import Template as Template +from .exceptions import TemplateAssertionError as TemplateAssertionError +from .exceptions import TemplateError as TemplateError +from .exceptions import TemplateNotFound as TemplateNotFound +from .exceptions import TemplateRuntimeError as TemplateRuntimeError +from .exceptions import TemplatesNotFound as TemplatesNotFound +from .exceptions import TemplateSyntaxError as TemplateSyntaxError +from .exceptions import UndefinedError as UndefinedError +from .loaders import BaseLoader as BaseLoader +from .loaders import ChoiceLoader as ChoiceLoader +from .loaders import DictLoader as DictLoader +from .loaders import FileSystemLoader as FileSystemLoader +from .loaders import FunctionLoader as FunctionLoader +from .loaders import ModuleLoader as ModuleLoader +from .loaders import PackageLoader as PackageLoader +from .loaders import PrefixLoader as PrefixLoader +from .runtime import ChainableUndefined as ChainableUndefined +from .runtime import DebugUndefined as DebugUndefined +from .runtime import make_logging_undefined as make_logging_undefined +from .runtime import StrictUndefined as StrictUndefined +from .runtime import Undefined as Undefined +from .utils import clear_caches as clear_caches +from .utils import is_undefined as is_undefined +from .utils import pass_context as pass_context +from .utils import pass_environment as pass_environment +from .utils import pass_eval_context as pass_eval_context +from .utils import select_autoescape as select_autoescape + +__version__ = "3.1.6" diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/_identifier.py b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/_identifier.py new file mode 100644 index 000000000..928c1503c --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/_identifier.py @@ -0,0 +1,6 @@ +import re + +# generated by scripts/generate_identifier_pattern.py +pattern = re.compile( + r"[\w·̀-ͯ·҃-֑҇-ׇֽֿׁׂׅׄؐ-ًؚ-ٰٟۖ-ۜ۟-۪ۤۧۨ-ܑۭܰ-݊ަ-ް߫-߽߳ࠖ-࠙ࠛ-ࠣࠥ-ࠧࠩ-࡙࠭-࡛࣓-ࣣ࣡-ःऺ-़ा-ॏ॑-ॗॢॣঁ-ঃ়া-ৄেৈো-্ৗৢৣ৾ਁ-ਃ਼ਾ-ੂੇੈੋ-੍ੑੰੱੵઁ-ઃ઼ા-ૅે-ૉો-્ૢૣૺ-૿ଁ-ଃ଼ା-ୄେୈୋ-୍ୖୗୢୣஂா-ூெ-ைொ-்ௗఀ-ఄా-ౄె-ైొ-్ౕౖౢౣಁ-ಃ಼ಾ-ೄೆ-ೈೊ-್ೕೖೢೣഀ-ഃ഻഼ാ-ൄെ-ൈൊ-്ൗൢൣංඃ්ා-ුූෘ-ෟෲෳัิ-ฺ็-๎ັິ-ູົຼ່-ໍ༹༘༙༵༷༾༿ཱ-྄྆྇ྍ-ྗྙ-ྼ࿆ါ-ှၖ-ၙၞ-ၠၢ-ၤၧ-ၭၱ-ၴႂ-ႍႏႚ-ႝ፝-፟ᜒ-᜔ᜲ-᜴ᝒᝓᝲᝳ឴-៓៝᠋-᠍ᢅᢆᢩᤠ-ᤫᤰ-᤻ᨗ-ᨛᩕ-ᩞ᩠-᩿᩼᪰-᪽ᬀ-ᬄ᬴-᭄᭫-᭳ᮀ-ᮂᮡ-ᮭ᯦-᯳ᰤ-᰷᳐-᳔᳒-᳨᳭ᳲ-᳴᳷-᳹᷀-᷹᷻-᷿‿⁀⁔⃐-⃥⃜⃡-⃰℘℮⳯-⵿⳱ⷠ-〪ⷿ-゙゚〯꙯ꙴ-꙽ꚞꚟ꛰꛱ꠂ꠆ꠋꠣ-ꠧꢀꢁꢴ-ꣅ꣠-꣱ꣿꤦ-꤭ꥇ-꥓ꦀ-ꦃ꦳-꧀ꧥꨩ-ꨶꩃꩌꩍꩻ-ꩽꪰꪲ-ꪴꪷꪸꪾ꪿꫁ꫫ-ꫯꫵ꫶ꯣ-ꯪ꯬꯭ﬞ︀-️︠-︯︳︴﹍-﹏_𐇽𐋠𐍶-𐍺𐨁-𐨃𐨅𐨆𐨌-𐨏𐨸-𐨿𐨺𐫦𐫥𐴤-𐽆𐴧-𐽐𑀀-𑀂𑀸-𑁆𑁿-𑂂𑂰-𑂺𑄀-𑄂𑄧-𑄴𑅅𑅆𑅳𑆀-𑆂𑆳-𑇀𑇉-𑇌𑈬-𑈷𑈾𑋟-𑋪𑌀-𑌃𑌻𑌼𑌾-𑍄𑍇𑍈𑍋-𑍍𑍗𑍢𑍣𑍦-𑍬𑍰-𑍴𑐵-𑑆𑑞𑒰-𑓃𑖯-𑖵𑖸-𑗀𑗜𑗝𑘰-𑙀𑚫-𑚷𑜝-𑜫𑠬-𑠺𑨁-𑨊𑨳-𑨹𑨻-𑨾𑩇𑩑-𑩛𑪊-𑪙𑰯-𑰶𑰸-𑰿𑲒-𑲧𑲩-𑲶𑴱-𑴶𑴺𑴼𑴽𑴿-𑵅𑵇𑶊-𑶎𑶐𑶑𑶓-𑶗𑻳-𑻶𖫰-𖫴𖬰-𖬶𖽑-𖽾𖾏-𖾒𛲝𛲞𝅥-𝅩𝅭-𝅲𝅻-𝆂𝆅-𝆋𝆪-𝆭𝉂-𝉄𝨀-𝨶𝨻-𝩬𝩵𝪄𝪛-𝪟𝪡-𝪯𞀀-𞀆𞀈-𞀘𞀛-𞀡𞀣𞀤𞀦-𞣐𞀪-𞣖𞥄-𞥊󠄀-󠇯]+" # noqa: B950 +) diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/async_utils.py 
b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/async_utils.py new file mode 100644 index 000000000..f0c140205 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/async_utils.py @@ -0,0 +1,99 @@ +import inspect +import typing as t +from functools import WRAPPER_ASSIGNMENTS +from functools import wraps + +from .utils import _PassArg +from .utils import pass_eval_context + +if t.TYPE_CHECKING: + import typing_extensions as te + +V = t.TypeVar("V") + + +def async_variant(normal_func): # type: ignore + def decorator(async_func): # type: ignore + pass_arg = _PassArg.from_obj(normal_func) + need_eval_context = pass_arg is None + + if pass_arg is _PassArg.environment: + + def is_async(args: t.Any) -> bool: + return t.cast(bool, args[0].is_async) + + else: + + def is_async(args: t.Any) -> bool: + return t.cast(bool, args[0].environment.is_async) + + # Take the doc and annotations from the sync function, but the + # name from the async function. Pallets-Sphinx-Themes + # build_function_directive expects __wrapped__ to point to the + # sync function. + async_func_attrs = ("__module__", "__name__", "__qualname__") + normal_func_attrs = tuple(set(WRAPPER_ASSIGNMENTS).difference(async_func_attrs)) + + @wraps(normal_func, assigned=normal_func_attrs) + @wraps(async_func, assigned=async_func_attrs, updated=()) + def wrapper(*args, **kwargs): # type: ignore + b = is_async(args) + + if need_eval_context: + args = args[1:] + + if b: + return async_func(*args, **kwargs) + + return normal_func(*args, **kwargs) + + if need_eval_context: + wrapper = pass_eval_context(wrapper) + + wrapper.jinja_async_variant = True # type: ignore[attr-defined] + return wrapper + + return decorator + + +_common_primitives = {int, float, bool, str, list, dict, tuple, type(None)} + + +async def auto_await(value: t.Union[t.Awaitable["V"], "V"]) -> "V": + # Avoid a costly call to isawaitable + if type(value) in _common_primitives: + return t.cast("V", value) + + if inspect.isawaitable(value): + return await t.cast("t.Awaitable[V]", value) + + return value + + +class _IteratorToAsyncIterator(t.Generic[V]): + def __init__(self, iterator: "t.Iterator[V]"): + self._iterator = iterator + + def __aiter__(self) -> "te.Self": + return self + + async def __anext__(self) -> V: + try: + return next(self._iterator) + except StopIteration as e: + raise StopAsyncIteration(e.value) from e + + +def auto_aiter( + iterable: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", +) -> "t.AsyncIterator[V]": + if hasattr(iterable, "__aiter__"): + return iterable.__aiter__() + else: + return _IteratorToAsyncIterator(iter(iterable)) + + +async def auto_to_list( + value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", +) -> t.List["V"]: + return [x async for x in auto_aiter(value)] diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/bccache.py b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/bccache.py new file mode 100644 index 000000000..ada8b099f --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/bccache.py @@ -0,0 +1,408 @@ +"""The optional bytecode cache system. This is useful if you have very +complex template situations and the compilation of all those templates +slows down your application too much. + +Situations where this is useful are often forking web applications that +are initialized on the first request. 
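+
+A cache is enabled by passing an instance to the environment, for example
+(a minimal sketch)::
+
+    from jinja2 import Environment, FileSystemBytecodeCache
+
+    env = Environment(bytecode_cache=FileSystemBytecodeCache())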
+""" + +import errno +import fnmatch +import marshal +import os +import pickle +import stat +import sys +import tempfile +import typing as t +from hashlib import sha1 +from io import BytesIO +from types import CodeType + +if t.TYPE_CHECKING: + import typing_extensions as te + + from .environment import Environment + + class _MemcachedClient(te.Protocol): + def get(self, key: str) -> bytes: ... + + def set( + self, key: str, value: bytes, timeout: t.Optional[int] = None + ) -> None: ... + + +bc_version = 5 +# Magic bytes to identify Jinja bytecode cache files. Contains the +# Python major and minor version to avoid loading incompatible bytecode +# if a project upgrades its Python version. +bc_magic = ( + b"j2" + + pickle.dumps(bc_version, 2) + + pickle.dumps((sys.version_info[0] << 24) | sys.version_info[1], 2) +) + + +class Bucket: + """Buckets are used to store the bytecode for one template. It's created + and initialized by the bytecode cache and passed to the loading functions. + + The buckets get an internal checksum from the cache assigned and use this + to automatically reject outdated cache material. Individual bytecode + cache subclasses don't have to care about cache invalidation. + """ + + def __init__(self, environment: "Environment", key: str, checksum: str) -> None: + self.environment = environment + self.key = key + self.checksum = checksum + self.reset() + + def reset(self) -> None: + """Resets the bucket (unloads the bytecode).""" + self.code: t.Optional[CodeType] = None + + def load_bytecode(self, f: t.BinaryIO) -> None: + """Loads bytecode from a file or file like object.""" + # make sure the magic header is correct + magic = f.read(len(bc_magic)) + if magic != bc_magic: + self.reset() + return + # the source code of the file changed, we need to reload + checksum = pickle.load(f) + if self.checksum != checksum: + self.reset() + return + # if marshal_load fails then we need to reload + try: + self.code = marshal.load(f) + except (EOFError, ValueError, TypeError): + self.reset() + return + + def write_bytecode(self, f: t.IO[bytes]) -> None: + """Dump the bytecode into the file or file like object passed.""" + if self.code is None: + raise TypeError("can't write empty bucket") + f.write(bc_magic) + pickle.dump(self.checksum, f, 2) + marshal.dump(self.code, f) + + def bytecode_from_string(self, string: bytes) -> None: + """Load bytecode from bytes.""" + self.load_bytecode(BytesIO(string)) + + def bytecode_to_string(self) -> bytes: + """Return the bytecode as bytes.""" + out = BytesIO() + self.write_bytecode(out) + return out.getvalue() + + +class BytecodeCache: + """To implement your own bytecode cache you have to subclass this class + and override :meth:`load_bytecode` and :meth:`dump_bytecode`. Both of + these methods are passed a :class:`~jinja2.bccache.Bucket`. + + A very basic bytecode cache that saves the bytecode on the file system:: + + from os import path + + class MyCache(BytecodeCache): + + def __init__(self, directory): + self.directory = directory + + def load_bytecode(self, bucket): + filename = path.join(self.directory, bucket.key) + if path.exists(filename): + with open(filename, 'rb') as f: + bucket.load_bytecode(f) + + def dump_bytecode(self, bucket): + filename = path.join(self.directory, bucket.key) + with open(filename, 'wb') as f: + bucket.write_bytecode(f) + + A more advanced version of a filesystem based bytecode cache is part of + Jinja. 
+ """ + + def load_bytecode(self, bucket: Bucket) -> None: + """Subclasses have to override this method to load bytecode into a + bucket. If they are not able to find code in the cache for the + bucket, it must not do anything. + """ + raise NotImplementedError() + + def dump_bytecode(self, bucket: Bucket) -> None: + """Subclasses have to override this method to write the bytecode + from a bucket back to the cache. If it unable to do so it must not + fail silently but raise an exception. + """ + raise NotImplementedError() + + def clear(self) -> None: + """Clears the cache. This method is not used by Jinja but should be + implemented to allow applications to clear the bytecode cache used + by a particular environment. + """ + + def get_cache_key( + self, name: str, filename: t.Optional[t.Union[str]] = None + ) -> str: + """Returns the unique hash key for this template name.""" + hash = sha1(name.encode("utf-8")) + + if filename is not None: + hash.update(f"|{filename}".encode()) + + return hash.hexdigest() + + def get_source_checksum(self, source: str) -> str: + """Returns a checksum for the source.""" + return sha1(source.encode("utf-8")).hexdigest() + + def get_bucket( + self, + environment: "Environment", + name: str, + filename: t.Optional[str], + source: str, + ) -> Bucket: + """Return a cache bucket for the given template. All arguments are + mandatory but filename may be `None`. + """ + key = self.get_cache_key(name, filename) + checksum = self.get_source_checksum(source) + bucket = Bucket(environment, key, checksum) + self.load_bytecode(bucket) + return bucket + + def set_bucket(self, bucket: Bucket) -> None: + """Put the bucket into the cache.""" + self.dump_bytecode(bucket) + + +class FileSystemBytecodeCache(BytecodeCache): + """A bytecode cache that stores bytecode on the filesystem. It accepts + two arguments: The directory where the cache items are stored and a + pattern string that is used to build the filename. + + If no directory is specified a default cache directory is selected. On + Windows the user's temp directory is used, on UNIX systems a directory + is created for the user in the system temp directory. + + The pattern can be used to have multiple separate caches operate on the + same directory. The default pattern is ``'__jinja2_%s.cache'``. ``%s`` + is replaced with the cache key. + + >>> bcc = FileSystemBytecodeCache('/tmp/jinja_cache', '%s.cache') + + This bytecode cache supports clearing of the cache using the clear method. + """ + + def __init__( + self, directory: t.Optional[str] = None, pattern: str = "__jinja2_%s.cache" + ) -> None: + if directory is None: + directory = self._get_default_cache_dir() + self.directory = directory + self.pattern = pattern + + def _get_default_cache_dir(self) -> str: + def _unsafe_dir() -> "te.NoReturn": + raise RuntimeError( + "Cannot determine safe temp directory. You " + "need to explicitly provide one." + ) + + tmpdir = tempfile.gettempdir() + + # On windows the temporary directory is used specific unless + # explicitly forced otherwise. We can just use that. 
+ if os.name == "nt": + return tmpdir + if not hasattr(os, "getuid"): + _unsafe_dir() + + dirname = f"_jinja2-cache-{os.getuid()}" + actual_dir = os.path.join(tmpdir, dirname) + + try: + os.mkdir(actual_dir, stat.S_IRWXU) + except OSError as e: + if e.errno != errno.EEXIST: + raise + try: + os.chmod(actual_dir, stat.S_IRWXU) + actual_dir_stat = os.lstat(actual_dir) + if ( + actual_dir_stat.st_uid != os.getuid() + or not stat.S_ISDIR(actual_dir_stat.st_mode) + or stat.S_IMODE(actual_dir_stat.st_mode) != stat.S_IRWXU + ): + _unsafe_dir() + except OSError as e: + if e.errno != errno.EEXIST: + raise + + actual_dir_stat = os.lstat(actual_dir) + if ( + actual_dir_stat.st_uid != os.getuid() + or not stat.S_ISDIR(actual_dir_stat.st_mode) + or stat.S_IMODE(actual_dir_stat.st_mode) != stat.S_IRWXU + ): + _unsafe_dir() + + return actual_dir + + def _get_cache_filename(self, bucket: Bucket) -> str: + return os.path.join(self.directory, self.pattern % (bucket.key,)) + + def load_bytecode(self, bucket: Bucket) -> None: + filename = self._get_cache_filename(bucket) + + # Don't test for existence before opening the file, since the + # file could disappear after the test before the open. + try: + f = open(filename, "rb") + except (FileNotFoundError, IsADirectoryError, PermissionError): + # PermissionError can occur on Windows when an operation is + # in progress, such as calling clear(). + return + + with f: + bucket.load_bytecode(f) + + def dump_bytecode(self, bucket: Bucket) -> None: + # Write to a temporary file, then rename to the real name after + # writing. This avoids another process reading the file before + # it is fully written. + name = self._get_cache_filename(bucket) + f = tempfile.NamedTemporaryFile( + mode="wb", + dir=os.path.dirname(name), + prefix=os.path.basename(name), + suffix=".tmp", + delete=False, + ) + + def remove_silent() -> None: + try: + os.remove(f.name) + except OSError: + # Another process may have called clear(). On Windows, + # another program may be holding the file open. + pass + + try: + with f: + bucket.write_bytecode(f) + except BaseException: + remove_silent() + raise + + try: + os.replace(f.name, name) + except OSError: + # Another process may have called clear(). On Windows, + # another program may be holding the file open. + remove_silent() + except BaseException: + remove_silent() + raise + + def clear(self) -> None: + # imported lazily here because google app-engine doesn't support + # write access on the file system and the function does not exist + # normally. + from os import remove + + files = fnmatch.filter(os.listdir(self.directory), self.pattern % ("*",)) + for filename in files: + try: + remove(os.path.join(self.directory, filename)) + except OSError: + pass + + +class MemcachedBytecodeCache(BytecodeCache): + """This class implements a bytecode cache that uses a memcache cache for + storing the information. It does not enforce a specific memcache library + (tummy's memcache or cmemcache) but will accept any class that provides + the minimal interface required. + + Libraries compatible with this class: + + - `cachelib `_ + - `python-memcached `_ + + (Unfortunately the django cache interface is not compatible because it + does not support storing binary data, only text. You can however pass + the underlying cache client to the bytecode cache which is available + as `django.core.cache.cache._client`.) + + The minimal interface for the client passed to the constructor is this: + + .. class:: MinimalClientInterface + + .. 
method:: set(key, value[, timeout]) + + Stores the bytecode in the cache. `value` is a string and + `timeout` the timeout of the key. If timeout is not provided + a default timeout or no timeout should be assumed, if it's + provided it's an integer with the number of seconds the cache + item should exist. + + .. method:: get(key) + + Returns the value for the cache key. If the item does not + exist in the cache the return value must be `None`. + + The other arguments to the constructor are the prefix for all keys that + is added before the actual cache key and the timeout for the bytecode in + the cache system. We recommend a high (or no) timeout. + + This bytecode cache does not support clearing of used items in the cache. + The clear method is a no-operation function. + + .. versionadded:: 2.7 + Added support for ignoring memcache errors through the + `ignore_memcache_errors` parameter. + """ + + def __init__( + self, + client: "_MemcachedClient", + prefix: str = "jinja2/bytecode/", + timeout: t.Optional[int] = None, + ignore_memcache_errors: bool = True, + ): + self.client = client + self.prefix = prefix + self.timeout = timeout + self.ignore_memcache_errors = ignore_memcache_errors + + def load_bytecode(self, bucket: Bucket) -> None: + try: + code = self.client.get(self.prefix + bucket.key) + except Exception: + if not self.ignore_memcache_errors: + raise + else: + bucket.bytecode_from_string(code) + + def dump_bytecode(self, bucket: Bucket) -> None: + key = self.prefix + bucket.key + value = bucket.bytecode_to_string() + + try: + if self.timeout is not None: + self.client.set(key, value, self.timeout) + else: + self.client.set(key, value) + except Exception: + if not self.ignore_memcache_errors: + raise diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/compiler.py b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/compiler.py new file mode 100644 index 000000000..a4ff6a1b1 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/compiler.py @@ -0,0 +1,1998 @@ +"""Compiles nodes from the parser into Python code.""" + +import typing as t +from contextlib import contextmanager +from functools import update_wrapper +from io import StringIO +from itertools import chain +from keyword import iskeyword as is_python_keyword + +from markupsafe import escape +from markupsafe import Markup + +from . 
import nodes +from .exceptions import TemplateAssertionError +from .idtracking import Symbols +from .idtracking import VAR_LOAD_ALIAS +from .idtracking import VAR_LOAD_PARAMETER +from .idtracking import VAR_LOAD_RESOLVE +from .idtracking import VAR_LOAD_UNDEFINED +from .nodes import EvalContext +from .optimizer import Optimizer +from .utils import _PassArg +from .utils import concat +from .visitor import NodeVisitor + +if t.TYPE_CHECKING: + import typing_extensions as te + + from .environment import Environment + +F = t.TypeVar("F", bound=t.Callable[..., t.Any]) + +operators = { + "eq": "==", + "ne": "!=", + "gt": ">", + "gteq": ">=", + "lt": "<", + "lteq": "<=", + "in": "in", + "notin": "not in", +} + + +def optimizeconst(f: F) -> F: + def new_func( + self: "CodeGenerator", node: nodes.Expr, frame: "Frame", **kwargs: t.Any + ) -> t.Any: + # Only optimize if the frame is not volatile + if self.optimizer is not None and not frame.eval_ctx.volatile: + new_node = self.optimizer.visit(node, frame.eval_ctx) + + if new_node != node: + return self.visit(new_node, frame) + + return f(self, node, frame, **kwargs) + + return update_wrapper(new_func, f) # type: ignore[return-value] + + +def _make_binop(op: str) -> t.Callable[["CodeGenerator", nodes.BinExpr, "Frame"], None]: + @optimizeconst + def visitor(self: "CodeGenerator", node: nodes.BinExpr, frame: Frame) -> None: + if ( + self.environment.sandboxed and op in self.environment.intercepted_binops # type: ignore + ): + self.write(f"environment.call_binop(context, {op!r}, ") + self.visit(node.left, frame) + self.write(", ") + self.visit(node.right, frame) + else: + self.write("(") + self.visit(node.left, frame) + self.write(f" {op} ") + self.visit(node.right, frame) + + self.write(")") + + return visitor + + +def _make_unop( + op: str, +) -> t.Callable[["CodeGenerator", nodes.UnaryExpr, "Frame"], None]: + @optimizeconst + def visitor(self: "CodeGenerator", node: nodes.UnaryExpr, frame: Frame) -> None: + if ( + self.environment.sandboxed and op in self.environment.intercepted_unops # type: ignore + ): + self.write(f"environment.call_unop(context, {op!r}, ") + self.visit(node.node, frame) + else: + self.write("(" + op) + self.visit(node.node, frame) + + self.write(")") + + return visitor + + +def generate( + node: nodes.Template, + environment: "Environment", + name: t.Optional[str], + filename: t.Optional[str], + stream: t.Optional[t.TextIO] = None, + defer_init: bool = False, + optimized: bool = True, +) -> t.Optional[str]: + """Generate the python source for a node tree.""" + if not isinstance(node, nodes.Template): + raise TypeError("Can't compile non template nodes") + + generator = environment.code_generator_class( + environment, name, filename, stream, defer_init, optimized + ) + generator.visit(node) + + if stream is None: + return generator.stream.getvalue() # type: ignore + + return None + + +def has_safe_repr(value: t.Any) -> bool: + """Does the node have a safe representation?""" + if value is None or value is NotImplemented or value is Ellipsis: + return True + + if type(value) in {bool, int, float, complex, range, str, Markup}: + return True + + if type(value) in {tuple, list, set, frozenset}: + return all(has_safe_repr(v) for v in value) + + if type(value) is dict: # noqa E721 + return all(has_safe_repr(k) and has_safe_repr(v) for k, v in value.items()) + + return False + + +def find_undeclared( + nodes: t.Iterable[nodes.Node], names: t.Iterable[str] +) -> t.Set[str]: + """Check if the names passed are accessed undeclared. 
The return value
+    is a set of all the undeclared names from the sequence of names found.
+    """
+    visitor = UndeclaredNameVisitor(names)
+    try:
+        for node in nodes:
+            visitor.visit(node)
+    except VisitorExit:
+        pass
+    return visitor.undeclared
+
+
+class MacroRef:
+    def __init__(self, node: t.Union[nodes.Macro, nodes.CallBlock]) -> None:
+        self.node = node
+        self.accesses_caller = False
+        self.accesses_kwargs = False
+        self.accesses_varargs = False
+
+
+class Frame:
+    """Holds compile time information for us."""
+
+    def __init__(
+        self,
+        eval_ctx: EvalContext,
+        parent: t.Optional["Frame"] = None,
+        level: t.Optional[int] = None,
+    ) -> None:
+        self.eval_ctx = eval_ctx
+
+        # the parent of this frame
+        self.parent = parent
+
+        if parent is None:
+            self.symbols = Symbols(level=level)
+
+            # in some dynamic inheritance situations the compiler needs to add
+            # write tests around output statements.
+            self.require_output_check = False
+
+            # inside some tags we are using a buffer rather than yield statements.
+            # this for example affects {% filter %} or {% macro %}. If a frame
+            # is buffered this variable points to the name of the list used as
+            # buffer.
+            self.buffer: t.Optional[str] = None
+
+            # the name of the block we're in, otherwise None.
+            self.block: t.Optional[str] = None
+
+        else:
+            self.symbols = Symbols(parent.symbols, level=level)
+            self.require_output_check = parent.require_output_check
+            self.buffer = parent.buffer
+            self.block = parent.block
+
+        # a toplevel frame is the root + soft frames such as if conditions.
+        self.toplevel = False
+
+        # the root frame is basically just the outermost frame, so no if
+        # conditions. This information is used to optimize inheritance
+        # situations.
+        self.rootlevel = False
+
+        # variables set inside of loops and blocks should not affect outer frames,
+        # but they still need to be kept track of as part of the active context.
+        self.loop_frame = False
+        self.block_frame = False
+
+        # track whether the frame is being used in an if-statement or conditional
+        # expression as it determines which errors should be raised during runtime
+        # or compile time.
+        self.soft_frame = False
+
+    def copy(self) -> "te.Self":
+        """Create a copy of the current one."""
+        rv = object.__new__(self.__class__)
+        rv.__dict__.update(self.__dict__)
+        rv.symbols = self.symbols.copy()
+        return rv
+
+    def inner(self, isolated: bool = False) -> "Frame":
+        """Return an inner frame."""
+        if isolated:
+            return Frame(self.eval_ctx, level=self.symbols.level + 1)
+        return Frame(self.eval_ctx, self)
+
+    def soft(self) -> "te.Self":
+        """Return a soft frame. A soft frame may not be modified as a
+        standalone thing as it shares the resources with the frame it
+        was created from, but it's not a rootlevel frame any longer.
+
+        This is only used to implement if-statements and conditional
+        expressions.
+ """ + rv = self.copy() + rv.rootlevel = False + rv.soft_frame = True + return rv + + __copy__ = copy + + +class VisitorExit(RuntimeError): + """Exception used by the `UndeclaredNameVisitor` to signal a stop.""" + + +class DependencyFinderVisitor(NodeVisitor): + """A visitor that collects filter and test calls.""" + + def __init__(self) -> None: + self.filters: t.Set[str] = set() + self.tests: t.Set[str] = set() + + def visit_Filter(self, node: nodes.Filter) -> None: + self.generic_visit(node) + self.filters.add(node.name) + + def visit_Test(self, node: nodes.Test) -> None: + self.generic_visit(node) + self.tests.add(node.name) + + def visit_Block(self, node: nodes.Block) -> None: + """Stop visiting at blocks.""" + + +class UndeclaredNameVisitor(NodeVisitor): + """A visitor that checks if a name is accessed without being + declared. This is different from the frame visitor as it will + not stop at closure frames. + """ + + def __init__(self, names: t.Iterable[str]) -> None: + self.names = set(names) + self.undeclared: t.Set[str] = set() + + def visit_Name(self, node: nodes.Name) -> None: + if node.ctx == "load" and node.name in self.names: + self.undeclared.add(node.name) + if self.undeclared == self.names: + raise VisitorExit() + else: + self.names.discard(node.name) + + def visit_Block(self, node: nodes.Block) -> None: + """Stop visiting a blocks.""" + + +class CompilerExit(Exception): + """Raised if the compiler encountered a situation where it just + doesn't make sense to further process the code. Any block that + raises such an exception is not further processed. + """ + + +class CodeGenerator(NodeVisitor): + def __init__( + self, + environment: "Environment", + name: t.Optional[str], + filename: t.Optional[str], + stream: t.Optional[t.TextIO] = None, + defer_init: bool = False, + optimized: bool = True, + ) -> None: + if stream is None: + stream = StringIO() + self.environment = environment + self.name = name + self.filename = filename + self.stream = stream + self.created_block_context = False + self.defer_init = defer_init + self.optimizer: t.Optional[Optimizer] = None + + if optimized: + self.optimizer = Optimizer(environment) + + # aliases for imports + self.import_aliases: t.Dict[str, str] = {} + + # a registry for all blocks. Because blocks are moved out + # into the global python scope they are registered here + self.blocks: t.Dict[str, nodes.Block] = {} + + # the number of extends statements so far + self.extends_so_far = 0 + + # some templates have a rootlevel extends. In this case we + # can safely assume that we're a child template and do some + # more optimizations. + self.has_known_extends = False + + # the current line number + self.code_lineno = 1 + + # registry of all filters and tests (global, not block local) + self.tests: t.Dict[str, str] = {} + self.filters: t.Dict[str, str] = {} + + # the debug information + self.debug_info: t.List[t.Tuple[int, int]] = [] + self._write_debug_info: t.Optional[int] = None + + # the number of new lines before the next write() + self._new_lines = 0 + + # the line number of the last written statement + self._last_line = 0 + + # true if nothing was written so far. 
+ self._first_write = True + + # used by the `temporary_identifier` method to get new + # unique, temporary identifier + self._last_identifier = 0 + + # the current indentation + self._indentation = 0 + + # Tracks toplevel assignments + self._assign_stack: t.List[t.Set[str]] = [] + + # Tracks parameter definition blocks + self._param_def_block: t.List[t.Set[str]] = [] + + # Tracks the current context. + self._context_reference_stack = ["context"] + + @property + def optimized(self) -> bool: + return self.optimizer is not None + + # -- Various compilation helpers + + def fail(self, msg: str, lineno: int) -> "te.NoReturn": + """Fail with a :exc:`TemplateAssertionError`.""" + raise TemplateAssertionError(msg, lineno, self.name, self.filename) + + def temporary_identifier(self) -> str: + """Get a new unique identifier.""" + self._last_identifier += 1 + return f"t_{self._last_identifier}" + + def buffer(self, frame: Frame) -> None: + """Enable buffering for the frame from that point onwards.""" + frame.buffer = self.temporary_identifier() + self.writeline(f"{frame.buffer} = []") + + def return_buffer_contents( + self, frame: Frame, force_unescaped: bool = False + ) -> None: + """Return the buffer contents of the frame.""" + if not force_unescaped: + if frame.eval_ctx.volatile: + self.writeline("if context.eval_ctx.autoescape:") + self.indent() + self.writeline(f"return Markup(concat({frame.buffer}))") + self.outdent() + self.writeline("else:") + self.indent() + self.writeline(f"return concat({frame.buffer})") + self.outdent() + return + elif frame.eval_ctx.autoescape: + self.writeline(f"return Markup(concat({frame.buffer}))") + return + self.writeline(f"return concat({frame.buffer})") + + def indent(self) -> None: + """Indent by one.""" + self._indentation += 1 + + def outdent(self, step: int = 1) -> None: + """Outdent by step.""" + self._indentation -= step + + def start_write(self, frame: Frame, node: t.Optional[nodes.Node] = None) -> None: + """Yield or write into the frame buffer.""" + if frame.buffer is None: + self.writeline("yield ", node) + else: + self.writeline(f"{frame.buffer}.append(", node) + + def end_write(self, frame: Frame) -> None: + """End the writing process started by `start_write`.""" + if frame.buffer is not None: + self.write(")") + + def simple_write( + self, s: str, frame: Frame, node: t.Optional[nodes.Node] = None + ) -> None: + """Simple shortcut for start_write + write + end_write.""" + self.start_write(frame, node) + self.write(s) + self.end_write(frame) + + def blockvisit(self, nodes: t.Iterable[nodes.Node], frame: Frame) -> None: + """Visit a list of nodes as block in a frame. If the current frame + is no buffer a dummy ``if 0: yield None`` is written automatically. 
+ """ + try: + self.writeline("pass") + for node in nodes: + self.visit(node, frame) + except CompilerExit: + pass + + def write(self, x: str) -> None: + """Write a string into the output stream.""" + if self._new_lines: + if not self._first_write: + self.stream.write("\n" * self._new_lines) + self.code_lineno += self._new_lines + if self._write_debug_info is not None: + self.debug_info.append((self._write_debug_info, self.code_lineno)) + self._write_debug_info = None + self._first_write = False + self.stream.write(" " * self._indentation) + self._new_lines = 0 + self.stream.write(x) + + def writeline( + self, x: str, node: t.Optional[nodes.Node] = None, extra: int = 0 + ) -> None: + """Combination of newline and write.""" + self.newline(node, extra) + self.write(x) + + def newline(self, node: t.Optional[nodes.Node] = None, extra: int = 0) -> None: + """Add one or more newlines before the next write.""" + self._new_lines = max(self._new_lines, 1 + extra) + if node is not None and node.lineno != self._last_line: + self._write_debug_info = node.lineno + self._last_line = node.lineno + + def signature( + self, + node: t.Union[nodes.Call, nodes.Filter, nodes.Test], + frame: Frame, + extra_kwargs: t.Optional[t.Mapping[str, t.Any]] = None, + ) -> None: + """Writes a function call to the stream for the current node. + A leading comma is added automatically. The extra keyword + arguments may not include python keywords otherwise a syntax + error could occur. The extra keyword arguments should be given + as python dict. + """ + # if any of the given keyword arguments is a python keyword + # we have to make sure that no invalid call is created. + kwarg_workaround = any( + is_python_keyword(t.cast(str, k)) + for k in chain((x.key for x in node.kwargs), extra_kwargs or ()) + ) + + for arg in node.args: + self.write(", ") + self.visit(arg, frame) + + if not kwarg_workaround: + for kwarg in node.kwargs: + self.write(", ") + self.visit(kwarg, frame) + if extra_kwargs is not None: + for key, value in extra_kwargs.items(): + self.write(f", {key}={value}") + if node.dyn_args: + self.write(", *") + self.visit(node.dyn_args, frame) + + if kwarg_workaround: + if node.dyn_kwargs is not None: + self.write(", **dict({") + else: + self.write(", **{") + for kwarg in node.kwargs: + self.write(f"{kwarg.key!r}: ") + self.visit(kwarg.value, frame) + self.write(", ") + if extra_kwargs is not None: + for key, value in extra_kwargs.items(): + self.write(f"{key!r}: {value}, ") + if node.dyn_kwargs is not None: + self.write("}, **") + self.visit(node.dyn_kwargs, frame) + self.write(")") + else: + self.write("}") + + elif node.dyn_kwargs is not None: + self.write(", **") + self.visit(node.dyn_kwargs, frame) + + def pull_dependencies(self, nodes: t.Iterable[nodes.Node]) -> None: + """Find all filter and test names used in the template and + assign them to variables in the compiled namespace. Checking + that the names are registered with the environment is done when + compiling the Filter and Test nodes. If the node is in an If or + CondExpr node, the check is done at runtime instead. + + .. versionchanged:: 3.0 + Filters and tests in If and CondExpr nodes are checked at + runtime instead of compile time. 
+ """ + visitor = DependencyFinderVisitor() + + for node in nodes: + visitor.visit(node) + + for id_map, names, dependency in ( + (self.filters, visitor.filters, "filters"), + ( + self.tests, + visitor.tests, + "tests", + ), + ): + for name in sorted(names): + if name not in id_map: + id_map[name] = self.temporary_identifier() + + # add check during runtime that dependencies used inside of executed + # blocks are defined, as this step may be skipped during compile time + self.writeline("try:") + self.indent() + self.writeline(f"{id_map[name]} = environment.{dependency}[{name!r}]") + self.outdent() + self.writeline("except KeyError:") + self.indent() + self.writeline("@internalcode") + self.writeline(f"def {id_map[name]}(*unused):") + self.indent() + self.writeline( + f'raise TemplateRuntimeError("No {dependency[:-1]}' + f' named {name!r} found.")' + ) + self.outdent() + self.outdent() + + def enter_frame(self, frame: Frame) -> None: + undefs = [] + for target, (action, param) in frame.symbols.loads.items(): + if action == VAR_LOAD_PARAMETER: + pass + elif action == VAR_LOAD_RESOLVE: + self.writeline(f"{target} = {self.get_resolve_func()}({param!r})") + elif action == VAR_LOAD_ALIAS: + self.writeline(f"{target} = {param}") + elif action == VAR_LOAD_UNDEFINED: + undefs.append(target) + else: + raise NotImplementedError("unknown load instruction") + if undefs: + self.writeline(f"{' = '.join(undefs)} = missing") + + def leave_frame(self, frame: Frame, with_python_scope: bool = False) -> None: + if not with_python_scope: + undefs = [] + for target in frame.symbols.loads: + undefs.append(target) + if undefs: + self.writeline(f"{' = '.join(undefs)} = missing") + + def choose_async(self, async_value: str = "async ", sync_value: str = "") -> str: + return async_value if self.environment.is_async else sync_value + + def func(self, name: str) -> str: + return f"{self.choose_async()}def {name}" + + def macro_body( + self, node: t.Union[nodes.Macro, nodes.CallBlock], frame: Frame + ) -> t.Tuple[Frame, MacroRef]: + """Dump the function def of a macro or call block.""" + frame = frame.inner() + frame.symbols.analyze_node(node) + macro_ref = MacroRef(node) + + explicit_caller = None + skip_special_params = set() + args = [] + + for idx, arg in enumerate(node.args): + if arg.name == "caller": + explicit_caller = idx + if arg.name in ("kwargs", "varargs"): + skip_special_params.add(arg.name) + args.append(frame.symbols.ref(arg.name)) + + undeclared = find_undeclared(node.body, ("caller", "kwargs", "varargs")) + + if "caller" in undeclared: + # In older Jinja versions there was a bug that allowed caller + # to retain the special behavior even if it was mentioned in + # the argument list. However thankfully this was only really + # working if it was the last argument. So we are explicitly + # checking this now and error out if it is anywhere else in + # the argument list. 
+ if explicit_caller is not None: + try: + node.defaults[explicit_caller - len(node.args)] + except IndexError: + self.fail( + "When defining macros or call blocks the " + 'special "caller" argument must be omitted ' + "or be given a default.", + node.lineno, + ) + else: + args.append(frame.symbols.declare_parameter("caller")) + macro_ref.accesses_caller = True + if "kwargs" in undeclared and "kwargs" not in skip_special_params: + args.append(frame.symbols.declare_parameter("kwargs")) + macro_ref.accesses_kwargs = True + if "varargs" in undeclared and "varargs" not in skip_special_params: + args.append(frame.symbols.declare_parameter("varargs")) + macro_ref.accesses_varargs = True + + # macros are delayed, they never require output checks + frame.require_output_check = False + frame.symbols.analyze_node(node) + self.writeline(f"{self.func('macro')}({', '.join(args)}):", node) + self.indent() + + self.buffer(frame) + self.enter_frame(frame) + + self.push_parameter_definitions(frame) + for idx, arg in enumerate(node.args): + ref = frame.symbols.ref(arg.name) + self.writeline(f"if {ref} is missing:") + self.indent() + try: + default = node.defaults[idx - len(node.args)] + except IndexError: + self.writeline( + f'{ref} = undefined("parameter {arg.name!r} was not provided",' + f" name={arg.name!r})" + ) + else: + self.writeline(f"{ref} = ") + self.visit(default, frame) + self.mark_parameter_stored(ref) + self.outdent() + self.pop_parameter_definitions() + + self.blockvisit(node.body, frame) + self.return_buffer_contents(frame, force_unescaped=True) + self.leave_frame(frame, with_python_scope=True) + self.outdent() + + return frame, macro_ref + + def macro_def(self, macro_ref: MacroRef, frame: Frame) -> None: + """Dump the macro definition for the def created by macro_body.""" + arg_tuple = ", ".join(repr(x.name) for x in macro_ref.node.args) + name = getattr(macro_ref.node, "name", None) + if len(macro_ref.node.args) == 1: + arg_tuple += "," + self.write( + f"Macro(environment, macro, {name!r}, ({arg_tuple})," + f" {macro_ref.accesses_kwargs!r}, {macro_ref.accesses_varargs!r}," + f" {macro_ref.accesses_caller!r}, context.eval_ctx.autoescape)" + ) + + def position(self, node: nodes.Node) -> str: + """Return a human readable position for the node.""" + rv = f"line {node.lineno}" + if self.name is not None: + rv = f"{rv} in {self.name!r}" + return rv + + def dump_local_context(self, frame: Frame) -> str: + items_kv = ", ".join( + f"{name!r}: {target}" + for name, target in frame.symbols.dump_stores().items() + ) + return f"{{{items_kv}}}" + + def write_commons(self) -> None: + """Writes a common preamble that is used by root and block functions. + Primarily this sets up common local helpers and enforces a generator + through a dead branch. + """ + self.writeline("resolve = context.resolve_or_missing") + self.writeline("undefined = environment.undefined") + self.writeline("concat = environment.concat") + # always use the standard Undefined class for the implicit else of + # conditional expressions + self.writeline("cond_expr_undefined = Undefined") + self.writeline("if 0: yield None") + + def push_parameter_definitions(self, frame: Frame) -> None: + """Pushes all parameter targets from the given frame into a local + stack that permits tracking of yet to be assigned parameters. In + particular this enables the optimization from `visit_Name` to skip + undefined expressions for parameters in macros as macros can reference + otherwise unbound parameters. 
+ """ + self._param_def_block.append(frame.symbols.dump_param_targets()) + + def pop_parameter_definitions(self) -> None: + """Pops the current parameter definitions set.""" + self._param_def_block.pop() + + def mark_parameter_stored(self, target: str) -> None: + """Marks a parameter in the current parameter definitions as stored. + This will skip the enforced undefined checks. + """ + if self._param_def_block: + self._param_def_block[-1].discard(target) + + def push_context_reference(self, target: str) -> None: + self._context_reference_stack.append(target) + + def pop_context_reference(self) -> None: + self._context_reference_stack.pop() + + def get_context_ref(self) -> str: + return self._context_reference_stack[-1] + + def get_resolve_func(self) -> str: + target = self._context_reference_stack[-1] + if target == "context": + return "resolve" + return f"{target}.resolve" + + def derive_context(self, frame: Frame) -> str: + return f"{self.get_context_ref()}.derived({self.dump_local_context(frame)})" + + def parameter_is_undeclared(self, target: str) -> bool: + """Checks if a given target is an undeclared parameter.""" + if not self._param_def_block: + return False + return target in self._param_def_block[-1] + + def push_assign_tracking(self) -> None: + """Pushes a new layer for assignment tracking.""" + self._assign_stack.append(set()) + + def pop_assign_tracking(self, frame: Frame) -> None: + """Pops the topmost level for assignment tracking and updates the + context variables if necessary. + """ + vars = self._assign_stack.pop() + if ( + not frame.block_frame + and not frame.loop_frame + and not frame.toplevel + or not vars + ): + return + public_names = [x for x in vars if x[:1] != "_"] + if len(vars) == 1: + name = next(iter(vars)) + ref = frame.symbols.ref(name) + if frame.loop_frame: + self.writeline(f"_loop_vars[{name!r}] = {ref}") + return + if frame.block_frame: + self.writeline(f"_block_vars[{name!r}] = {ref}") + return + self.writeline(f"context.vars[{name!r}] = {ref}") + else: + if frame.loop_frame: + self.writeline("_loop_vars.update({") + elif frame.block_frame: + self.writeline("_block_vars.update({") + else: + self.writeline("context.vars.update({") + for idx, name in enumerate(sorted(vars)): + if idx: + self.write(", ") + ref = frame.symbols.ref(name) + self.write(f"{name!r}: {ref}") + self.write("})") + if not frame.block_frame and not frame.loop_frame and public_names: + if len(public_names) == 1: + self.writeline(f"context.exported_vars.add({public_names[0]!r})") + else: + names_str = ", ".join(map(repr, sorted(public_names))) + self.writeline(f"context.exported_vars.update(({names_str}))") + + # -- Statement Visitors + + def visit_Template( + self, node: nodes.Template, frame: t.Optional[Frame] = None + ) -> None: + assert frame is None, "no root frame allowed" + eval_ctx = EvalContext(self.environment, self.name) + + from .runtime import async_exported + from .runtime import exported + + if self.environment.is_async: + exported_names = sorted(exported + async_exported) + else: + exported_names = sorted(exported) + + self.writeline("from jinja2.runtime import " + ", ".join(exported_names)) + + # if we want a deferred initialization we cannot move the + # environment into a local name + envenv = "" if self.defer_init else ", environment=environment" + + # do we have an extends tag at all? If not, we can save some + # overhead by just not processing any inheritance code. 
+ have_extends = node.find(nodes.Extends) is not None + + # find all blocks + for block in node.find_all(nodes.Block): + if block.name in self.blocks: + self.fail(f"block {block.name!r} defined twice", block.lineno) + self.blocks[block.name] = block + + # find all imports and import them + for import_ in node.find_all(nodes.ImportedName): + if import_.importname not in self.import_aliases: + imp = import_.importname + self.import_aliases[imp] = alias = self.temporary_identifier() + if "." in imp: + module, obj = imp.rsplit(".", 1) + self.writeline(f"from {module} import {obj} as {alias}") + else: + self.writeline(f"import {imp} as {alias}") + + # add the load name + self.writeline(f"name = {self.name!r}") + + # generate the root render function. + self.writeline( + f"{self.func('root')}(context, missing=missing{envenv}):", extra=1 + ) + self.indent() + self.write_commons() + + # process the root + frame = Frame(eval_ctx) + if "self" in find_undeclared(node.body, ("self",)): + ref = frame.symbols.declare_parameter("self") + self.writeline(f"{ref} = TemplateReference(context)") + frame.symbols.analyze_node(node) + frame.toplevel = frame.rootlevel = True + frame.require_output_check = have_extends and not self.has_known_extends + if have_extends: + self.writeline("parent_template = None") + self.enter_frame(frame) + self.pull_dependencies(node.body) + self.blockvisit(node.body, frame) + self.leave_frame(frame, with_python_scope=True) + self.outdent() + + # make sure that the parent root is called. + if have_extends: + if not self.has_known_extends: + self.indent() + self.writeline("if parent_template is not None:") + self.indent() + if not self.environment.is_async: + self.writeline("yield from parent_template.root_render_func(context)") + else: + self.writeline("agen = parent_template.root_render_func(context)") + self.writeline("try:") + self.indent() + self.writeline("async for event in agen:") + self.indent() + self.writeline("yield event") + self.outdent() + self.outdent() + self.writeline("finally: await agen.aclose()") + self.outdent(1 + (not self.has_known_extends)) + + # at this point we now have the blocks collected and can visit them too. + for name, block in self.blocks.items(): + self.writeline( + f"{self.func('block_' + name)}(context, missing=missing{envenv}):", + block, + 1, + ) + self.indent() + self.write_commons() + # It's important that we do not make this frame a child of the + # toplevel template. This would cause a variety of + # interesting issues with identifier tracking. 
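Because each block is compiled into its own `block_<name>` function, the easiest way to see this generator's output is `Environment.compile(..., raw=True)`, which returns the generated module source instead of a code object. An illustrative sketch:

```python
from jinja2 import Environment

env = Environment()
src = env.compile(
    "{% block body %}Hello {{ name }}!{% endblock %}", name="demo", raw=True
)
# the printed module contains a root() render function plus a
# block_body() function, mirroring the loop over self.blocks below
print(src)
```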
+ block_frame = Frame(eval_ctx) + block_frame.block_frame = True + undeclared = find_undeclared(block.body, ("self", "super")) + if "self" in undeclared: + ref = block_frame.symbols.declare_parameter("self") + self.writeline(f"{ref} = TemplateReference(context)") + if "super" in undeclared: + ref = block_frame.symbols.declare_parameter("super") + self.writeline(f"{ref} = context.super({name!r}, block_{name})") + block_frame.symbols.analyze_node(block) + block_frame.block = name + self.writeline("_block_vars = {}") + self.enter_frame(block_frame) + self.pull_dependencies(block.body) + self.blockvisit(block.body, block_frame) + self.leave_frame(block_frame, with_python_scope=True) + self.outdent() + + blocks_kv_str = ", ".join(f"{x!r}: block_{x}" for x in self.blocks) + self.writeline(f"blocks = {{{blocks_kv_str}}}", extra=1) + debug_kv_str = "&".join(f"{k}={v}" for k, v in self.debug_info) + self.writeline(f"debug_info = {debug_kv_str!r}") + + def visit_Block(self, node: nodes.Block, frame: Frame) -> None: + """Call a block and register it for the template.""" + level = 0 + if frame.toplevel: + # if we know that we are a child template, there is no need to + # check if we are one + if self.has_known_extends: + return + if self.extends_so_far > 0: + self.writeline("if parent_template is None:") + self.indent() + level += 1 + + if node.scoped: + context = self.derive_context(frame) + else: + context = self.get_context_ref() + + if node.required: + self.writeline(f"if len(context.blocks[{node.name!r}]) <= 1:", node) + self.indent() + self.writeline( + f'raise TemplateRuntimeError("Required block {node.name!r} not found")', + node, + ) + self.outdent() + + if not self.environment.is_async and frame.buffer is None: + self.writeline( + f"yield from context.blocks[{node.name!r}][0]({context})", node + ) + else: + self.writeline(f"gen = context.blocks[{node.name!r}][0]({context})") + self.writeline("try:") + self.indent() + self.writeline( + f"{self.choose_async()}for event in gen:", + node, + ) + self.indent() + self.simple_write("event", frame) + self.outdent() + self.outdent() + self.writeline( + f"finally: {self.choose_async('await gen.aclose()', 'gen.close()')}" + ) + + self.outdent(level) + + def visit_Extends(self, node: nodes.Extends, frame: Frame) -> None: + """Calls the extender.""" + if not frame.toplevel: + self.fail("cannot use extend from a non top-level scope", node.lineno) + + # if the number of extends statements in general is zero so + # far, we don't have to add a check if something extended + # the template before this one. + if self.extends_so_far > 0: + # if we have a known extends we just add a template runtime + # error into the generated code. We could catch that at compile + # time too, but i welcome it not to confuse users by throwing the + # same error at different times just "because we can". + if not self.has_known_extends: + self.writeline("if parent_template is not None:") + self.indent() + self.writeline('raise TemplateRuntimeError("extended multiple times")') + + # if we have a known extends already we don't need that code here + # as we know that the template execution will end here. 
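The inheritance machinery compiled by `visit_Block` and `visit_Extends` corresponds to the familiar template-level behavior; a minimal stand-alone sketch:

```python
from jinja2 import Environment, DictLoader

env = Environment(loader=DictLoader({
    "base.html": "[{% block body %}base{% endblock %}]",
    "child.html": "{% extends 'base.html' %}{% block body %}child{% endblock %}",
}))
# the child's block_body overrides the parent's via context.blocks
print(env.get_template("child.html").render())  # -> [child]
```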
+ if self.has_known_extends: + raise CompilerExit() + else: + self.outdent() + + self.writeline("parent_template = environment.get_template(", node) + self.visit(node.template, frame) + self.write(f", {self.name!r})") + self.writeline("for name, parent_block in parent_template.blocks.items():") + self.indent() + self.writeline("context.blocks.setdefault(name, []).append(parent_block)") + self.outdent() + + # if this extends statement was in the root level we can take + # advantage of that information and simplify the generated code + # in the top level from this point onwards + if frame.rootlevel: + self.has_known_extends = True + + # and now we have one more + self.extends_so_far += 1 + + def visit_Include(self, node: nodes.Include, frame: Frame) -> None: + """Handles includes.""" + if node.ignore_missing: + self.writeline("try:") + self.indent() + + func_name = "get_or_select_template" + if isinstance(node.template, nodes.Const): + if isinstance(node.template.value, str): + func_name = "get_template" + elif isinstance(node.template.value, (tuple, list)): + func_name = "select_template" + elif isinstance(node.template, (nodes.Tuple, nodes.List)): + func_name = "select_template" + + self.writeline(f"template = environment.{func_name}(", node) + self.visit(node.template, frame) + self.write(f", {self.name!r})") + if node.ignore_missing: + self.outdent() + self.writeline("except TemplateNotFound:") + self.indent() + self.writeline("pass") + self.outdent() + self.writeline("else:") + self.indent() + + def loop_body() -> None: + self.indent() + self.simple_write("event", frame) + self.outdent() + + if node.with_context: + self.writeline( + f"gen = template.root_render_func(" + "template.new_context(context.get_all(), True," + f" {self.dump_local_context(frame)}))" + ) + self.writeline("try:") + self.indent() + self.writeline(f"{self.choose_async()}for event in gen:") + loop_body() + self.outdent() + self.writeline( + f"finally: {self.choose_async('await gen.aclose()', 'gen.close()')}" + ) + elif self.environment.is_async: + self.writeline( + "for event in (await template._get_default_module_async())" + "._body_stream:" + ) + loop_body() + else: + self.writeline("yield from template._get_default_module()._body_stream") + + if node.ignore_missing: + self.outdent() + + def _import_common( + self, node: t.Union[nodes.Import, nodes.FromImport], frame: Frame + ) -> None: + self.write(f"{self.choose_async('await ')}environment.get_template(") + self.visit(node.template, frame) + self.write(f", {self.name!r}).") + + if node.with_context: + f_name = f"make_module{self.choose_async('_async')}" + self.write( + f"{f_name}(context.get_all(), True, {self.dump_local_context(frame)})" + ) + else: + self.write(f"_get_default_module{self.choose_async('_async')}(context)") + + def visit_Import(self, node: nodes.Import, frame: Frame) -> None: + """Visit regular imports.""" + self.writeline(f"{frame.symbols.ref(node.target)} = ", node) + if frame.toplevel: + self.write(f"context.vars[{node.target!r}] = ") + + self._import_common(node, frame) + + if frame.toplevel and not node.target.startswith("_"): + self.writeline(f"context.exported_vars.discard({node.target!r})") + + def visit_FromImport(self, node: nodes.FromImport, frame: Frame) -> None: + """Visit named imports.""" + self.newline(node) + self.write("included_template = ") + self._import_common(node, frame) + var_names = [] + discarded_names = [] + for name in node.names: + if isinstance(name, tuple): + name, alias = name + else: + alias = name + 
self.writeline(
+                f"{frame.symbols.ref(alias)} ="
+                f" getattr(included_template, {name!r}, missing)"
+            )
+            self.writeline(f"if {frame.symbols.ref(alias)} is missing:")
+            self.indent()
+            # The position will contain the template name, and will be formatted
+            # into a string that will be compiled into an f-string. Curly braces
+            # in the name must be replaced with escapes so that they will not be
+            # executed as part of the f-string.
+            position = self.position(node).replace("{", "{{").replace("}", "}}")
+            message = (
+                "the template {included_template.__name__!r}"
+                f" (imported on {position})"
+                f" does not export the requested name {name!r}"
+            )
+            self.writeline(
+                f"{frame.symbols.ref(alias)} = undefined(f{message!r}, name={name!r})"
+            )
+            self.outdent()
+            if frame.toplevel:
+                var_names.append(alias)
+                if not alias.startswith("_"):
+                    discarded_names.append(alias)
+
+        if var_names:
+            if len(var_names) == 1:
+                name = var_names[0]
+                self.writeline(f"context.vars[{name!r}] = {frame.symbols.ref(name)}")
+            else:
+                names_kv = ", ".join(
+                    f"{name!r}: {frame.symbols.ref(name)}" for name in var_names
+                )
+                self.writeline(f"context.vars.update({{{names_kv}}})")
+        if discarded_names:
+            if len(discarded_names) == 1:
+                self.writeline(f"context.exported_vars.discard({discarded_names[0]!r})")
+            else:
+                names_str = ", ".join(map(repr, discarded_names))
+                self.writeline(
+                    f"context.exported_vars.difference_update(({names_str}))"
+                )
+
+    def visit_For(self, node: nodes.For, frame: Frame) -> None:
+        loop_frame = frame.inner()
+        loop_frame.loop_frame = True
+        test_frame = frame.inner()
+        else_frame = frame.inner()
+
+        # try to figure out if we have an extended loop. An extended loop
+        # is necessary if the loop is in recursive mode, if the special loop
+        # variable is accessed in the body, or if the body is a scoped block.
+        extended_loop = (
+            node.recursive
+            or "loop"
+            in find_undeclared(node.iter_child_nodes(only=("body",)), ("loop",))
+            or any(block.scoped for block in node.find_all(nodes.Block))
+        )
+
+        loop_ref = None
+        if extended_loop:
+            loop_ref = loop_frame.symbols.declare_parameter("loop")
+
+        loop_frame.symbols.analyze_node(node, for_branch="body")
+        if node.else_:
+            else_frame.symbols.analyze_node(node, for_branch="else")
+
+        if node.test:
+            loop_filter_func = self.temporary_identifier()
+            test_frame.symbols.analyze_node(node, for_branch="test")
+            self.writeline(f"{self.func(loop_filter_func)}(fiter):", node.test)
+            self.indent()
+            self.enter_frame(test_frame)
+            self.writeline(self.choose_async("async for ", "for "))
+            self.visit(node.target, loop_frame)
+            self.write(" in ")
+            self.write(self.choose_async("auto_aiter(fiter)", "fiter"))
+            self.write(":")
+            self.indent()
+            self.writeline("if ", node.test)
+            self.visit(node.test, test_frame)
+            self.write(":")
+            self.indent()
+            self.writeline("yield ")
+            self.visit(node.target, loop_frame)
+            self.outdent(3)
+            self.leave_frame(test_frame, with_python_scope=True)
+
+        # if we don't have a recursive loop we have to find the shadowed
+        # variables at that point. Because loops can be nested but the loop
+        # variable is a special one we have to enforce aliasing for it.
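An "extended" loop as detected above is what makes the special `loop` variable, including recursive rendering, available in the body. For illustration (a sketch using the public API):

```python
from jinja2 import Environment

env = Environment()
tmpl = env.from_string(
    "{% for node in tree recursive %}"
    "{{ node.name }}{% if node.children %}[{{ loop(node.children) }}]{% endif %}"
    "{% endfor %}"
)
# "recursive" forces extended_loop, so the body is compiled into a
# loop() render function that can be re-invoked on child sequences
tree = [{"name": "a", "children": [{"name": "b", "children": []}]}]
print(tmpl.render(tree=tree))  # -> a[b]
```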
+ if node.recursive: + self.writeline( + f"{self.func('loop')}(reciter, loop_render_func, depth=0):", node + ) + self.indent() + self.buffer(loop_frame) + + # Use the same buffer for the else frame + else_frame.buffer = loop_frame.buffer + + # make sure the loop variable is a special one and raise a template + # assertion error if a loop tries to write to loop + if extended_loop: + self.writeline(f"{loop_ref} = missing") + + for name in node.find_all(nodes.Name): + if name.ctx == "store" and name.name == "loop": + self.fail( + "Can't assign to special loop variable in for-loop target", + name.lineno, + ) + + if node.else_: + iteration_indicator = self.temporary_identifier() + self.writeline(f"{iteration_indicator} = 1") + + self.writeline(self.choose_async("async for ", "for "), node) + self.visit(node.target, loop_frame) + if extended_loop: + self.write(f", {loop_ref} in {self.choose_async('Async')}LoopContext(") + else: + self.write(" in ") + + if node.test: + self.write(f"{loop_filter_func}(") + if node.recursive: + self.write("reciter") + else: + if self.environment.is_async and not extended_loop: + self.write("auto_aiter(") + self.visit(node.iter, frame) + if self.environment.is_async and not extended_loop: + self.write(")") + if node.test: + self.write(")") + + if node.recursive: + self.write(", undefined, loop_render_func, depth):") + else: + self.write(", undefined):" if extended_loop else ":") + + self.indent() + self.enter_frame(loop_frame) + + self.writeline("_loop_vars = {}") + self.blockvisit(node.body, loop_frame) + if node.else_: + self.writeline(f"{iteration_indicator} = 0") + self.outdent() + self.leave_frame( + loop_frame, with_python_scope=node.recursive and not node.else_ + ) + + if node.else_: + self.writeline(f"if {iteration_indicator}:") + self.indent() + self.enter_frame(else_frame) + self.blockvisit(node.else_, else_frame) + self.leave_frame(else_frame) + self.outdent() + + # if the node was recursive we have to return the buffer contents + # and start the iteration code + if node.recursive: + self.return_buffer_contents(loop_frame) + self.outdent() + self.start_write(frame, node) + self.write(f"{self.choose_async('await ')}loop(") + if self.environment.is_async: + self.write("auto_aiter(") + self.visit(node.iter, frame) + if self.environment.is_async: + self.write(")") + self.write(", loop)") + self.end_write(frame) + + # at the end of the iteration, clear any assignments made in the + # loop from the top level + if self._assign_stack: + self._assign_stack[-1].difference_update(loop_frame.symbols.stores) + + def visit_If(self, node: nodes.If, frame: Frame) -> None: + if_frame = frame.soft() + self.writeline("if ", node) + self.visit(node.test, if_frame) + self.write(":") + self.indent() + self.blockvisit(node.body, if_frame) + self.outdent() + for elif_ in node.elif_: + self.writeline("elif ", elif_) + self.visit(elif_.test, if_frame) + self.write(":") + self.indent() + self.blockvisit(elif_.body, if_frame) + self.outdent() + if node.else_: + self.writeline("else:") + self.indent() + self.blockvisit(node.else_, if_frame) + self.outdent() + + def visit_Macro(self, node: nodes.Macro, frame: Frame) -> None: + macro_frame, macro_ref = self.macro_body(node, frame) + self.newline() + if frame.toplevel: + if not node.name.startswith("_"): + self.write(f"context.exported_vars.add({node.name!r})") + self.writeline(f"context.vars[{node.name!r}] = ") + self.write(f"{frame.symbols.ref(node.name)} = ") + self.macro_def(macro_ref, macro_frame) + + def visit_CallBlock(self, 
node: nodes.CallBlock, frame: Frame) -> None: + call_frame, macro_ref = self.macro_body(node, frame) + self.writeline("caller = ") + self.macro_def(macro_ref, call_frame) + self.start_write(frame, node) + self.visit_Call(node.call, frame, forward_caller=True) + self.end_write(frame) + + def visit_FilterBlock(self, node: nodes.FilterBlock, frame: Frame) -> None: + filter_frame = frame.inner() + filter_frame.symbols.analyze_node(node) + self.enter_frame(filter_frame) + self.buffer(filter_frame) + self.blockvisit(node.body, filter_frame) + self.start_write(frame, node) + self.visit_Filter(node.filter, filter_frame) + self.end_write(frame) + self.leave_frame(filter_frame) + + def visit_With(self, node: nodes.With, frame: Frame) -> None: + with_frame = frame.inner() + with_frame.symbols.analyze_node(node) + self.enter_frame(with_frame) + for target, expr in zip(node.targets, node.values): + self.newline() + self.visit(target, with_frame) + self.write(" = ") + self.visit(expr, frame) + self.blockvisit(node.body, with_frame) + self.leave_frame(with_frame) + + def visit_ExprStmt(self, node: nodes.ExprStmt, frame: Frame) -> None: + self.newline(node) + self.visit(node.node, frame) + + class _FinalizeInfo(t.NamedTuple): + const: t.Optional[t.Callable[..., str]] + src: t.Optional[str] + + @staticmethod + def _default_finalize(value: t.Any) -> t.Any: + """The default finalize function if the environment isn't + configured with one. Or, if the environment has one, this is + called on that function's output for constants. + """ + return str(value) + + _finalize: t.Optional[_FinalizeInfo] = None + + def _make_finalize(self) -> _FinalizeInfo: + """Build the finalize function to be used on constants and at + runtime. Cached so it's only created once for all output nodes. + + Returns a ``namedtuple`` with the following attributes: + + ``const`` + A function to finalize constant data at compile time. + + ``src`` + Source code to output around nodes to be evaluated at + runtime. + """ + if self._finalize is not None: + return self._finalize + + finalize: t.Optional[t.Callable[..., t.Any]] + finalize = default = self._default_finalize + src = None + + if self.environment.finalize: + src = "environment.finalize(" + env_finalize = self.environment.finalize + pass_arg = { + _PassArg.context: "context", + _PassArg.eval_context: "context.eval_ctx", + _PassArg.environment: "environment", + }.get( + _PassArg.from_obj(env_finalize) # type: ignore + ) + finalize = None + + if pass_arg is None: + + def finalize(value: t.Any) -> t.Any: # noqa: F811 + return default(env_finalize(value)) + + else: + src = f"{src}{pass_arg}, " + + if pass_arg == "environment": + + def finalize(value: t.Any) -> t.Any: # noqa: F811 + return default(env_finalize(self.environment, value)) + + self._finalize = self._FinalizeInfo(finalize, src) + return self._finalize + + def _output_const_repr(self, group: t.Iterable[t.Any]) -> str: + """Given a group of constant values converted from ``Output`` + child nodes, produce a string to write to the template module + source. + """ + return repr(concat(group)) + + def _output_child_to_const( + self, node: nodes.Expr, frame: Frame, finalize: _FinalizeInfo + ) -> str: + """Try to optimize a child of an ``Output`` node by trying to + convert it to constant, finalized data at compile time. + + If :exc:`Impossible` is raised, the node is not constant and + will be evaluated at runtime. Any other exception will also be + evaluated at runtime for easier debugging. 
+ """ + const = node.as_const(frame.eval_ctx) + + if frame.eval_ctx.autoescape: + const = escape(const) + + # Template data doesn't go through finalize. + if isinstance(node, nodes.TemplateData): + return str(const) + + return finalize.const(const) # type: ignore + + def _output_child_pre( + self, node: nodes.Expr, frame: Frame, finalize: _FinalizeInfo + ) -> None: + """Output extra source code before visiting a child of an + ``Output`` node. + """ + if frame.eval_ctx.volatile: + self.write("(escape if context.eval_ctx.autoescape else str)(") + elif frame.eval_ctx.autoescape: + self.write("escape(") + else: + self.write("str(") + + if finalize.src is not None: + self.write(finalize.src) + + def _output_child_post( + self, node: nodes.Expr, frame: Frame, finalize: _FinalizeInfo + ) -> None: + """Output extra source code after visiting a child of an + ``Output`` node. + """ + self.write(")") + + if finalize.src is not None: + self.write(")") + + def visit_Output(self, node: nodes.Output, frame: Frame) -> None: + # If an extends is active, don't render outside a block. + if frame.require_output_check: + # A top-level extends is known to exist at compile time. + if self.has_known_extends: + return + + self.writeline("if parent_template is None:") + self.indent() + + finalize = self._make_finalize() + body: t.List[t.Union[t.List[t.Any], nodes.Expr]] = [] + + # Evaluate constants at compile time if possible. Each item in + # body will be either a list of static data or a node to be + # evaluated at runtime. + for child in node.nodes: + try: + if not ( + # If the finalize function requires runtime context, + # constants can't be evaluated at compile time. + finalize.const + # Unless it's basic template data that won't be + # finalized anyway. + or isinstance(child, nodes.TemplateData) + ): + raise nodes.Impossible() + + const = self._output_child_to_const(child, frame, finalize) + except (nodes.Impossible, Exception): + # The node was not constant and needs to be evaluated at + # runtime. Or another error was raised, which is easier + # to debug at runtime. + body.append(child) + continue + + if body and isinstance(body[-1], list): + body[-1].append(const) + else: + body.append([const]) + + if frame.buffer is not None: + if len(body) == 1: + self.writeline(f"{frame.buffer}.append(") + else: + self.writeline(f"{frame.buffer}.extend((") + + self.indent() + + for item in body: + if isinstance(item, list): + # A group of constant data to join and output. + val = self._output_const_repr(item) + + if frame.buffer is None: + self.writeline("yield " + val) + else: + self.writeline(val + ",") + else: + if frame.buffer is None: + self.writeline("yield ", item) + else: + self.newline(item) + + # A node to be evaluated at runtime. + self._output_child_pre(item, frame, finalize) + self.visit(item, frame) + self._output_child_post(item, frame, finalize) + + if frame.buffer is not None: + self.write(",") + + if frame.buffer is not None: + self.outdent() + self.writeline(")" if len(body) == 1 else "))") + + if frame.require_output_check: + self.outdent() + + def visit_Assign(self, node: nodes.Assign, frame: Frame) -> None: + self.push_assign_tracking() + + # ``a.b`` is allowed for assignment, and is parsed as an NSRef. However, + # it is only valid if it references a Namespace object. Emit a check for + # that for each ref here, before assignment code is emitted. This can't + # be done in visit_NSRef as the ref could be in the middle of a tuple. 
+ seen_refs: t.Set[str] = set() + + for nsref in node.find_all(nodes.NSRef): + if nsref.name in seen_refs: + # Only emit the check for each reference once, in case the same + # ref is used multiple times in a tuple, `ns.a, ns.b = c, d`. + continue + + seen_refs.add(nsref.name) + ref = frame.symbols.ref(nsref.name) + self.writeline(f"if not isinstance({ref}, Namespace):") + self.indent() + self.writeline( + "raise TemplateRuntimeError" + '("cannot assign attribute on non-namespace object")' + ) + self.outdent() + + self.newline(node) + self.visit(node.target, frame) + self.write(" = ") + self.visit(node.node, frame) + self.pop_assign_tracking(frame) + + def visit_AssignBlock(self, node: nodes.AssignBlock, frame: Frame) -> None: + self.push_assign_tracking() + block_frame = frame.inner() + # This is a special case. Since a set block always captures we + # will disable output checks. This way one can use set blocks + # toplevel even in extended templates. + block_frame.require_output_check = False + block_frame.symbols.analyze_node(node) + self.enter_frame(block_frame) + self.buffer(block_frame) + self.blockvisit(node.body, block_frame) + self.newline(node) + self.visit(node.target, frame) + self.write(" = (Markup if context.eval_ctx.autoescape else identity)(") + if node.filter is not None: + self.visit_Filter(node.filter, block_frame) + else: + self.write(f"concat({block_frame.buffer})") + self.write(")") + self.pop_assign_tracking(frame) + self.leave_frame(block_frame) + + # -- Expression Visitors + + def visit_Name(self, node: nodes.Name, frame: Frame) -> None: + if node.ctx == "store" and ( + frame.toplevel or frame.loop_frame or frame.block_frame + ): + if self._assign_stack: + self._assign_stack[-1].add(node.name) + ref = frame.symbols.ref(node.name) + + # If we are looking up a variable we might have to deal with the + # case where it's undefined. We can skip that case if the load + # instruction indicates a parameter which are always defined. + if node.ctx == "load": + load = frame.symbols.find_load(ref) + if not ( + load is not None + and load[0] == VAR_LOAD_PARAMETER + and not self.parameter_is_undeclared(ref) + ): + self.write( + f"(undefined(name={node.name!r}) if {ref} is missing else {ref})" + ) + return + + self.write(ref) + + def visit_NSRef(self, node: nodes.NSRef, frame: Frame) -> None: + # NSRef is a dotted assignment target a.b=c, but uses a[b]=c internally. + # visit_Assign emits code to validate that each ref is to a Namespace + # object only. That can't be emitted here as the ref could be in the + # middle of a tuple assignment. 
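The `undefined(name=...)` guard written by `visit_Name` above is the hook that the configurable undefined types plug into, illustrated here with the public API:

```python
from jinja2 import Environment, StrictUndefined, UndefinedError

# the default Undefined silently renders as an empty string
print(Environment().from_string("[{{ missing }}]").render())  # -> []

# StrictUndefined raises as soon as the missing value is used
try:
    Environment(undefined=StrictUndefined).from_string("[{{ missing }}]").render()
except UndefinedError as exc:
    print(exc)  # 'missing' is undefined
```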
+ ref = frame.symbols.ref(node.name) + self.writeline(f"{ref}[{node.attr!r}]") + + def visit_Const(self, node: nodes.Const, frame: Frame) -> None: + val = node.as_const(frame.eval_ctx) + if isinstance(val, float): + self.write(str(val)) + else: + self.write(repr(val)) + + def visit_TemplateData(self, node: nodes.TemplateData, frame: Frame) -> None: + try: + self.write(repr(node.as_const(frame.eval_ctx))) + except nodes.Impossible: + self.write( + f"(Markup if context.eval_ctx.autoescape else identity)({node.data!r})" + ) + + def visit_Tuple(self, node: nodes.Tuple, frame: Frame) -> None: + self.write("(") + idx = -1 + for idx, item in enumerate(node.items): + if idx: + self.write(", ") + self.visit(item, frame) + self.write(",)" if idx == 0 else ")") + + def visit_List(self, node: nodes.List, frame: Frame) -> None: + self.write("[") + for idx, item in enumerate(node.items): + if idx: + self.write(", ") + self.visit(item, frame) + self.write("]") + + def visit_Dict(self, node: nodes.Dict, frame: Frame) -> None: + self.write("{") + for idx, item in enumerate(node.items): + if idx: + self.write(", ") + self.visit(item.key, frame) + self.write(": ") + self.visit(item.value, frame) + self.write("}") + + visit_Add = _make_binop("+") + visit_Sub = _make_binop("-") + visit_Mul = _make_binop("*") + visit_Div = _make_binop("/") + visit_FloorDiv = _make_binop("//") + visit_Pow = _make_binop("**") + visit_Mod = _make_binop("%") + visit_And = _make_binop("and") + visit_Or = _make_binop("or") + visit_Pos = _make_unop("+") + visit_Neg = _make_unop("-") + visit_Not = _make_unop("not ") + + @optimizeconst + def visit_Concat(self, node: nodes.Concat, frame: Frame) -> None: + if frame.eval_ctx.volatile: + func_name = "(markup_join if context.eval_ctx.volatile else str_join)" + elif frame.eval_ctx.autoescape: + func_name = "markup_join" + else: + func_name = "str_join" + self.write(f"{func_name}((") + for arg in node.nodes: + self.visit(arg, frame) + self.write(", ") + self.write("))") + + @optimizeconst + def visit_Compare(self, node: nodes.Compare, frame: Frame) -> None: + self.write("(") + self.visit(node.expr, frame) + for op in node.ops: + self.visit(op, frame) + self.write(")") + + def visit_Operand(self, node: nodes.Operand, frame: Frame) -> None: + self.write(f" {operators[node.op]} ") + self.visit(node.expr, frame) + + @optimizeconst + def visit_Getattr(self, node: nodes.Getattr, frame: Frame) -> None: + if self.environment.is_async: + self.write("(await auto_await(") + + self.write("environment.getattr(") + self.visit(node.node, frame) + self.write(f", {node.attr!r})") + + if self.environment.is_async: + self.write("))") + + @optimizeconst + def visit_Getitem(self, node: nodes.Getitem, frame: Frame) -> None: + # slices bypass the environment getitem method. 
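The `environment.getattr`/`environment.getitem` calls emitted by the surrounding visitors are what let attribute and subscript syntax fall back to one another, with slices bypassing the environment hook as the comment below notes. For illustration:

```python
from jinja2 import Environment

env = Environment()
# environment.getattr tries the Python attribute first and falls back
# to subscription, so both spellings resolve on a plain dict; the
# slice goes straight to Python indexing
tmpl = env.from_string("{{ d.key }} / {{ d['key'] }} / {{ items[1:] }}")
print(tmpl.render(d={"key": "value"}, items=[1, 2, 3]))  # -> value / value / [2, 3]
```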
+ if isinstance(node.arg, nodes.Slice): + self.visit(node.node, frame) + self.write("[") + self.visit(node.arg, frame) + self.write("]") + else: + if self.environment.is_async: + self.write("(await auto_await(") + + self.write("environment.getitem(") + self.visit(node.node, frame) + self.write(", ") + self.visit(node.arg, frame) + self.write(")") + + if self.environment.is_async: + self.write("))") + + def visit_Slice(self, node: nodes.Slice, frame: Frame) -> None: + if node.start is not None: + self.visit(node.start, frame) + self.write(":") + if node.stop is not None: + self.visit(node.stop, frame) + if node.step is not None: + self.write(":") + self.visit(node.step, frame) + + @contextmanager + def _filter_test_common( + self, node: t.Union[nodes.Filter, nodes.Test], frame: Frame, is_filter: bool + ) -> t.Iterator[None]: + if self.environment.is_async: + self.write("(await auto_await(") + + if is_filter: + self.write(f"{self.filters[node.name]}(") + func = self.environment.filters.get(node.name) + else: + self.write(f"{self.tests[node.name]}(") + func = self.environment.tests.get(node.name) + + # When inside an If or CondExpr frame, allow the filter to be + # undefined at compile time and only raise an error if it's + # actually called at runtime. See pull_dependencies. + if func is None and not frame.soft_frame: + type_name = "filter" if is_filter else "test" + self.fail(f"No {type_name} named {node.name!r}.", node.lineno) + + pass_arg = { + _PassArg.context: "context", + _PassArg.eval_context: "context.eval_ctx", + _PassArg.environment: "environment", + }.get( + _PassArg.from_obj(func) # type: ignore + ) + + if pass_arg is not None: + self.write(f"{pass_arg}, ") + + # Back to the visitor function to handle visiting the target of + # the filter or test. 
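The `pass_arg` written here is the compiled counterpart of the `pass_context`/`pass_environment`/`pass_eval_context` decorators on filters and tests; a stand-alone sketch:

```python
from jinja2 import Environment, pass_environment

@pass_environment
def shout(env, value):
    # the environment is prepended to the call, matching the
    # pass_arg written before the regular arguments above
    return env.filters["upper"](value) + "!"

env = Environment()
env.filters["shout"] = shout
print(env.from_string("{{ 'hello' | shout }}").render())  # -> HELLO!
```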
+ yield + + self.signature(node, frame) + self.write(")") + + if self.environment.is_async: + self.write("))") + + @optimizeconst + def visit_Filter(self, node: nodes.Filter, frame: Frame) -> None: + with self._filter_test_common(node, frame, True): + # if the filter node is None we are inside a filter block + # and want to write to the current buffer + if node.node is not None: + self.visit(node.node, frame) + elif frame.eval_ctx.volatile: + self.write( + f"(Markup(concat({frame.buffer}))" + f" if context.eval_ctx.autoescape else concat({frame.buffer}))" + ) + elif frame.eval_ctx.autoescape: + self.write(f"Markup(concat({frame.buffer}))") + else: + self.write(f"concat({frame.buffer})") + + @optimizeconst + def visit_Test(self, node: nodes.Test, frame: Frame) -> None: + with self._filter_test_common(node, frame, False): + self.visit(node.node, frame) + + @optimizeconst + def visit_CondExpr(self, node: nodes.CondExpr, frame: Frame) -> None: + frame = frame.soft() + + def write_expr2() -> None: + if node.expr2 is not None: + self.visit(node.expr2, frame) + return + + self.write( + f'cond_expr_undefined("the inline if-expression on' + f" {self.position(node)} evaluated to false and no else" + f' section was defined.")' + ) + + self.write("(") + self.visit(node.expr1, frame) + self.write(" if ") + self.visit(node.test, frame) + self.write(" else ") + write_expr2() + self.write(")") + + @optimizeconst + def visit_Call( + self, node: nodes.Call, frame: Frame, forward_caller: bool = False + ) -> None: + if self.environment.is_async: + self.write("(await auto_await(") + if self.environment.sandboxed: + self.write("environment.call(context, ") + else: + self.write("context.call(") + self.visit(node.node, frame) + extra_kwargs = {"caller": "caller"} if forward_caller else None + loop_kwargs = {"_loop_vars": "_loop_vars"} if frame.loop_frame else {} + block_kwargs = {"_block_vars": "_block_vars"} if frame.block_frame else {} + if extra_kwargs: + extra_kwargs.update(loop_kwargs, **block_kwargs) + elif loop_kwargs or block_kwargs: + extra_kwargs = dict(loop_kwargs, **block_kwargs) + self.signature(node, frame, extra_kwargs) + self.write(")") + if self.environment.is_async: + self.write("))") + + def visit_Keyword(self, node: nodes.Keyword, frame: Frame) -> None: + self.write(node.key + "=") + self.visit(node.value, frame) + + # -- Unused nodes for extensions + + def visit_MarkSafe(self, node: nodes.MarkSafe, frame: Frame) -> None: + self.write("Markup(") + self.visit(node.expr, frame) + self.write(")") + + def visit_MarkSafeIfAutoescape( + self, node: nodes.MarkSafeIfAutoescape, frame: Frame + ) -> None: + self.write("(Markup if context.eval_ctx.autoescape else identity)(") + self.visit(node.expr, frame) + self.write(")") + + def visit_EnvironmentAttribute( + self, node: nodes.EnvironmentAttribute, frame: Frame + ) -> None: + self.write("environment." 
+ node.name) + + def visit_ExtensionAttribute( + self, node: nodes.ExtensionAttribute, frame: Frame + ) -> None: + self.write(f"environment.extensions[{node.identifier!r}].{node.name}") + + def visit_ImportedName(self, node: nodes.ImportedName, frame: Frame) -> None: + self.write(self.import_aliases[node.importname]) + + def visit_InternalName(self, node: nodes.InternalName, frame: Frame) -> None: + self.write(node.name) + + def visit_ContextReference( + self, node: nodes.ContextReference, frame: Frame + ) -> None: + self.write("context") + + def visit_DerivedContextReference( + self, node: nodes.DerivedContextReference, frame: Frame + ) -> None: + self.write(self.derive_context(frame)) + + def visit_Continue(self, node: nodes.Continue, frame: Frame) -> None: + self.writeline("continue", node) + + def visit_Break(self, node: nodes.Break, frame: Frame) -> None: + self.writeline("break", node) + + def visit_Scope(self, node: nodes.Scope, frame: Frame) -> None: + scope_frame = frame.inner() + scope_frame.symbols.analyze_node(node) + self.enter_frame(scope_frame) + self.blockvisit(node.body, scope_frame) + self.leave_frame(scope_frame) + + def visit_OverlayScope(self, node: nodes.OverlayScope, frame: Frame) -> None: + ctx = self.temporary_identifier() + self.writeline(f"{ctx} = {self.derive_context(frame)}") + self.writeline(f"{ctx}.vars = ") + self.visit(node.context, frame) + self.push_context_reference(ctx) + + scope_frame = frame.inner(isolated=True) + scope_frame.symbols.analyze_node(node) + self.enter_frame(scope_frame) + self.blockvisit(node.body, scope_frame) + self.leave_frame(scope_frame) + self.pop_context_reference() + + def visit_EvalContextModifier( + self, node: nodes.EvalContextModifier, frame: Frame + ) -> None: + for keyword in node.options: + self.writeline(f"context.eval_ctx.{keyword.key} = ") + self.visit(keyword.value, frame) + try: + val = keyword.value.as_const(frame.eval_ctx) + except nodes.Impossible: + frame.eval_ctx.volatile = True + else: + setattr(frame.eval_ctx, keyword.key, val) + + def visit_ScopedEvalContextModifier( + self, node: nodes.ScopedEvalContextModifier, frame: Frame + ) -> None: + old_ctx_name = self.temporary_identifier() + saved_ctx = frame.eval_ctx.save() + self.writeline(f"{old_ctx_name} = context.eval_ctx.save()") + self.visit_EvalContextModifier(node, frame) + for child in node.body: + self.visit(child, frame) + frame.eval_ctx.revert(saved_ctx) + self.writeline(f"context.eval_ctx.revert({old_ctx_name})") diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/constants.py b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/constants.py new file mode 100644 index 000000000..41a1c23b0 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/constants.py @@ -0,0 +1,20 @@ +#: list of lorem ipsum words used by the lipsum() helper function +LOREM_IPSUM_WORDS = """\ +a ac accumsan ad adipiscing aenean aliquam aliquet amet ante aptent arcu at +auctor augue bibendum blandit class commodo condimentum congue consectetuer +consequat conubia convallis cras cubilia cum curabitur curae cursus dapibus +diam dictum dictumst dignissim dis dolor donec dui duis egestas eget eleifend +elementum elit enim erat eros est et etiam eu euismod facilisi facilisis fames +faucibus felis fermentum feugiat fringilla fusce gravida habitant habitasse hac +hendrerit hymenaeos iaculis id imperdiet in inceptos integer interdum ipsum +justo lacinia lacus laoreet lectus leo libero ligula litora lobortis lorem +luctus maecenas magna magnis 
malesuada massa mattis mauris metus mi molestie +mollis montes morbi mus nam nascetur natoque nec neque netus nibh nisi nisl non +nonummy nostra nulla nullam nunc odio orci ornare parturient pede pellentesque +penatibus per pharetra phasellus placerat platea porta porttitor posuere +potenti praesent pretium primis proin pulvinar purus quam quis quisque rhoncus +ridiculus risus rutrum sagittis sapien scelerisque sed sem semper senectus sit +sociis sociosqu sodales sollicitudin suscipit suspendisse taciti tellus tempor +tempus tincidunt torquent tortor tristique turpis ullamcorper ultrices +ultricies urna ut varius vehicula vel velit venenatis vestibulum vitae vivamus +viverra volutpat vulputate""" diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/debug.py b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/debug.py new file mode 100644 index 000000000..eeeeee78b --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/debug.py @@ -0,0 +1,191 @@ +import sys +import typing as t +from types import CodeType +from types import TracebackType + +from .exceptions import TemplateSyntaxError +from .utils import internal_code +from .utils import missing + +if t.TYPE_CHECKING: + from .runtime import Context + + +def rewrite_traceback_stack(source: t.Optional[str] = None) -> BaseException: + """Rewrite the current exception to replace any tracebacks from + within compiled template code with tracebacks that look like they + came from the template source. + + This must be called within an ``except`` block. + + :param source: For ``TemplateSyntaxError``, the original source if + known. + :return: The original exception with the rewritten traceback. + """ + _, exc_value, tb = sys.exc_info() + exc_value = t.cast(BaseException, exc_value) + tb = t.cast(TracebackType, tb) + + if isinstance(exc_value, TemplateSyntaxError) and not exc_value.translated: + exc_value.translated = True + exc_value.source = source + # Remove the old traceback, otherwise the frames from the + # compiler still show up. + exc_value.with_traceback(None) + # Outside of runtime, so the frame isn't executing template + # code, but it still needs to point at the template. + tb = fake_traceback( + exc_value, None, exc_value.filename or "", exc_value.lineno + ) + else: + # Skip the frame for the render function. + tb = tb.tb_next + + stack = [] + + # Build the stack of traceback object, replacing any in template + # code with the source file and line information. + while tb is not None: + # Skip frames decorated with @internalcode. These are internal + # calls that aren't useful in template debugging output. + if tb.tb_frame.f_code in internal_code: + tb = tb.tb_next + continue + + template = tb.tb_frame.f_globals.get("__jinja_template__") + + if template is not None: + lineno = template.get_corresponding_lineno(tb.tb_lineno) + fake_tb = fake_traceback(exc_value, tb, template.filename, lineno) + stack.append(fake_tb) + else: + stack.append(tb) + + tb = tb.tb_next + + tb_next = None + + # Assign tb_next in reverse to avoid circular references. + for tb in reversed(stack): + tb.tb_next = tb_next + tb_next = tb + + return exc_value.with_traceback(tb_next) + + +def fake_traceback( # type: ignore + exc_value: BaseException, tb: t.Optional[TracebackType], filename: str, lineno: int +) -> TracebackType: + """Produce a new traceback object that looks like it came from the + template source instead of the compiled code. 
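`rewrite_traceback_stack` is the reason Jinja errors point at template lines rather than at the generated Python module. A small illustration, assuming nothing beyond the public API:

```python
import sys
import traceback

from jinja2 import Environment

tmpl = Environment().from_string("line one\n{{ 1 // 0 }}")

try:
    tmpl.render()
except ZeroDivisionError:
    # The innermost rewritten frame points at "<template>", line 2,
    # not at Jinja's compiled module code.
    frame = traceback.extract_tb(sys.exc_info()[2])[-1]
    print(frame.filename, frame.lineno)  # <template> 2
```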
The filename, line + number, and location name will point to the template, and the local + variables will be the current template context. + + :param exc_value: The original exception to be re-raised to create + the new traceback. + :param tb: The original traceback to get the local variables and + code info from. + :param filename: The template filename. + :param lineno: The line number in the template source. + """ + if tb is not None: + # Replace the real locals with the context that would be + # available at that point in the template. + locals = get_template_locals(tb.tb_frame.f_locals) + locals.pop("__jinja_exception__", None) + else: + locals = {} + + globals = { + "__name__": filename, + "__file__": filename, + "__jinja_exception__": exc_value, + } + # Raise an exception at the correct line number. + code: CodeType = compile( + "\n" * (lineno - 1) + "raise __jinja_exception__", filename, "exec" + ) + + # Build a new code object that points to the template file and + # replaces the location with a block name. + location = "template" + + if tb is not None: + function = tb.tb_frame.f_code.co_name + + if function == "root": + location = "top-level template code" + elif function.startswith("block_"): + location = f"block {function[6:]!r}" + + if sys.version_info >= (3, 8): + code = code.replace(co_name=location) + else: + code = CodeType( + code.co_argcount, + code.co_kwonlyargcount, + code.co_nlocals, + code.co_stacksize, + code.co_flags, + code.co_code, + code.co_consts, + code.co_names, + code.co_varnames, + code.co_filename, + location, + code.co_firstlineno, + code.co_lnotab, + code.co_freevars, + code.co_cellvars, + ) + + # Execute the new code, which is guaranteed to raise, and return + # the new traceback without this frame. + try: + exec(code, globals, locals) + except BaseException: + return sys.exc_info()[2].tb_next # type: ignore + + +def get_template_locals(real_locals: t.Mapping[str, t.Any]) -> t.Dict[str, t.Any]: + """Based on the runtime locals, get the context that would be + available at that point in the template. + """ + # Start with the current template context. + ctx: t.Optional[Context] = real_locals.get("context") + + if ctx is not None: + data: t.Dict[str, t.Any] = ctx.get_all().copy() + else: + data = {} + + # Might be in a derived context that only sets local variables + # rather than pushing a context. Local variables follow the scheme + # l_depth_name. Find the highest-depth local that has a value for + # each name. + local_overrides: t.Dict[str, t.Tuple[int, t.Any]] = {} + + for name, value in real_locals.items(): + if not name.startswith("l_") or value is missing: + # Not a template variable, or no longer relevant. + continue + + try: + _, depth_str, name = name.split("_", 2) + depth = int(depth_str) + except ValueError: + continue + + cur_depth = local_overrides.get(name, (-1,))[0] + + if cur_depth < depth: + local_overrides[name] = (depth, value) + + # Modify the context with any derived context. 
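`fake_traceback` works by compiling a synthetic `raise` padded with blank lines up to the target line number, then renaming the code object (via `CodeType.replace` on Python 3.8+, as the branch below shows). The same trick in isolation, with hypothetical filename and block name:

```python
import sys
import traceback

exc = RuntimeError("boom")
# Pad to line 42, pretend it lives in example.html, rename the frame.
code = compile("\n" * 41 + "raise __exc__", "example.html", "exec")
code = code.replace(co_name="block 'content'")  # Python 3.8+

try:
    exec(code, {"__exc__": exc})
except RuntimeError:
    print(traceback.format_tb(sys.exc_info()[2])[-1])
    #   File "example.html", line 42, in block 'content'
```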
+ for name, (_, value) in local_overrides.items(): + if value is missing: + data.pop(name, None) + else: + data[name] = value + + return data diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/defaults.py b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/defaults.py new file mode 100644 index 000000000..638cad3d2 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/defaults.py @@ -0,0 +1,48 @@ +import typing as t + +from .filters import FILTERS as DEFAULT_FILTERS # noqa: F401 +from .tests import TESTS as DEFAULT_TESTS # noqa: F401 +from .utils import Cycler +from .utils import generate_lorem_ipsum +from .utils import Joiner +from .utils import Namespace + +if t.TYPE_CHECKING: + import typing_extensions as te + +# defaults for the parser / lexer +BLOCK_START_STRING = "{%" +BLOCK_END_STRING = "%}" +VARIABLE_START_STRING = "{{" +VARIABLE_END_STRING = "}}" +COMMENT_START_STRING = "{#" +COMMENT_END_STRING = "#}" +LINE_STATEMENT_PREFIX: t.Optional[str] = None +LINE_COMMENT_PREFIX: t.Optional[str] = None +TRIM_BLOCKS = False +LSTRIP_BLOCKS = False +NEWLINE_SEQUENCE: "te.Literal['\\n', '\\r\\n', '\\r']" = "\n" +KEEP_TRAILING_NEWLINE = False + +# default filters, tests and namespace + +DEFAULT_NAMESPACE = { + "range": range, + "dict": dict, + "lipsum": generate_lorem_ipsum, + "cycler": Cycler, + "joiner": Joiner, + "namespace": Namespace, +} + +# default policies +DEFAULT_POLICIES: t.Dict[str, t.Any] = { + "compiler.ascii_str": True, + "urlize.rel": "noopener", + "urlize.target": None, + "urlize.extra_schemes": None, + "truncate.leeway": 5, + "json.dumps_function": None, + "json.dumps_kwargs": {"sort_keys": True}, + "ext.i18n.trimmed": False, +} diff --git a/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/environment.py b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/environment.py new file mode 100644 index 000000000..0fc6e5be8 --- /dev/null +++ b/staybuddy/server/venv/lib/python3.10/site-packages/jinja2/environment.py @@ -0,0 +1,1672 @@ +"""Classes for managing templates and their runtime and compile time +options. +""" + +import os +import typing +import typing as t +import weakref +from collections import ChainMap +from functools import lru_cache +from functools import partial +from functools import reduce +from types import CodeType + +from markupsafe import Markup + +from . 
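Every one of these parser defaults can be overridden per `Environment`, which matters when templates must coexist with a frontend framework that also claims `{{ }}`. For example:

```python
from jinja2 import Environment

env = Environment(
    block_start_string="<%",
    block_end_string="%>",
    variable_start_string="<<",
    variable_end_string=">>",
)
print(env.from_string("<% for i in range(3) %><< i >><% endfor %>").render())
# -> 012
```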
import nodes +from .compiler import CodeGenerator +from .compiler import generate +from .defaults import BLOCK_END_STRING +from .defaults import BLOCK_START_STRING +from .defaults import COMMENT_END_STRING +from .defaults import COMMENT_START_STRING +from .defaults import DEFAULT_FILTERS # type: ignore[attr-defined] +from .defaults import DEFAULT_NAMESPACE +from .defaults import DEFAULT_POLICIES +from .defaults import DEFAULT_TESTS # type: ignore[attr-defined] +from .defaults import KEEP_TRAILING_NEWLINE +from .defaults import LINE_COMMENT_PREFIX +from .defaults import LINE_STATEMENT_PREFIX +from .defaults import LSTRIP_BLOCKS +from .defaults import NEWLINE_SEQUENCE +from .defaults import TRIM_BLOCKS +from .defaults import VARIABLE_END_STRING +from .defaults import VARIABLE_START_STRING +from .exceptions import TemplateNotFound +from .exceptions import TemplateRuntimeError +from .exceptions import TemplatesNotFound +from .exceptions import TemplateSyntaxError +from .exceptions import UndefinedError +from .lexer import get_lexer +from .lexer import Lexer +from .lexer import TokenStream +from .nodes import EvalContext +from .parser import Parser +from .runtime import Context +from .runtime import new_context +from .runtime import Undefined +from .utils import _PassArg +from .utils import concat +from .utils import consume +from .utils import import_string +from .utils import internalcode +from .utils import LRUCache +from .utils import missing + +if t.TYPE_CHECKING: + import typing_extensions as te + + from .bccache import BytecodeCache + from .ext import Extension + from .loaders import BaseLoader + +_env_bound = t.TypeVar("_env_bound", bound="Environment") + + +# for direct template usage we have up to ten living environments +@lru_cache(maxsize=10) +def get_spontaneous_environment(cls: t.Type[_env_bound], *args: t.Any) -> _env_bound: + """Return a new spontaneous environment. A spontaneous environment + is used for templates created directly rather than through an + existing environment. + + :param cls: Environment class to create. + :param args: Positional arguments passed to environment. + """ + env = cls(*args) + env.shared = True + return env + + +def create_cache( + size: int, +) -> t.Optional[t.MutableMapping[t.Tuple["weakref.ref[t.Any]", str], "Template"]]: + """Return the cache class for the given size.""" + if size == 0: + return None + + if size < 0: + return {} + + return LRUCache(size) # type: ignore + + +def copy_cache( + cache: t.Optional[t.MutableMapping[t.Any, t.Any]], +) -> t.Optional[t.MutableMapping[t.Tuple["weakref.ref[t.Any]", str], "Template"]]: + """Create an empty copy of the given cache.""" + if cache is None: + return None + + if type(cache) is dict: # noqa E721 + return {} + + return LRUCache(cache.capacity) # type: ignore + + +def load_extensions( + environment: "Environment", + extensions: t.Sequence[t.Union[str, t.Type["Extension"]]], +) -> t.Dict[str, "Extension"]: + """Load the extensions from the list and bind it to the environment. + Returns a dict of instantiated extensions. + """ + result = {} + + for extension in extensions: + if isinstance(extension, str): + extension = t.cast(t.Type["Extension"], import_string(extension)) + + result[extension.identifier] = extension(environment) + + return result + + +def _environment_config_check(environment: _env_bound) -> _env_bound: + """Perform a sanity check on the environment.""" + assert issubclass( + environment.undefined, Undefined + ), "'undefined' must be a subclass of 'jinja2.Undefined'." 
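`create_cache` above encodes the three documented `cache_size` regimes. An illustrative sketch with a stub loader:

```python
from jinja2 import DictLoader, Environment

loader = DictLoader({"a.html": "A"})

Environment(loader=loader, cache_size=0)    # no cache: recompile every lookup
Environment(loader=loader, cache_size=-1)   # plain dict: never evicts
Environment(loader=loader, cache_size=400)  # default: LRU with 400 slots
```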
+ assert ( + environment.block_start_string + != environment.variable_start_string + != environment.comment_start_string + ), "block, variable and comment start strings must be different." + assert environment.newline_sequence in { + "\r", + "\r\n", + "\n", + }, "'newline_sequence' must be one of '\\n', '\\r\\n', or '\\r'." + return environment + + +class Environment: + r"""The core component of Jinja is the `Environment`. It contains + important shared variables like configuration, filters, tests, + globals and others. Instances of this class may be modified if + they are not shared and if no template was loaded so far. + Modifications on environments after the first template was loaded + will lead to surprising effects and undefined behavior. + + Here are the possible initialization parameters: + + `block_start_string` + The string marking the beginning of a block. Defaults to ``'{%'``. + + `block_end_string` + The string marking the end of a block. Defaults to ``'%}'``. + + `variable_start_string` + The string marking the beginning of a print statement. + Defaults to ``'{{'``. + + `variable_end_string` + The string marking the end of a print statement. Defaults to + ``'}}'``. + + `comment_start_string` + The string marking the beginning of a comment. Defaults to ``'{#'``. + + `comment_end_string` + The string marking the end of a comment. Defaults to ``'#}'``. + + `line_statement_prefix` + If given and a string, this will be used as prefix for line based + statements. See also :ref:`line-statements`. + + `line_comment_prefix` + If given and a string, this will be used as prefix for line based + comments. See also :ref:`line-statements`. + + .. versionadded:: 2.2 + + `trim_blocks` + If this is set to ``True`` the first newline after a block is + removed (block, not variable tag!). Defaults to `False`. + + `lstrip_blocks` + If this is set to ``True`` leading spaces and tabs are stripped + from the start of a line to a block. Defaults to `False`. + + `newline_sequence` + The sequence that starts a newline. Must be one of ``'\r'``, + ``'\n'`` or ``'\r\n'``. The default is ``'\n'`` which is a + useful default for Linux and OS X systems as well as web + applications. + + `keep_trailing_newline` + Preserve the trailing newline when rendering templates. + The default is ``False``, which causes a single newline, + if present, to be stripped from the end of the template. + + .. versionadded:: 2.7 + + `extensions` + List of Jinja extensions to use. This can either be import paths + as strings or extension classes. For more information have a + look at :ref:`the extensions documentation `. + + `optimized` + should the optimizer be enabled? Default is ``True``. + + `undefined` + :class:`Undefined` or a subclass of it that is used to represent + undefined values in the template. + + `finalize` + A callable that can be used to process the result of a variable + expression before it is output. For example one can convert + ``None`` implicitly into an empty string here. + + `autoescape` + If set to ``True`` the XML/HTML autoescaping feature is enabled by + default. For more details about autoescaping see + :class:`~markupsafe.Markup`. As of Jinja 2.4 this can also + be a callable that is passed the template name and has to + return ``True`` or ``False`` depending on autoescape should be + enabled by default. + + .. versionchanged:: 2.4 + `autoescape` can now be a function + + `loader` + The template loader for this environment. + + `cache_size` + The size of the cache. 
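The whitespace options documented here interact: `lstrip_blocks` eats indentation before a block tag, while `trim_blocks` eats the newline after one. A sketch (the exact outputs assume the default `keep_trailing_newline=False`):

```python
from jinja2 import Environment

src = "  {% if true %}\n    hi\n  {% endif %}\n"

print(repr(Environment().from_string(src).render()))
# -> '  \n    hi\n  '
print(repr(Environment(trim_blocks=True, lstrip_blocks=True)
           .from_string(src).render()))
# -> '    hi\n'
```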
Per default this is ``400`` which means + that if more than 400 templates are loaded the loader will clean + out the least recently used template. If the cache size is set to + ``0`` templates are recompiled all the time, if the cache size is + ``-1`` the cache will not be cleaned. + + .. versionchanged:: 2.8 + The cache size was increased to 400 from a low 50. + + `auto_reload` + Some loaders load templates from locations where the template + sources may change (ie: file system or database). If + ``auto_reload`` is set to ``True`` (default) every time a template is + requested the loader checks if the source changed and if yes, it + will reload the template. For higher performance it's possible to + disable that. + + `bytecode_cache` + If set to a bytecode cache object, this object will provide a + cache for the internal Jinja bytecode so that templates don't + have to be parsed if they were not changed. + + See :ref:`bytecode-cache` for more information. + + `enable_async` + If set to true this enables async template execution which + allows using async functions and generators. + """ + + #: if this environment is sandboxed. Modifying this variable won't make + #: the environment sandboxed though. For a real sandboxed environment + #: have a look at jinja2.sandbox. This flag alone controls the code + #: generation by the compiler. + sandboxed = False + + #: True if the environment is just an overlay + overlayed = False + + #: the environment this environment is linked to if it is an overlay + linked_to: t.Optional["Environment"] = None + + #: shared environments have this set to `True`. A shared environment + #: must not be modified + shared = False + + #: the class that is used for code generation. See + #: :class:`~jinja2.compiler.CodeGenerator` for more information. + code_generator_class: t.Type["CodeGenerator"] = CodeGenerator + + concat = "".join + + #: the context class that is used for templates. See + #: :class:`~jinja2.runtime.Context` for more information. + context_class: t.Type[Context] = Context + + template_class: t.Type["Template"] + + def __init__( + self, + block_start_string: str = BLOCK_START_STRING, + block_end_string: str = BLOCK_END_STRING, + variable_start_string: str = VARIABLE_START_STRING, + variable_end_string: str = VARIABLE_END_STRING, + comment_start_string: str = COMMENT_START_STRING, + comment_end_string: str = COMMENT_END_STRING, + line_statement_prefix: t.Optional[str] = LINE_STATEMENT_PREFIX, + line_comment_prefix: t.Optional[str] = LINE_COMMENT_PREFIX, + trim_blocks: bool = TRIM_BLOCKS, + lstrip_blocks: bool = LSTRIP_BLOCKS, + newline_sequence: "te.Literal['\\n', '\\r\\n', '\\r']" = NEWLINE_SEQUENCE, + keep_trailing_newline: bool = KEEP_TRAILING_NEWLINE, + extensions: t.Sequence[t.Union[str, t.Type["Extension"]]] = (), + optimized: bool = True, + undefined: t.Type[Undefined] = Undefined, + finalize: t.Optional[t.Callable[..., t.Any]] = None, + autoescape: t.Union[bool, t.Callable[[t.Optional[str]], bool]] = False, + loader: t.Optional["BaseLoader"] = None, + cache_size: int = 400, + auto_reload: bool = True, + bytecode_cache: t.Optional["BytecodeCache"] = None, + enable_async: bool = False, + ): + # !!Important notice!! + # The constructor accepts quite a few arguments that should be + # passed by keyword rather than position. 
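`bytecode_cache` and `auto_reload` are the two knobs for deploy-time performance; `FileSystemBytecodeCache` is the stock cache implementation. A sketch with hypothetical paths:

```python
from jinja2 import Environment, FileSystemBytecodeCache, FileSystemLoader

env = Environment(
    loader=FileSystemLoader("templates"),                # hypothetical dir
    bytecode_cache=FileSystemBytecodeCache("/tmp/jinja_cache"),
    auto_reload=False,  # skip mtime checks when templates never change
)
```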
However it's important to + # not change the order of arguments because it's used at least + # internally in those cases: + # - spontaneous environments (i18n extension and Template) + # - unittests + # If parameter changes are required only add parameters at the end + # and don't change the arguments (or the defaults!) of the arguments + # existing already. + + # lexer / parser information + self.block_start_string = block_start_string + self.block_end_string = block_end_string + self.variable_start_string = variable_start_string + self.variable_end_string = variable_end_string + self.comment_start_string = comment_start_string + self.comment_end_string = comment_end_string + self.line_statement_prefix = line_statement_prefix + self.line_comment_prefix = line_comment_prefix + self.trim_blocks = trim_blocks + self.lstrip_blocks = lstrip_blocks + self.newline_sequence = newline_sequence + self.keep_trailing_newline = keep_trailing_newline + + # runtime information + self.undefined: t.Type[Undefined] = undefined + self.optimized = optimized + self.finalize = finalize + self.autoescape = autoescape + + # defaults + self.filters = DEFAULT_FILTERS.copy() + self.tests = DEFAULT_TESTS.copy() + self.globals = DEFAULT_NAMESPACE.copy() + + # set the loader provided + self.loader = loader + self.cache = create_cache(cache_size) + self.bytecode_cache = bytecode_cache + self.auto_reload = auto_reload + + # configurable policies + self.policies = DEFAULT_POLICIES.copy() + + # load extensions + self.extensions = load_extensions(self, extensions) + + self.is_async = enable_async + _environment_config_check(self) + + def add_extension(self, extension: t.Union[str, t.Type["Extension"]]) -> None: + """Adds an extension after the environment was created. + + .. versionadded:: 2.5 + """ + self.extensions.update(load_extensions(self, [extension])) + + def extend(self, **attributes: t.Any) -> None: + """Add the items to the instance of the environment if they do not exist + yet. This is used by :ref:`extensions ` to register + callbacks and configuration values without breaking inheritance. + """ + for key, value in attributes.items(): + if not hasattr(self, key): + setattr(self, key, value) + + def overlay( + self, + block_start_string: str = missing, + block_end_string: str = missing, + variable_start_string: str = missing, + variable_end_string: str = missing, + comment_start_string: str = missing, + comment_end_string: str = missing, + line_statement_prefix: t.Optional[str] = missing, + line_comment_prefix: t.Optional[str] = missing, + trim_blocks: bool = missing, + lstrip_blocks: bool = missing, + newline_sequence: "te.Literal['\\n', '\\r\\n', '\\r']" = missing, + keep_trailing_newline: bool = missing, + extensions: t.Sequence[t.Union[str, t.Type["Extension"]]] = missing, + optimized: bool = missing, + undefined: t.Type[Undefined] = missing, + finalize: t.Optional[t.Callable[..., t.Any]] = missing, + autoescape: t.Union[bool, t.Callable[[t.Optional[str]], bool]] = missing, + loader: t.Optional["BaseLoader"] = missing, + cache_size: int = missing, + auto_reload: bool = missing, + bytecode_cache: t.Optional["BytecodeCache"] = missing, + enable_async: bool = missing, + ) -> "te.Self": + """Create a new overlay environment that shares all the data with the + current environment except for cache and the overridden attributes. + Extensions cannot be removed for an overlayed environment. 
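For orientation, the `overlay` method whose signature appears above is typically used like this (a sketch; the filter name is made up):

```python
from jinja2 import Environment

base = Environment()
base.filters["shout"] = lambda s: f"{s.upper()}!"  # hypothetical filter

# Shares filters/globals/loader with `base`, overrides only the options given.
mail = base.overlay(trim_blocks=True, lstrip_blocks=True)
print(mail.from_string("{{ 'hi' | shout }}").render())  # -> HI!
```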
An overlayed + environment automatically gets all the extensions of the environment it + is linked to plus optional extra extensions. + + Creating overlays should happen after the initial environment was set + up completely. Not all attributes are truly linked, some are just + copied over so modifications on the original environment may not shine + through. + + .. versionchanged:: 3.1.5 + ``enable_async`` is applied correctly. + + .. versionchanged:: 3.1.2 + Added the ``newline_sequence``, ``keep_trailing_newline``, + and ``enable_async`` parameters to match ``__init__``. + """ + args = dict(locals()) + del args["self"], args["cache_size"], args["extensions"], args["enable_async"] + + rv = object.__new__(self.__class__) + rv.__dict__.update(self.__dict__) + rv.overlayed = True + rv.linked_to = self + + for key, value in args.items(): + if value is not missing: + setattr(rv, key, value) + + if cache_size is not missing: + rv.cache = create_cache(cache_size) + else: + rv.cache = copy_cache(self.cache) + + rv.extensions = {} + for key, value in self.extensions.items(): + rv.extensions[key] = value.bind(rv) + if extensions is not missing: + rv.extensions.update(load_extensions(rv, extensions)) + + if enable_async is not missing: + rv.is_async = enable_async + + return _environment_config_check(rv) + + @property + def lexer(self) -> Lexer: + """The lexer for this environment.""" + return get_lexer(self) + + def iter_extensions(self) -> t.Iterator["Extension"]: + """Iterates over the extensions by priority.""" + return iter(sorted(self.extensions.values(), key=lambda x: x.priority)) + + def getitem( + self, obj: t.Any, argument: t.Union[str, t.Any] + ) -> t.Union[t.Any, Undefined]: + """Get an item or attribute of an object but prefer the item.""" + try: + return obj[argument] + except (AttributeError, TypeError, LookupError): + if isinstance(argument, str): + try: + attr = str(argument) + except Exception: + pass + else: + try: + return getattr(obj, attr) + except AttributeError: + pass + return self.undefined(obj=obj, name=argument) + + def getattr(self, obj: t.Any, attribute: str) -> t.Any: + """Get an item or attribute of an object but prefer the attribute. + Unlike :meth:`getitem` the attribute *must* be a string. + """ + try: + return getattr(obj, attribute) + except AttributeError: + pass + try: + return obj[attribute] + except (TypeError, LookupError, AttributeError): + return self.undefined(obj=obj, name=attribute) + + def _filter_test_common( + self, + name: t.Union[str, Undefined], + value: t.Any, + args: t.Optional[t.Sequence[t.Any]], + kwargs: t.Optional[t.Mapping[str, t.Any]], + context: t.Optional[Context], + eval_ctx: t.Optional[EvalContext], + is_filter: bool, + ) -> t.Any: + if is_filter: + env_map = self.filters + type_name = "filter" + else: + env_map = self.tests + type_name = "test" + + func = env_map.get(name) # type: ignore + + if func is None: + msg = f"No {type_name} named {name!r}." + + if isinstance(name, Undefined): + try: + name._fail_with_undefined_error() + except Exception as e: + msg = f"{msg} ({e}; did you forget to quote the callable name?)" + + raise TemplateRuntimeError(msg) + + args = [value, *(args if args is not None else ())] + kwargs = kwargs if kwargs is not None else {} + pass_arg = _PassArg.from_obj(func) + + if pass_arg is _PassArg.context: + if context is None: + raise TemplateRuntimeError( + f"Attempted to invoke a context {type_name} without context." 
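`getitem` and `getattr` above are what subscript and dotted access compile to, with opposite fallback order. The practical consequence:

```python
from jinja2 import Environment

env = Environment()
d = {"items": "the value"}

# {{ d.items }} tries the attribute first, so dict.items (a method) wins.
# {{ d['items'] }} tries the item first, so the key wins:
print(env.from_string("{{ d['items'] }}").render(d=d))  # -> the value
```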
+ ) + + args.insert(0, context) + elif pass_arg is _PassArg.eval_context: + if eval_ctx is None: + if context is not None: + eval_ctx = context.eval_ctx + else: + eval_ctx = EvalContext(self) + + args.insert(0, eval_ctx) + elif pass_arg is _PassArg.environment: + args.insert(0, self) + + return func(*args, **kwargs) + + def call_filter( + self, + name: str, + value: t.Any, + args: t.Optional[t.Sequence[t.Any]] = None, + kwargs: t.Optional[t.Mapping[str, t.Any]] = None, + context: t.Optional[Context] = None, + eval_ctx: t.Optional[EvalContext] = None, + ) -> t.Any: + """Invoke a filter on a value the same way the compiler does. + + This might return a coroutine if the filter is running from an + environment in async mode and the filter supports async + execution. It's your responsibility to await this if needed. + + .. versionadded:: 2.7 + """ + return self._filter_test_common( + name, value, args, kwargs, context, eval_ctx, True + ) + + def call_test( + self, + name: str, + value: t.Any, + args: t.Optional[t.Sequence[t.Any]] = None, + kwargs: t.Optional[t.Mapping[str, t.Any]] = None, + context: t.Optional[Context] = None, + eval_ctx: t.Optional[EvalContext] = None, + ) -> t.Any: + """Invoke a test on a value the same way the compiler does. + + This might return a coroutine if the test is running from an + environment in async mode and the test supports async execution. + It's your responsibility to await this if needed. + + .. versionchanged:: 3.0 + Tests support ``@pass_context``, etc. decorators. Added + the ``context`` and ``eval_ctx`` parameters. + + .. versionadded:: 2.7 + """ + return self._filter_test_common( + name, value, args, kwargs, context, eval_ctx, False + ) + + @internalcode + def parse( + self, + source: str, + name: t.Optional[str] = None, + filename: t.Optional[str] = None, + ) -> nodes.Template: + """Parse the sourcecode and return the abstract syntax tree. This + tree of nodes is used by the compiler to convert the template into + executable source- or bytecode. This is useful for debugging or to + extract information from templates. + + If you are :ref:`developing Jinja extensions ` + this gives you a good overview of the node tree generated. + """ + try: + return self._parse(source, name, filename) + except TemplateSyntaxError: + self.handle_exception(source=source) + + def _parse( + self, source: str, name: t.Optional[str], filename: t.Optional[str] + ) -> nodes.Template: + """Internal parsing function used by `parse` and `compile`.""" + return Parser(self, source, name, filename).parse() + + def lex( + self, + source: str, + name: t.Optional[str] = None, + filename: t.Optional[str] = None, + ) -> t.Iterator[t.Tuple[int, str, str]]: + """Lex the given sourcecode and return a generator that yields + tokens as tuples in the form ``(lineno, token_type, value)``. + This can be useful for :ref:`extension development ` + and debugging templates. + + This does not perform preprocessing. If you want the preprocessing + of the extensions to be applied you have to filter source through + the :meth:`preprocess` method. + """ + source = str(source) + try: + return self.lexer.tokeniter(source, name, filename) + except TemplateSyntaxError: + self.handle_exception(source=source) + + def preprocess( + self, + source: str, + name: t.Optional[str] = None, + filename: t.Optional[str] = None, + ) -> str: + """Preprocesses the source with all extensions. 
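`call_filter`, `call_test`, `parse`, and `lex` expose this machinery directly, which is useful in tests and extension development. For example:

```python
from jinja2 import Environment

env = Environment()

print(env.call_filter("upper", "hello"))   # -> HELLO
print(env.call_test("defined", 42))        # -> True

ast = env.parse("{{ a + 1 }}")             # a nodes.Template AST
for lineno, token_type, value in env.lex("{{ a }}"):
    print(lineno, token_type, value)
```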
This is automatically + called for all parsing and compiling methods but *not* for :meth:`lex` + because there you usually only want the actual source tokenized. + """ + return reduce( + lambda s, e: e.preprocess(s, name, filename), + self.iter_extensions(), + str(source), + ) + + def _tokenize( + self, + source: str, + name: t.Optional[str], + filename: t.Optional[str] = None, + state: t.Optional[str] = None, + ) -> TokenStream: + """Called by the parser to do the preprocessing and filtering + for all the extensions. Returns a :class:`~jinja2.lexer.TokenStream`. + """ + source = self.preprocess(source, name, filename) + stream = self.lexer.tokenize(source, name, filename, state) + + for ext in self.iter_extensions(): + stream = ext.filter_stream(stream) # type: ignore + + if not isinstance(stream, TokenStream): + stream = TokenStream(stream, name, filename) + + return stream + + def _generate( + self, + source: nodes.Template, + name: t.Optional[str], + filename: t.Optional[str], + defer_init: bool = False, + ) -> str: + """Internal hook that can be overridden to hook a different generate + method in. + + .. versionadded:: 2.5 + """ + return generate( # type: ignore + source, + self, + name, + filename, + defer_init=defer_init, + optimized=self.optimized, + ) + + def _compile(self, source: str, filename: str) -> CodeType: + """Internal hook that can be overridden to hook a different compile + method in. + + .. versionadded:: 2.5 + """ + return compile(source, filename, "exec") + + @typing.overload + def compile( + self, + source: t.Union[str, nodes.Template], + name: t.Optional[str] = None, + filename: t.Optional[str] = None, + raw: "te.Literal[False]" = False, + defer_init: bool = False, + ) -> CodeType: ... + + @typing.overload + def compile( + self, + source: t.Union[str, nodes.Template], + name: t.Optional[str] = None, + filename: t.Optional[str] = None, + raw: "te.Literal[True]" = ..., + defer_init: bool = False, + ) -> str: ... + + @internalcode + def compile( + self, + source: t.Union[str, nodes.Template], + name: t.Optional[str] = None, + filename: t.Optional[str] = None, + raw: bool = False, + defer_init: bool = False, + ) -> t.Union[str, CodeType]: + """Compile a node or template source code. The `name` parameter is + the load name of the template after it was joined using + :meth:`join_path` if necessary, not the filename on the file system. + the `filename` parameter is the estimated filename of the template on + the file system. If the template came from a database or memory this + can be omitted. + + The return value of this method is a python code object. If the `raw` + parameter is `True` the return value will be a string with python + code equivalent to the bytecode returned otherwise. This method is + mainly used internally. + + `defer_init` is use internally to aid the module code generator. This + causes the generated code to be able to import without the global + environment variable to be set. + + .. versionadded:: 2.4 + `defer_init` parameter added. + """ + source_hint = None + try: + if isinstance(source, str): + source_hint = source + source = self._parse(source, name, filename) + source = self._generate(source, name, filename, defer_init=defer_init) + if raw: + return source + if filename is None: + filename = "
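`compile(..., raw=True)` is the easiest way to inspect what the `CodeGenerator` earlier in this diff emits for a given template:

```python
from jinja2 import Environment

env = Environment()
# raw=True returns the generated Python source rather than a code object.
print(env.compile("Hello {{ name }}!", raw=True))
```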