Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.

**Error code:** `DatasetGenerationCastError`

**Exception:** `DatasetGenerationCastError`

**Message:** An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 3 new columns (`maintainer`, `download_url`, `maintainer_email`) and 3 missing columns (`keywords`, `requires_python`, `license_expression`). This happened while the json dataset builder was generating data using hf://datasets/labofsahil/pypi-packages-metadata-dataset/pypi-packages-metadata-000000000001.json (at revision 87a8040060ae01aeffc4f4c7b1feeaa1cc8e6ca9). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

**Traceback:**

    Traceback (most recent call last):
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
        writer.write_table(table)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
        pa_table = table_cast(pa_table, self._schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
        return cast_table_to_schema(table, schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
        raise CastError(
    datasets.table.CastError: Couldn't cast
    metadata_version: string
    name: string
    version: string
    summary: string
    description: string
    author: string
    author_email: string
    license: string
    classifiers: list<item: string>
      child 0, item: string
    platform: list<item: string>
      child 0, item: string
    home_page: string
    requires: list<item: string>
      child 0, item: string
    provides: list<item: string>
      child 0, item: string
    obsoletes: list<item: null>
      child 0, item: null
    requires_dist: list<item: string>
      child 0, item: string
    provides_dist: list<item: null>
      child 0, item: null
    obsoletes_dist: list<item: null>
      child 0, item: null
    requires_external: list<item: null>
      child 0, item: null
    project_urls: list<item: string>
      child 0, item: string
    uploaded_via: string
    upload_time: string
    filename: string
    size: string
    path: string
    python_version: string
    packagetype: string
    has_signature: bool
    md5_digest: string
    sha256_digest: string
    blake2_256_digest: string
    license_files: list<item: null>
      child 0, item: null
    description_content_type: string
    download_url: string
    maintainer: string
    maintainer_email: string
    to
    {'metadata_version': Value(dtype='string', id=None),
     'name': Value(dtype='string', id=None),
     'version': Value(dtype='string', id=None),
     'summary': Value(dtype='string', id=None),
     'description': Value(dtype='string', id=None),
     'description_content_type': Value(dtype='string', id=None),
     'author_email': Value(dtype='string', id=None),
     'license': Value(dtype='string', id=None),
     'keywords': Value(dtype='string', id=None),
     'classifiers': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
     'platform': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None),
     'requires_python': Value(dtype='string', id=None),
     'requires': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None),
     'provides': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
     'obsoletes': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None),
     'requires_dist': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
     'provides_dist': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None),
     'obsoletes_dist': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None),
     'requires_external': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None),
     'project_urls': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
     'uploaded_via': Value(dtype='string', id=None),
     'upload_time': Value(dtype='string', id=None),
     'filename': Value(dtype='string', id=None),
     'size': Value(dtype='string', id=None),
     'path': Value(dtype='string', id=None),
     'python_version': Value(dtype='string', id=None),
     'packagetype': Value(dtype='string', id=None),
     'has_signature': Value(dtype='bool', id=None),
     'md5_digest': Value(dtype='string', id=None),
     'sha256_digest': Value(dtype='string', id=None),
     'blake2_256_digest': Value(dtype='string', id=None),
     'license_files': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
     'author': Value(dtype='string', id=None),
     'home_page': Value(dtype='string', id=None),
     'license_expression': Value(dtype='string', id=None)}
    because column names don't match

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1433, in compute_config_parquet_and_info_response
        parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 989, in stream_convert_to_parquet
        builder._prepare_split(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
        for job_id, done, content in self._prepare_split_single(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
        raise DatasetGenerationCastError.from_cast_error(
    datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
    All the data files must have the same columns, but at some point there are 3 new columns ({'maintainer', 'download_url', 'maintainer_email'}) and 3 missing columns ({'keywords', 'requires_python', 'license_expression'}).
    This happened while the json dataset builder was generating data using hf://datasets/labofsahil/pypi-packages-metadata-dataset/pypi-packages-metadata-000000000001.json (at revision 87a8040060ae01aeffc4f4c7b1feeaa1cc8e6ca9)
    Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
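As the error suggests, files with different schemas can be exposed as separate configurations declared in the dataset card's YAML front matter. A minimal sketch of such a configuration, assuming the shard with the original schema is named `pypi-packages-metadata-000000000000.json` (only the `...000000000001.json` name appears in the error; the other file name and both config names are assumptions):

```yaml
configs:
- config_name: part_0
  data_files: "pypi-packages-metadata-000000000000.json"  # assumed file name
- config_name: part_1
  data_files: "pypi-packages-metadata-000000000001.json"  # file named in the error
```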
metadata_version (string) | name (string) | version (string) | summary (string) | description (string) | description_content_type (string) | author_email (string) | license (string) | keywords (null) | classifiers (sequence) | platform (sequence) | requires_python (string) | requires (sequence) | provides (sequence) | obsoletes (sequence) | requires_dist (sequence) | provides_dist (sequence) | obsoletes_dist (sequence) | requires_external (sequence) | project_urls (sequence) | uploaded_via (string) | upload_time (string) | filename (string) | size (string) | path (string) | python_version (string) | packagetype (string) | has_signature (bool) | md5_digest (string) | sha256_digest (string) | blake2_256_digest (string) | license_files (sequence) | author (string) | home_page (string) | license_expression (null)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | falgueras | 1.1.6 | Common code for Python projects involving GCP, Pandas, and Spark. |
# Falgueras 🪴
[](https://pypi.org/project/falgueras/)
Development framework for Python projects involving GCP, Pandas, and Spark.
The main goal is to accelerate development of data-driven projects by offering a unified framework
for developers with different backgrounds, from software and data engineers to data scientists.
## Set up
Base package: `pip install falgueras` (requires Python >= 3.10)
PySpark dependencies: `pip install falgueras[spark]` (PySpark 3.5.2)
PySpark libraries are optional to keep the package lightweight, and because in most cases
they are already provided by the environment. If you don't use the falgueras PySpark
dependencies, keep in mind that the versions of the numpy, pandas, and pyarrow packages were
tested against PySpark 3.5.2; behavior with other versions may change.
### Run Spark 3.5.2 applications locally on Windows from IntelliJ
_try fast fail fast learn fast_
For local Spark execution on Windows, the following environment variables must be set appropriately:
- SPARK_HOME: points to spark-3.5.2-bin-hadoop3.
- HADOOP_HOME: same value as SPARK_HOME.
- JAVA_HOME: Java SDK 11 is recommended.
- PATH += %HADOOP_HOME%\bin, %JAVA_HOME%\bin.
%HADOOP_HOME%\bin must contain the files winutils.exe and hadoop.dll, which can be downloaded from
[here](https://github.com/kontext-tech/winutils/blob/master/hadoop-3.3.0/bin).
Additionally, call `findspark.init()` at the beginning of the script to set the required
environment variables and add the Spark dependencies to sys.path.
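A minimal sketch of such a local entry point (the app name and smoke-test query are illustrative):

```python
import findspark

findspark.init()  # must run before any pyspark import; reads SPARK_HOME

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("local-dev").getOrCreate()
print(spark.range(5).count())  # quick smoke test of the local session
```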
### Connect to BigQuery from Spark
As shown in `spark_session_utils.py`, the SparkSession must include the jar
`com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.41.1`
in order to communicate with BigQuery.
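A sketch of building such a session with the plain PySpark API (falgueras' own helper may differ, and the table name is a placeholder):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("bq-example")
    .config(
        "spark.jars.packages",
        "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.41.1",
    )
    .getOrCreate()
)

# Read a BigQuery table through the connector.
df = spark.read.format("bigquery").option("table", "my-project.my_dataset.my_table").load()
```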
## Packages
### `falgueras.common`
Shared code used by the other packages, plus utility functions: datetime, json, enums, logging.
### `falgueras.gcp`
The functionalities of various Google Cloud Platform (GCP) services are encapsulated within
custom client classes. This approach enhances clarity and promotes better encapsulation.
The file `services_toolbox.py` contains standalone functions for interacting with GCP services. If a service
needs more than one function, those functions must be grouped into a custom client class.
For instance, Google Cloud Storage (GCS) operations are wrapped in the `gcp.GcsClient` class,
which has an attribute that holds the actual `storage.Client` object from GCS. Multiple `GcsClient`
instances can share the same `storage.Client` object.
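An illustrative sketch of that wrapper pattern; the method shown here is hypothetical, not falgueras' actual API:

```python
from google.cloud import storage


class GcsClient:
    """Thin wrapper around google-cloud-storage's Client (illustrative only)."""

    def __init__(self, client: storage.Client | None = None) -> None:
        self.client = client or storage.Client()

    def read_text(self, bucket: str, path: str) -> str:  # hypothetical helper
        return self.client.bucket(bucket).blob(path).download_as_text()


shared = storage.Client()
raw = GcsClient(shared)
curated = GcsClient(shared)  # both wrappers share one storage.Client
```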
### `falgueras.pandas`
Pandas related code.
The pandas_repo.py file provides a modular and extensible framework for handling pandas DataFrame operations
across various storage systems. Using the `PandasRepo` abstract base class and `PandasRepoProtocol`,
it standardizes read and write operations while enabling custom implementations for specific backends
such as BigQuery (`BqPandasRepo`). These implementations encapsulate backend-specific logic, allowing
users to interact with data sources using a consistent interface.
`BqPandasRepo` uses `gcp.BqClient` to interact with BigQuery.
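A condensed sketch of the repo pattern described above; the `BqClient` method names are hypothetical:

```python
from abc import ABC, abstractmethod

import pandas as pd


class PandasRepo(ABC):
    """Illustrative shape of the abstract repo; real signatures may differ."""

    @abstractmethod
    def read(self, source: str) -> pd.DataFrame: ...

    @abstractmethod
    def write(self, df: pd.DataFrame, target: str) -> None: ...


class BqPandasRepo(PandasRepo):
    def __init__(self, bq_client) -> None:
        self.bq = bq_client  # gcp.BqClient in the real package

    def read(self, source: str) -> pd.DataFrame:
        return self.bq.query_to_dataframe(f"SELECT * FROM `{source}`")  # hypothetical

    def write(self, df: pd.DataFrame, target: str) -> None:
        self.bq.load_dataframe(df, target)  # hypothetical
```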
### `falgueras.spark`
Spark related code.
In the same way as the pandas_repo.py file, the spark_repo.py file provides a modular and extensible
framework for handling Spark DataFrame operations across various storage systems. Using the `SparkRepo` abstract base
class and `SparkRepoProtocol`, it standardizes read and write operations while enabling custom implementations for
specific backends such as BigQuery (`BqSparkRepo`). These implementations encapsulate backend-specific logic, allowing
users to interact with data sources using a consistent interface.
In contrast to `BqPandasRepo`, `BqSparkRepo` uses the connectors
gcs-connector-hadoop3 and spark-bigquery-with-dependencies to interact with BigQuery. | text/markdown | Aleix Falgueras Casals <falguerasaleix@gmail.com> | MIT | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | >=3.10 | [] | [] | [] | [
"colorama~=0.4.6",
"db-dtypes~=1.3.1",
"google-api-core~=2.24.0",
"google-api-python-client~=2.156.0",
"google-auth~=2.37.0",
"google-cloud-bigquery-storage~=2.27.0",
"google-cloud-bigquery~=3.27.0",
"google-cloud-language~=2.16.0",
"google-cloud-secret-manager~=2.22.0",
"google-cloud-storage~=2.19.0",
"google-cloud-translate~=3.20.1",
"numpy==1.26.4",
"pandas==2.1.4",
"protobuf~=5.29.2",
"pytz~=2024.2",
"requests~=2.32.3",
"findspark==1.4.2; extra == \"spark\"",
"pyarrow==12.0.1; extra == \"spark\"",
"pyspark==3.5.2; extra == \"spark\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.16 | 2025-02-22 18:39:06.911874 UTC | falgueras-1.1.6-py3-none-any.whl | 28381 | c2/f1/895e7121207c32fc0ca9bf290de7efa6422fe96d6b16382a6233526eaecd/falgueras-1.1.6-py3-none-any.whl | py3 | bdist_wheel | false | 66c2e94e6ac8ce8473489922e2f0dee6 | 8d9362ac52dc5be22f76032646b38ec974f68033ed5470c755485dfa723a3faf | c2f1895e7121207c32fc0ca9bf290de7efa6422fe96d6b16382a6233526eaecd | [
"LICENSE"
] | null | null | null |
2.4 | falgueras | 1.1.6 | Common code for Python projects involving GCP, Pandas, and Spark. |
# Falgueras 🪴
[](https://pypi.org/project/falgueras/)
Development framework for Python projects involving GCP, Pandas, and Spark.
The main goal is to accelerate development of data-driven projects by offering a unified framework
for developers with different backgrounds, from software and data engineers to data scientists.
## Set up
Base package: `pip install falgueras` (requires Python >= 3.10)
PySpark dependencies: `pip install falgueras[spark]` (PySpark 3.5.2)
PySpark libraries are optional to keep the package lightweight, and because in most cases
they are already provided by the environment. If you don't use the falgueras PySpark
dependencies, keep in mind that the versions of the numpy, pandas, and pyarrow packages were
tested against PySpark 3.5.2; behavior with other versions may change.
### Run Spark 3.5.2 applications locally on Windows from IntelliJ
_try fast fail fast learn fast_
For local Spark execution on Windows, the following environment variables must be set appropriately:
- SPARK_HOME: points to spark-3.5.2-bin-hadoop3.
- HADOOP_HOME: same value as SPARK_HOME.
- JAVA_HOME: Java SDK 11 is recommended.
- PATH += %HADOOP_HOME%\bin, %JAVA_HOME%\bin.
%HADOOP_HOME%\bin must contain the files winutils.exe and hadoop.dll, which can be downloaded from
[here](https://github.com/kontext-tech/winutils/blob/master/hadoop-3.3.0/bin).
Additionally, call `findspark.init()` at the beginning of the script to set the required
environment variables and add the Spark dependencies to sys.path.
### Connect to BigQuery from Spark
As shown in `spark_session_utils.py`, the SparkSession must include the jar
`com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.41.1`
in order to communicate with BigQuery.
## Packages
### `falgueras.common`
Shared code used by the other packages, plus utility functions: datetime, json, enums, logging.
### `falgueras.gcp`
The functionalities of various Google Cloud Platform (GCP) services are encapsulated within
custom client classes. This approach enhances clarity and promotes better encapsulation.
The file `services_toolbox.py` contains standalone functions for interacting with GCP services. If a service
needs more than one function, those functions must be grouped into a custom client class.
For instance, Google Cloud Storage (GCS) operations are wrapped in the `gcp.GcsClient` class,
which has an attribute that holds the actual `storage.Client` object from GCS. Multiple `GcsClient`
instances can share the same `storage.Client` object.
### `falgueras.pandas`
Pandas related code.
The pandas_repo.py file provides a modular and extensible framework for handling pandas DataFrame operations
across various storage systems. Using the `PandasRepo` abstract base class and `PandasRepoProtocol`,
it standardizes read and write operations while enabling custom implementations for specific backends
such as BigQuery (`BqPandasRepo`). These implementations encapsulate backend-specific logic, allowing
users to interact with data sources using a consistent interface.
`BqPandasRepo` uses `gcp.BqClient` to interact with BigQuery.
### `falgueras.spark`
Spark related code.
In the same way as the pandas_repo.py file, the spark_repo.py file provides a modular and extensible
framework for handling Spark DataFrame operations across various storage systems. Using the `SparkRepo` abstract base
class and `SparkRepoProtocol`, it standardizes read and write operations while enabling custom implementations for
specific backends such as BigQuery (`BqSparkRepo`). These implementations encapsulate backend-specific logic, allowing
users to interact with data sources using a consistent interface.
In contrast to `BqPandasRepo`, `BqSparkRepo` uses the connectors
gcs-connector-hadoop3 and spark-bigquery-with-dependencies to interact with BigQuery. | text/markdown | Aleix Falgueras Casals <falguerasaleix@gmail.com> | MIT | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | >=3.10 | [] | [] | [] | [
"colorama~=0.4.6",
"db-dtypes~=1.3.1",
"google-api-core~=2.24.0",
"google-api-python-client~=2.156.0",
"google-auth~=2.37.0",
"google-cloud-bigquery-storage~=2.27.0",
"google-cloud-bigquery~=3.27.0",
"google-cloud-language~=2.16.0",
"google-cloud-secret-manager~=2.22.0",
"google-cloud-storage~=2.19.0",
"google-cloud-translate~=3.20.1",
"numpy==1.26.4",
"pandas==2.1.4",
"protobuf~=5.29.2",
"pytz~=2024.2",
"requests~=2.32.3",
"findspark==1.4.2; extra == \"spark\"",
"pyarrow==12.0.1; extra == \"spark\"",
"pyspark==3.5.2; extra == \"spark\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.16 | 2025-02-22 18:39:07.813615 UTC | falgueras-1.1.6.tar.gz | 23561 | 90/bf/af7712733446ba3f68b79d975c2f9e0c0d39fd8b0f0fe8699a10bd4eb60f/falgueras-1.1.6.tar.gz | source | sdist | false | e3df8bd4ec9dd9d427027e16573a5357 | 26495e20ff0759b6cbc34789312dca0c9528cd7f12420d3dd96eb8057ba171c1 | 90bfaf7712733446ba3f68b79d975c2f9e0c0d39fd8b0f0fe8699a10bd4eb60f | [
"LICENSE"
] | null | null | null |
2.3 | chess-gen | 1.2.1 | Generate chess positions and practise on Lichess. | # Chess Gen
[](https://pypi.org/project/chess-gen/)
Generate chess positions and practise on Lichess.
The generated positions are random, which is different to Lichess' presets.
## Example
```text
$ chessg
╭────────────────────────── Piece Input ──────────────────────────╮ ╭──────── Commands ─────────╮
│ Generate chess positions and practise on Lichess.               │ │ h      Help               │
│                                                                 │ │ Enter  Use previous input │
│ Provide the symbols of the pieces to place on the board. White  │ │ Ctrl+D Quit               │
│ pieces are P, N, B, R, Q, black pieces are p, n, b, r, q. Kings │ ╰───────────────────────────╯
│ are automatically added and must not be part of the input.      │
│ You can separate piece symbols by commas and/or spaces.         │
│                                                                 │
│ Examples:                                                       │
│                                                                 │
│ Qr - queen against rook                                         │
│ R, p, p - rook against two pawns                                │
│ N B B q - knight and two bishops against a queen                │
│                                                                 │
╰─────────────────────────────────────────────────────────────────╯
Position: BN
. . . k . . . .
. . . . . . . .
. . . . . . . .
. N . . . . . .
. . . . . . . .
B . . . . . . .
. . . . K . . .
. . . . . . . .
https://lichess.org/?fen=3k4/8/8/1N6/8/B7/4K3/8%20w%20-%20-%200%201#ai
Position (enter = BN): ^D
Bye!
```
## Installation
```shell
pip install chess-gen
```
| text/markdown | null | null | null | [
"License :: OSI Approved :: MIT License"
] | [] | >=3.9 | [] | [
"chess_gen"
] | [] | [
"chess",
"rich",
"flit; extra == \"dev\"",
"mypy; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"Documentation, https://github.com/Stannislav/chess-gen",
"Home, https://github.com/Stannislav/chess-gen",
"Source, https://github.com/Stannislav/chess-gen"
] | python-requests/2.32.3 | 2025-02-22 18:39:13.735443 UTC | chess_gen-1.2.1-py3-none-any.whl | 5028 | ee/99/af632aa47d26d637b51208d7b7a370d2a3d94a4b05eacdc4d9a355d4556f/chess_gen-1.2.1-py3-none-any.whl | py3 | bdist_wheel | false | 441eb367e8f0aa1ad2cd9d7417a5fa89 | 79d2683af8f2c5c9ab212cd3051371766582d657b294a16ca71e3e61ebf8861e | ee99af632aa47d26d637b51208d7b7a370d2a3d94a4b05eacdc4d9a355d4556f | [] | Stanislav Schmidt | null | null |
2.3 | chess-gen | 1.2.1 | Generate chess positions and practise on Lichess. | # Chess Gen
[](https://pypi.org/project/chess-gen/)
Generate chess positions and practise on Lichess.
The generated positions are random, which is different to Lichess' presets.
## Example
```text
$ chessg
╭────────────────────────── Piece Input ──────────────────────────╮ ╭──────── Commands ─────────╮
│ Generate chess positions and practise on Lichess.               │ │ h      Help               │
│                                                                 │ │ Enter  Use previous input │
│ Provide the symbols of the pieces to place on the board. White  │ │ Ctrl+D Quit               │
│ pieces are P, N, B, R, Q, black pieces are p, n, b, r, q. Kings │ ╰───────────────────────────╯
│ are automatically added and must not be part of the input.      │
│ You can separate piece symbols by commas and/or spaces.         │
│                                                                 │
│ Examples:                                                       │
│                                                                 │
│ Qr - queen against rook                                         │
│ R, p, p - rook against two pawns                                │
│ N B B q - knight and two bishops against a queen                │
│                                                                 │
╰─────────────────────────────────────────────────────────────────╯
Position: BN
. . . k . . . .
. . . . . . . .
. . . . . . . .
. N . . . . . .
. . . . . . . .
B . . . . . . .
. . . . K . . .
. . . . . . . .
https://lichess.org/?fen=3k4/8/8/1N6/8/B7/4K3/8%20w%20-%20-%200%201#ai
Position (enter = BN): ^D
Bye!
```
## Installation
```shell
pip install chess-gen
```
| text/markdown | null | null | null | [
"License :: OSI Approved :: MIT License"
] | [] | >=3.9 | [] | [
"chess_gen"
] | [] | [
"chess",
"rich",
"flit; extra == \"dev\"",
"mypy; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"Documentation, https://github.com/Stannislav/chess-gen",
"Home, https://github.com/Stannislav/chess-gen",
"Source, https://github.com/Stannislav/chess-gen"
] | python-requests/2.32.3 | 2025-02-22 18:39:15.730531 UTC | chess_gen-1.2.1.tar.gz | 6330 | 22/78/e4a2d7ae973cbb5eeeb284241a7c142275401f88d6cd170aa1cd3e36b5dd/chess_gen-1.2.1.tar.gz | source | sdist | false | 3b1d8ec5de2ecad72e7564a2f03aa17d | 79ba8327b4c87a09e8bf43324b741e7e8f5b503d47fbdc713c4628829c327fde | 2278e4a2d7ae973cbb5eeeb284241a7c142275401f88d6cd170aa1cd3e36b5dd | [] | Stanislav Schmidt | null | null |
2.2 | shc-utilities | 0.0.1 | Utilities for working config and logger | null | null | onkarantad@gmail.com | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.11 | 2025-02-22 18:39:33.79128 UTC | shc_utilities-0.0.1-py3-none-any.whl | 3571 | aa/f2/128d4c688cd409db490b3420fb0c946afd137a079d2709bd5dba40f179cd/shc_utilities-0.0.1-py3-none-any.whl | py3 | bdist_wheel | false | 7c1c652bcb3e067139d1871a8456ae66 | eff8cc75dc266c480831f4e324b48042675516afb54c3fb934bc6f37eda596d4 | aaf2128d4c688cd409db490b3420fb0c946afd137a079d2709bd5dba40f179cd | [] | Onkar Antad | https://gitlab.com/straw-hat-crew/python-shc-lib | null |
2.2 | shc-utilities | 0.0.1 | Utilities for working config and logger | null | null | onkarantad@gmail.com | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.11 | 2025-02-22 18:39:35.420942 UTC | shc_utilities-0.0.1.tar.gz | 3490 | c4/c2/1ef105d379267f7dc62d88142197abccdb4e200f7f473545656d37a3c738/shc_utilities-0.0.1.tar.gz | source | sdist | false | dcfa188f41b4a4b0434b501697efbef8 | f180d24ba0ae6262c05d32532ed2bab5eac7ef66ab57b55265da7bf1c9e63bae | c4c21ef105d379267f7dc62d88142197abccdb4e200f7f473545656d37a3c738 | [] | Onkar Antad | https://gitlab.com/straw-hat-crew/python-shc-lib | null |
2.4 | proj-flow | 0.13.1 | C++ project maintenance, automated | # Project Flow
[](https://github.com/mzdun/proj-flow/actions)
[](https://pypi.python.org/pypi/proj-flow)
[](https://pypi.python.org/pypi/proj-flow)
**Project Flow** aims to be a one-stop tool for C++ projects, from creating a new
project, through building and verifying, all the way to publishing releases to
the repository. It will run a set of known steps and will happily consult your
project about what you want to call any subset of those steps.
Currently, it makes use of Conan for external dependencies, CMake presets
for config and build, and GitHub CLI for releases.
## Installation
To create a new project with _Project Flow_, first install it using pip:
```sh
(.venv) $ pip install proj-flow
```
Every project created with _Project Flow_ has a self-bootstrapping helper script,
which will install `proj-flow` if it is needed, either using the current virtual
environment or switching to a private virtual environment (created inside the
`.flow/.venv` directory). This is used by the GitHub workflow in the generated
projects through the `bootstrap` command.
On any platform, this command (and any other) may be called from the root of the
project with:
```sh
python .flow/flow.py bootstrap
```
From Bash with:
```sh
./flow bootstrap
```
From PowerShell with:
```sh
.\flow bootstrap
```
## Creating a project
A fresh C++ project can be created with:
```sh
proj-flow init cxx
```
This command will ask multiple questions to build Mustache context for the
project template. For more information, see [the documentation](https://proj-flow.readthedocs.io/en/latest/).
| text/markdown | Marcin Zdun <marcin.zdun@gmail.com> | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Topic :: Software Development :: Build Tools"
] | [] | >=3.10 | [] | [] | [] | [
"argcomplete",
"chevron2021",
"prompt-toolkit",
"pyyaml",
"toml"
] | [] | [] | [] | [
"Changelog, https://github.com/mzdun/proj-flow/blob/main/CHANGELOG.rst",
"Documentation, https://proj-flow.readthedocs.io/en/latest/",
"Homepage, https://pypi.org/project/proj-flow/",
"Source Code, https://github.com/mzdun/proj-flow"
] | twine/6.1.0 CPython/3.12.9 | 2025-02-22 18:40:19.238851 UTC | proj_flow-0.13.1-py3-none-any.whl | 162914 | 53/8b/e8b6a031df1de50286c35089a6458760696d69c7ab7b52bc48a22b23f136/proj_flow-0.13.1-py3-none-any.whl | py3 | bdist_wheel | false | 13fd304ce772d0b58a2f7fba2d914af1 | 735879e1268697ac98ae5a42e311bdbc0456f79a47978e35dbd3a9046aa3879c | 538be8b6a031df1de50286c35089a6458760696d69c7ab7b52bc48a22b23f136 | [
"LICENSE"
] | null | null | null |
2.2 | spssimage | 0.1.5 | A lightweight library for image creation and manipulation. | # spssimage
A lightweight Python library for creating and manipulating images, including generating star maps with dynamic effects like twinkling stars and gradients.
## Features
- Create static images with shapes, gradients, and colors.
- Generate animations (e.g., twinkling stars) and export them as GIFs.
- Apply radial gradients to simulate gas clouds.
- Lightweight and independent of external libraries like OpenCV or Pillow.
## Installation
Install the library using pip:
```bash
pip install spssimage
```
Usage:

    from spssimage.core import Canvas

    # Create a canvas
    canvas = Canvas(100, 100, background=(0, 0, 0))

    # Define pixel positions for twinkling
    pixel_positions = [(20, 20), (50, 50), (80, 80)]

    # Define the base color of the twinkling pixels
    base_color = (255, 255, 255)

    # Save the twinkling pixels as a GIF
    canvas.save_gif(pixel_positions, base_color, "twinkling_pixels.gif", frames=30, duration=100, loop=0)
---
### 2. Prepare for Deployment
#### Build the Package
Run the following commands to build the package:
```bash
# Ensure your virtual environment is active
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install build tools
pip install --upgrade build twine
# Build the package
python -m build
```
| text/markdown | sumedh.patil@aipresso.co.uk | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | >=3.6 | [] | [] | [] | [
"numpy",
"Pillow"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.9.12 | 2025-02-22 18:40:20.996012 UTC | spssimage-0.1.5-py3-none-any.whl | 4394 | 19/23/089014ce27e750e6e9316063c0505073c6958e52550988fc071712184df2/spssimage-0.1.5-py3-none-any.whl | py3 | bdist_wheel | false | 783f348ade7abd090d0070b332b0cfe0 | 94db95f9ce9782da393a55b542fe91b7cc974bd2b98a4c656f4f847f0c57918b | 1923089014ce27e750e6e9316063c0505073c6958e52550988fc071712184df2 | [] | Sumedh Patil | https://github.com/Sumedh1599/spssimage | null |
2.4 | proj-flow | 0.13.1 | C++ project maintenance, automated | # Project Flow
[](https://github.com/mzdun/proj-flow/actions)
[](https://pypi.python.org/pypi/proj-flow)
[](https://pypi.python.org/pypi/proj-flow)
**Project Flow** aims to be a one-stop tool for C++ projects, from creating a new
project, through building and verifying, all the way to publishing releases to
the repository. It will run a set of known steps and will happily consult your
project about what you want to call any subset of those steps.
Currently, it makes use of Conan for external dependencies, CMake presets
for config and build, and GitHub CLI for releases.
## Installation
To create a new project with _Project Flow_, first install it using pip:
```sh
(.venv) $ pip install proj-flow
```
Every project created with _Project Flow_ has a self-bootstrapping helper script,
which will install `proj-flow` if it is needed, either using the current virtual
environment or switching to a private virtual environment (created inside the
`.flow/.venv` directory). This is used by the GitHub workflow in the generated
projects through the `bootstrap` command.
On any platform, this command (and any other) may be called from the root of the
project with:
```sh
python .flow/flow.py bootstrap
```
From Bash with:
```sh
./flow bootstrap
```
From PowerShell with:
```sh
.\flow bootstrap
```
## Creating a project
A fresh C++ project can be created with:
```sh
proj-flow init cxx
```
This command will ask multiple questions to build Mustache context for the
project template. For more information, see [the documentation](https://proj-flow.readthedocs.io/en/latest/).
| text/markdown | Marcin Zdun <marcin.zdun@gmail.com> | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Topic :: Software Development :: Build Tools"
] | [] | >=3.10 | [] | [] | [] | [
"argcomplete",
"chevron2021",
"prompt-toolkit",
"pyyaml",
"toml"
] | [] | [] | [] | [
"Changelog, https://github.com/mzdun/proj-flow/blob/main/CHANGELOG.rst",
"Documentation, https://proj-flow.readthedocs.io/en/latest/",
"Homepage, https://pypi.org/project/proj-flow/",
"Source Code, https://github.com/mzdun/proj-flow"
] | twine/6.1.0 CPython/3.12.9 | 2025-02-22 18:40:20.869706 UTC | proj_flow-0.13.1.tar.gz | 120195 | ba/8d/26e18e929cd35ff339c94658fee7a86b517a7b5ad98b53e45dac91e73969/proj_flow-0.13.1.tar.gz | source | sdist | false | b31d1c33fc79ea7c89f0794262a7817e | b9b60704ce1a25bb6681d696dfc6534099500a498deb7d3236135e6065a1a8f2 | ba8d26e18e929cd35ff339c94658fee7a86b517a7b5ad98b53e45dac91e73969 | [
"LICENSE"
] | null | null | null |
2.2 | spssimage | 0.1.5 | A lightweight library for image creation and manipulation. | # spssimage
A lightweight Python library for creating and manipulating images, including generating star maps with dynamic effects like twinkling stars and gradients.
## Features
- Create static images with shapes, gradients, and colors.
- Generate animations (e.g., twinkling stars) and export them as GIFs.
- Apply radial gradients to simulate gas clouds.
- Lightweight and independent of external libraries like OpenCV or Pillow.
## Installation
Install the library using pip:
```bash
pip install spssimage
```
Usage:

    from spssimage.core import Canvas

    # Create a canvas
    canvas = Canvas(100, 100, background=(0, 0, 0))

    # Define pixel positions for twinkling
    pixel_positions = [(20, 20), (50, 50), (80, 80)]

    # Define the base color of the twinkling pixels
    base_color = (255, 255, 255)

    # Save the twinkling pixels as a GIF
    canvas.save_gif(pixel_positions, base_color, "twinkling_pixels.gif", frames=30, duration=100, loop=0)
---
### 2. Prepare for Deployment
#### Build the Package
Run the following commands to build the package:
```bash
# Ensure your virtual environment is active
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install build tools
pip install --upgrade build twine
# Build the package
python -m build
```
| text/markdown | sumedh.patil@aipresso.co.uk | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | >=3.6 | [] | [] | [] | [
"numpy",
"Pillow"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.9.12 | 2025-02-22 18:40:22.819434 UTC | spssimage-0.1.5.tar.gz | 4068 | a8/79/9359dce7fb48c0ad162ae5ffe5bd7a7966a9833a1c63d0e7d68ea73c44cd/spssimage-0.1.5.tar.gz | source | sdist | false | 9d9beb95f65a0c55b568acb482e44a2f | db06f8e0a2bc5f00e54f5ab079733a64d1abdacb638e443d94c866d52bb701e5 | a8799359dce7fb48c0ad162ae5ffe5bd7a7966a9833a1c63d0e7d68ea73c44cd | [] | Sumedh Patil | https://github.com/Sumedh1599/spssimage | null |
2.1 | textual | 2.1.1 | Modern Text User Interface framework |
[](https://discord.gg/Enf6Z3qhVr)
[](https://pypi.org/project/textual/)
[](https://badge.fury.io/py/textual)


# Textual
<img align="right" width="250" alt="clock" src="https://github.com/user-attachments/assets/63e839c3-5b8e-478d-b78e-cf7647eb85e8" />
Build cross-platform user interfaces with a simple Python API. Run your apps in the terminal *or* a web browser.
Textual's API combines modern Python with the best of developments from the web world, for a lean app development experience.
De-coupled components and an advanced [testing](https://textual.textualize.io/guide/testing/) framework ensure you can maintain your app for the long-term.
Want some more examples? See the [examples](https://github.com/Textualize/textual/tree/main/examples) directory.
```python
"""
An App to show the current time.
"""

from datetime import datetime

from textual.app import App, ComposeResult
from textual.widgets import Digits


class ClockApp(App):
    CSS = """
    Screen { align: center middle; }
    Digits { width: auto; }
    """

    def compose(self) -> ComposeResult:
        yield Digits("")

    def on_ready(self) -> None:
        self.update_clock()
        self.set_interval(1, self.update_clock)

    def update_clock(self) -> None:
        clock = datetime.now().time()
        self.query_one(Digits).update(f"{clock:%T}")


if __name__ == "__main__":
    app = ClockApp()
    app.run()
```
> [!TIP]
> Textual is an asynchronous framework under the hood, which means you can integrate your apps with async libraries — if you want to.
> If you don't want or need to use async, Textual won't force it on you.
<img src="https://img.spacergif.org/spacer.gif" width="1" height="64"/>
## Widgets
Textual's library of [widgets](https://textual.textualize.io/widget_gallery/) covers everything from buttons, tree controls, data tables, inputs, text areas, and more…
Combined with a flexible [layout](https://textual.textualize.io/how-to/design-a-layout/) system, you can realize any User Interface you need.
Predefined themes ensure your apps will look good out of the box.
<table>
<tr>
<td>

</td>
<td>

</td>
</tr>
<tr>
<td>

</td>
<td>

</td>
</tr>
<tr>
<td>

</td>
<td>

</td>
</tr>
</table>
<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
## Installing
Install Textual via pip:
```
pip install textual textual-dev
```
See [getting started](https://textual.textualize.io/getting_started/) for details.
<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
## Demo
Run the following command to see a little of what Textual can do:
```
python -m textual
```
Or try the [textual demo](https://github.com/textualize/textual-demo) *without* installing (requires [uv](https://docs.astral.sh/uv/)):
```bash
uvx --python 3.12 textual-demo
```
<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
## Dev Console
<img align="right" width="40%" alt="devtools" src="https://github.com/user-attachments/assets/12c60d65-e342-4b2f-9372-bae0459a7552" />
How do you debug an app in the terminal that is also running in the terminal?
The `textual-dev` package supplies a dev console that connects to your application from another terminal.
In addition to system messages and events, your logged messages and print statements will appear in the dev console.
See [the guide](https://textual.textualize.io/guide/devtools/) for other helpful tools provided by the `textual-dev` package.
<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
## Command Palette
Textual apps have a *fuzzy search* command palette.
Hit `ctrl+p` to open the command palette.
It is easy to extend the command palette with [custom commands](https://textual.textualize.io/guide/command_palette/) for your application.

<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
# Textual ❤️ Web
<img align="right" width="40%" alt="textual-serve" src="https://github.com/user-attachments/assets/a25820fb-87ae-433a-858b-ac3940169242">
Textual apps are equally at home in the browser as they are in the terminal. Any Textual app may be served with `textual serve` — so you can share your creations on the web.
Here's how to serve the demo app:
```
textual serve "python -m textual"
```
In addition to serving your apps locally, you can serve apps with [Textual Web](https://github.com/Textualize/textual-web).
Textual Web's firewall-busting technology can serve an unlimited number of applications.
Since Textual apps have low system requirements, you can install them anywhere Python runs, turning any device into a connected device.
No desktop required!
<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
## Join us on Discord
Join the Textual developers and community on our [Discord Server](https://discord.gg/Enf6Z3qhVr).
| text/markdown | will@textualize.io | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows :: Windows 10",
"Operating System :: Microsoft :: Windows :: Windows 11",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.8",
"Typing :: Typed"
] | [] | <4.0.0,>=3.8.1 | [] | [] | [] | [
"markdown-it-py[linkify,plugins]>=2.1.0",
"rich>=13.3.3",
"typing-extensions<5.0.0,>=4.4.0",
"platformdirs<5,>=3.6.0",
"tree-sitter>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-python>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-markdown>=0.3.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-json>=0.24.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-toml>=0.6.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-yaml>=0.6.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-html>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-css>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-javascript>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-rust>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-go>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-regex>=0.24.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-xml>=0.7.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-sql<0.3.8,>=0.3.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-java>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-bash>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\""
] | [] | [] | [] | [
"Repository, https://github.com/Textualize/textual",
"Documentation, https://textual.textualize.io/",
"Bug Tracker, https://github.com/Textualize/textual/issues"
] | poetry/1.8.2 CPython/3.12.7 Darwin/24.3.0 | 2025-02-22 18:41:01.943203 UTC | textual-2.1.1-py3-none-any.whl | 679910 | 61/34/3be201becd44605ca2c5e964fc9606832c3c1c86465bc4f17e63141e25b1/textual-2.1.1-py3-none-any.whl | py3 | bdist_wheel | false | 1110f146cf9c10dc947e2be4460922a7 | 789c9ba1b2f6b78224ea0fe396e5188feb6882ca43894fc15f6ebbd237525263 | 61343be201becd44605ca2c5e964fc9606832c3c1c86465bc4f17e63141e25b1 | [] | Will McGugan | https://github.com/Textualize/textual | null |
2.1 | textual | 2.1.1 | Modern Text User Interface framework |
[](https://discord.gg/Enf6Z3qhVr)
[](https://pypi.org/project/textual/)
[](https://badge.fury.io/py/textual)


# Textual
<img align="right" width="250" alt="clock" src="https://github.com/user-attachments/assets/63e839c3-5b8e-478d-b78e-cf7647eb85e8" />
Build cross-platform user interfaces with a simple Python API. Run your apps in the terminal *or* a web browser.
Textual's API combines modern Python with the best of developments from the web world, for a lean app development experience.
De-coupled components and an advanced [testing](https://textual.textualize.io/guide/testing/) framework ensure you can maintain your app for the long-term.
Want some more examples? See the [examples](https://github.com/Textualize/textual/tree/main/examples) directory.
```python
"""
An App to show the current time.
"""

from datetime import datetime

from textual.app import App, ComposeResult
from textual.widgets import Digits


class ClockApp(App):
    CSS = """
    Screen { align: center middle; }
    Digits { width: auto; }
    """

    def compose(self) -> ComposeResult:
        yield Digits("")

    def on_ready(self) -> None:
        self.update_clock()
        self.set_interval(1, self.update_clock)

    def update_clock(self) -> None:
        clock = datetime.now().time()
        self.query_one(Digits).update(f"{clock:%T}")


if __name__ == "__main__":
    app = ClockApp()
    app.run()
```
> [!TIP]
> Textual is an asynchronous framework under the hood, which means you can integrate your apps with async libraries — if you want to.
> If you don't want or need to use async, Textual won't force it on you.
<img src="https://img.spacergif.org/spacer.gif" width="1" height="64"/>
## Widgets
Textual's library of [widgets](https://textual.textualize.io/widget_gallery/) covers everything from buttons, tree controls, data tables, inputs, text areas, and more…
Combined with a flexible [layout](https://textual.textualize.io/how-to/design-a-layout/) system, you can realize any User Interface you need.
Predefined themes ensure your apps will look good out of the box.
<table>
<tr>
<td>

</td>
<td>

</td>
</tr>
<tr>
<td>

</td>
<td>

</td>
</tr>
<tr>
<td>

</td>
<td>

</td>
</tr>
</table>
<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
## Installing
Install Textual via pip:
```
pip install textual textual-dev
```
See [getting started](https://textual.textualize.io/getting_started/) for details.
<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
## Demo
Run the following command to see a little of what Textual can do:
```
python -m textual
```
Or try the [textual demo](https://github.com/textualize/textual-demo) *without* installing (requires [uv](https://docs.astral.sh/uv/)):
```bash
uvx --python 3.12 textual-demo
```
<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
## Dev Console
<img align="right" width="40%" alt="devtools" src="https://github.com/user-attachments/assets/12c60d65-e342-4b2f-9372-bae0459a7552" />
How do you debug an app in the terminal that is also running in the terminal?
The `textual-dev` package supplies a dev console that connects to your application from another terminal.
In addition to system messages and events, your logged messages and print statements will appear in the dev console.
See [the guide](https://textual.textualize.io/guide/devtools/) for other helpful tools provided by the `textual-dev` package.
<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
## Command Palette
Textual apps have a *fuzzy search* command palette.
Hit `ctrl+p` to open the command palette.
It is easy to extend the command palette with [custom commands](https://textual.textualize.io/guide/command_palette/) for your application.

<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
# Textual ❤️ Web
<img align="right" width="40%" alt="textual-serve" src="https://github.com/user-attachments/assets/a25820fb-87ae-433a-858b-ac3940169242">
Textual apps are equally at home in the browser as they are in the terminal. Any Textual app may be served with `textual serve` — so you can share your creations on the web.
Here's how to serve the demo app:
```
textual serve "python -m textual"
```
In addition to serving your apps locally, you can serve apps with [Textual Web](https://github.com/Textualize/textual-web).
Textual Web's firewall-busting technology can serve an unlimited number of applications.
Since Textual apps have low system requirements, you can install them anywhere Python runs, turning any device into a connected device.
No desktop required!
<img src="https://img.spacergif.org/spacer.gif" width="1" height="32"/>
## Join us on Discord
Join the Textual developers and community on our [Discord Server](https://discord.gg/Enf6Z3qhVr).
| text/markdown | will@textualize.io | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows :: Windows 10",
"Operating System :: Microsoft :: Windows :: Windows 11",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.8",
"Typing :: Typed"
] | [] | <4.0.0,>=3.8.1 | [] | [] | [] | [
"markdown-it-py[linkify,plugins]>=2.1.0",
"rich>=13.3.3",
"typing-extensions<5.0.0,>=4.4.0",
"platformdirs<5,>=3.6.0",
"tree-sitter>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-python>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-markdown>=0.3.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-json>=0.24.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-toml>=0.6.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-yaml>=0.6.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-html>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-css>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-javascript>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-rust>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-go>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-regex>=0.24.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-xml>=0.7.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-sql<0.3.8,>=0.3.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-java>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\"",
"tree-sitter-bash>=0.23.0; python_version >= \"3.9\" and extra == \"syntax\""
] | [] | [] | [] | [
"Repository, https://github.com/Textualize/textual",
"Documentation, https://textual.textualize.io/",
"Bug Tracker, https://github.com/Textualize/textual/issues"
] | poetry/1.8.2 CPython/3.12.7 Darwin/24.3.0 | 2025-02-22 18:41:04.684734 UTC | textual-2.1.1.tar.gz | 1596324 | 1a/a7/b0c42e9ccea22dc59b4074c848e2daf9f9d82250ae56f4bd2c918d5f3f2c/textual-2.1.1.tar.gz | source | sdist | false | f126b16caae5a50752676700affbfb4d | c1dd54fce53c3abe87a021735efbbfd8af5313191f0729a02ecdb3083367cf62 | 1aa7b0c42e9ccea22dc59b4074c848e2daf9f9d82250ae56f4bd2c918d5f3f2c | [] | Will McGugan | https://github.com/Textualize/textual | null |
2.3 | airbyte-source-microsoft-sharepoint | 0.6.1 | Source implementation for Microsoft SharePoint. | # Microsoft SharePoint source connector
This is the repository for the Microsoft SharePoint source connector, written in Python.
For information about how to use this connector within Airbyte, see [the documentation](https://docs.airbyte.com/integrations/sources/microsoft-sharepoint).
## Local development
### Prerequisites
- Python (~=3.9)
- Poetry (~=1.7) - installation instructions [here](https://python-poetry.org/docs/#installation)
### Installing the connector
From this connector directory, run:
```bash
poetry install --with dev
```
### Create credentials
**If you are a community contributor**, follow the instructions in the [documentation](https://docs.airbyte.com/integrations/sources/microsoft-sharepoint)
to generate the necessary credentials. Then create a file `secrets/config.json` conforming to the `source_microsoft_sharepoint/spec.yaml` file.
Note that any directory named `secrets` is gitignored across the entire Airbyte repo, so there is no danger of accidentally checking in sensitive information.
See `sample_files/sample_config.json` for a sample config file.
### Locally running the connector
```
poetry run source-microsoft-sharepoint spec
poetry run source-microsoft-sharepoint check --config secrets/config.json
poetry run source-microsoft-sharepoint discover --config secrets/config.json
poetry run source-microsoft-sharepoint read --config secrets/config.json --catalog sample_files/configured_catalog.json
```
### Running unit tests
To run unit tests locally, from the connector directory run:
```
poetry run pytest unit_tests
```
### Building the docker image
1. Install [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md)
2. Run the following command to build the docker image:
```bash
airbyte-ci connectors --name=source-microsoft-sharepoint build
```
An image will be available on your host with the tag `airbyte/source-microsoft-sharepoint:dev`.
### Running as a docker container
Then run any of the connector commands as follows:
```
docker run --rm airbyte/source-microsoft-sharepoint:dev spec
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-microsoft-sharepoint:dev check --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-microsoft-sharepoint:dev discover --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets -v $(pwd)/integration_tests:/integration_tests airbyte/source-microsoft-sharepoint:dev read --config /secrets/config.json --catalog /integration_tests/configured_catalog.json
```
### Running our CI test suite
You can run our full test suite locally using [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md):
```bash
airbyte-ci connectors --name=source-microsoft-sharepoint test
```
### Customizing acceptance Tests
Customize `acceptance-test-config.yml` file to configure acceptance tests. See [Connector Acceptance Tests](https://docs.airbyte.com/connector-development/testing-connectors/connector-acceptance-tests-reference) for more information.
If your connector requires creating or destroying resources for use during acceptance tests, create fixtures for them and place them inside integration_tests/acceptance.py.
### Dependency Management
All of your dependencies should be managed via Poetry.
To add a new dependency, run:
```bash
poetry add <package-name>
```
Please commit the changes to `pyproject.toml` and `poetry.lock` files.
## Publishing a new version of the connector
You've checked out the repo, implemented a million dollar feature, and you're ready to share your changes with the world. Now what?
1. Make sure your changes are passing our test suite: `airbyte-ci connectors --name=source-microsoft-sharepoint test`
2. Bump the connector version (please follow [semantic versioning for connectors](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#semantic-versioning-for-connectors)):
- bump the `dockerImageTag` value in `metadata.yaml`
- bump the `version` value in `pyproject.toml`
3. Make sure the `metadata.yaml` content is up to date.
4. Make sure the connector documentation and its changelog is up to date (`docs/integrations/sources/microsoft-sharepoint.md`).
5. Create a Pull Request: use [our PR naming conventions](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#pull-request-title-convention).
6. Pat yourself on the back for being an awesome contributor.
7. Someone from Airbyte will take a look at your PR and iterate with you to merge it into master.
8. Once your PR is merged, the new version of the connector will be automatically published to Docker Hub and our connector registry.
| text/markdown | contact@airbyte.io | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11"
] | [] | <3.12,>=3.11 | [] | [] | [] | [
"msal==1.25.0",
"Office365-REST-Python-Client==2.5.5",
"smart-open==6.4.0",
"airbyte-cdk[file-based]<7,>=6"
] | [] | [] | [] | [
"Homepage, https://airbyte.com",
"Repository, https://github.com/airbytehq/airbyte",
"Documentation, https://docs.airbyte.com/integrations/sources/microsoft-sharepoint"
] | poetry/2.1.1 CPython/3.11.11 Linux/6.8.0-1021-azure | 2025-02-22 18:41:41.864269 UTC | airbyte_source_microsoft_sharepoint-0.6.1-py3-none-any.whl | 14059 | 94/c6/ce097675a713b555014c555b11e8cdd446ce14c370b77488eba515f6b042/airbyte_source_microsoft_sharepoint-0.6.1-py3-none-any.whl | py3 | bdist_wheel | false | 32f13dd08e999fdbc3b20cc20efd39d1 | a4e4646ebc5a5b404e8556767a61562749c2a8140db15ccd8a328cec90923f6f | 94c6ce097675a713b555014c555b11e8cdd446ce14c370b77488eba515f6b042 | [] | Airbyte | https://airbyte.com | null |
2.3 | airbyte-source-microsoft-sharepoint | 0.6.1 | Source implementation for Microsoft SharePoint. | # Microsoft SharePoint source connector
This is the repository for the Microsoft SharePoint source connector, written in Python.
For information about how to use this connector within Airbyte, see [the documentation](https://docs.airbyte.com/integrations/sources/microsoft-sharepoint).
## Local development
### Prerequisites
- Python (~=3.9)
- Poetry (~=1.7) - installation instructions [here](https://python-poetry.org/docs/#installation)
### Installing the connector
From this connector directory, run:
```bash
poetry install --with dev
```
### Create credentials
**If you are a community contributor**, follow the instructions in the [documentation](https://docs.airbyte.com/integrations/sources/microsoft-sharepoint)
to generate the necessary credentials. Then create a file `secrets/config.json` conforming to the `source_microsoft_sharepoint/spec.yaml` file.
Note that any directory named `secrets` is gitignored across the entire Airbyte repo, so there is no danger of accidentally checking in sensitive information.
See `sample_files/sample_config.json` for a sample config file.
### Locally running the connector
```
poetry run source-microsoft-sharepoint spec
poetry run source-microsoft-sharepoint check --config secrets/config.json
poetry run source-microsoft-sharepoint discover --config secrets/config.json
poetry run source-microsoft-sharepoint read --config secrets/config.json --catalog sample_files/configured_catalog.json
```
### Running unit tests
To run unit tests locally, from the connector directory run:
```
poetry run pytest unit_tests
```
### Building the docker image
1. Install [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md)
2. Run the following command to build the docker image:
```bash
airbyte-ci connectors --name=source-microsoft-sharepoint build
```
An image will be available on your host with the tag `airbyte/source-microsoft-sharepoint:dev`.
### Running as a docker container
Then run any of the connector commands as follows:
```
docker run --rm airbyte/source-microsoft-sharepoint:dev spec
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-microsoft-sharepoint:dev check --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-microsoft-sharepoint:dev discover --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets -v $(pwd)/integration_tests:/integration_tests airbyte/source-microsoft-sharepoint:dev read --config /secrets/config.json --catalog /integration_tests/configured_catalog.json
```
### Running our CI test suite
You can run our full test suite locally using [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md):
```bash
airbyte-ci connectors --name=source-microsoft-sharepoint test
```
### Customizing acceptance Tests
Customize `acceptance-test-config.yml` file to configure acceptance tests. See [Connector Acceptance Tests](https://docs.airbyte.com/connector-development/testing-connectors/connector-acceptance-tests-reference) for more information.
If your connector requires creating or destroying resources for use during acceptance tests, create fixtures for them and place them inside `integration_tests/acceptance.py`.
### Dependency Management
All of your dependencies should be managed via Poetry.
To add a new dependency, run:
```bash
poetry add <package-name>
```
Please commit the changes to `pyproject.toml` and `poetry.lock` files.
## Publishing a new version of the connector
You've checked out the repo, implemented a million dollar feature, and you're ready to share your changes with the world. Now what?
1. Make sure your changes are passing our test suite: `airbyte-ci connectors --name=source-microsoft-sharepoint test`
2. Bump the connector version (please follow [semantic versioning for connectors](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#semantic-versioning-for-connectors)):
- bump the `dockerImageTag` value in `metadata.yaml`
- bump the `version` value in `pyproject.toml`
3. Make sure the `metadata.yaml` content is up to date.
4. Make sure the connector documentation and its changelog are up to date (`docs/integrations/sources/microsoft-sharepoint.md`).
5. Create a Pull Request: use [our PR naming conventions](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#pull-request-title-convention).
6. Pat yourself on the back for being an awesome contributor.
7. Someone from Airbyte will take a look at your PR and iterate with you to merge it into master.
8. Once your PR is merged, the new version of the connector will be automatically published to Docker Hub and our connector registry.
| text/markdown | contact@airbyte.io | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11"
] | [] | <3.12,>=3.11 | [] | [] | [] | [
"msal==1.25.0",
"Office365-REST-Python-Client==2.5.5",
"smart-open==6.4.0",
"airbyte-cdk[file-based]<7,>=6"
] | [] | [] | [] | [
"Homepage, https://airbyte.com",
"Repository, https://github.com/airbytehq/airbyte",
"Documentation, https://docs.airbyte.com/integrations/sources/microsoft-sharepoint"
] | poetry/2.1.1 CPython/3.11.11 Linux/6.8.0-1021-azure | 2025-02-22 18:41:42.822665 UTC | airbyte_source_microsoft_sharepoint-0.6.1.tar.gz | 12322 | 93/14/1e339ce17445c83bbc70a364ba7351347c49033294a2bb305a3301a5d5e6/airbyte_source_microsoft_sharepoint-0.6.1.tar.gz | source | sdist | false | 6eb1cadc7eb81b9be828c423cafd11b3 | f66b38f97b627218dc5ba5364685e8dc7a206e0052ee0496b1e2235c21411ab8 | 93141e339ce17445c83bbc70a364ba7351347c49033294a2bb305a3301a5d5e6 | [] | Airbyte | https://airbyte.com | null |
2.1 | pyrdpdb | 4.2.0 | Pure Python RapidsDB Driver | # pyrdpdb
Table of Contents
- Requirements
- Installation
- Documentation
- Example
- Resources
- License
The pyrdpdb package is a Python DB-API 2.0 compliant driver for the RapidsDB database. It contains two pure-Python RapidsDB DB-API sub-packages, pyrdp and aiordp, both based on PEP 249. Each driver also includes a SQLAlchemy dialect to allow seamless operation between SQLAlchemy and RapidsDB as a database source.
## Requirements
Python -- one of the following:
- CPython: >= 3.9
- PyPy: latest version
RapidsDB Server:
- RapidsDB >= 4.x
## Installation
Install package with `pip`:
```shell
python3 -m pip install pyrdpdb
```
## Documentation
## Example
```shell
# Demonstrate DB-API direct database connection
$ python -m pyrdpdb.pyrdp.example.dbapi <hostname>
$ python -m pyrdpdb.pyrdp.example.simple_sa <table_name> <hostname>
# assume RDP running on local host, use argument of either aiordp or pyrdp
$ python -m pyrdpdb.pyrdp.example.many [aiordp | pyrdp]
# Demonstrate DB-API direct database connection
$ python -m pyrdpdb.aiordp.example.engine <hostname>
$ python -m pyrdpdb.aiordp.example.simple_sa <hostname>
$ python -m pyrdpdb.aiordp.example.dbapi_cursor <hostname>
# assume RDP running on local host, use argument of either aiordp or pyrdp
$ python -m pyrdpdb.pyrdp.example.many [aiordp | pyrdp]
```
> Note: \<hostname> is optional and defaults to **localhost** if not provided.
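Since both drivers are DB-API 2.0 compliant, basic usage follows the familiar PEP 249 pattern. Here is a minimal sketch with the synchronous `pyrdp` driver; the connection parameters (host, port, user, password) are assumptions and should be adjusted to your RapidsDB deployment:
```python
from pyrdpdb import pyrdp

# Connection parameters are illustrative, not documented defaults.
conn = pyrdp.connect(host="localhost", port=4333, user="rapids", password="rapids")
try:
    cur = conn.cursor()
    cur.execute("SELECT 1")
    print(cur.fetchall())
finally:
    conn.close()
```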
## Resources
DB-API 2.0: <http://www.python.org/dev/peps/pep-0249>
## License
pyrdpdb is released under the MIT License. See LICENSE for more information.
| text/markdown | robert.li@boraydata.com | MIT | null | [
"Programming Language :: Python :: 3.9",
"Topic :: Database",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.12 | 2025-02-22 18:42:27.100105 UTC | pyrdpdb-4.2.0.tar.gz | 152770 | a7/23/e998c472871fe883b344e7032f6b5100832ca74ebdd9749883c3b6e8d5ad/pyrdpdb-4.2.0.tar.gz | source | sdist | false | 8c59c4ff0966b85f8c656e2e8f49b507 | db93c7669781aece2c551086b5e1d7cb16bcac8a602d2f72341ddc66247f2253 | a723e998c472871fe883b344e7032f6b5100832ca74ebdd9749883c3b6e8d5ad | [] | Robert Li | null | null |
2.1 | locusts | 0.0.81 | Distributes many short tasks on multicore and hpc systems | # Locusts
Locusts is a Python package for distributing many small jobs on a system (which can be your machine or a remote HPC running SLURM).
## Installation
The Locusts package is currently part of the [PyPI Test](https://test.pypi.org) archive.
To install it, type
`python3 -m pip install --index-url https://test.pypi.org/simple/ --no-deps locusts`
Note: PyPI Test is not a permanent archive. Expect this installation procedure to change over time.
## How it works
Locusts is designed for those who have to run a **huge number of small, independent jobs** and run into problems with the most common schedulers, which scatter the jobs over too many nodes or queue them indefinitely.
Moreover, this package provides a **safe, clean environment for each job instance**, and keeps and collects notable inputs and outputs.
In short, Locusts creates a minimal filesystem where it prepares one environment for each job it has to execute. The runs are directed by a manager bash script, which schedules them and reports its status and that of the jobs to the main Locusts routine, which always runs locally. Finally, it checks for a set of compulsory output files and compiles a list of successes and failures.
### Modes
Locusts can help you distribute your jobs when you are facing one of these three situations:
* You want to run everything on your local machine (**local mode**)
* You want to submit jobs to an HPC (**remote mode**)
* You want to submit jobs to an HPC which shares a directory with your local machine (**remote-shared mode**)
### Environments
Once you give Locusts the set of inputs to consider and the command to execute, it creates the Generic Environment, a minimal filesystem composed of three folders:
* An **execution folder**, where the main manager scripts will be placed and executed and where execution cache files will keep them updated on the progress of the single jobs
* A **work folder**, where the specific inputs of each job are considered and where outputs are addressed
* A **shared folder**, where common inputs have to be placed in case a group of different jobs wants to use them
Based on this architecture, Locusts provides two types of environments the user can choose from, depending on their needs:
#### Default Locusts Environment

If the user only needs to process a (possibly huge) number of files and get another (still huge) number of output files in return, this environment is the optimal choice: it allows for minimal data transfer and disk space usage, while each of the parallel runs executes in a protected sub-environment. The desired output files and the corresponding logs will then be collected and put in a folder designated by the user.
#### Custom Environment

The user could nonetheless want to parallelize a program or code with more complex effects than taking in a bunch of input files and returning some outputs: for example, a program that moves files around a filesystem will not be able to run in the Default Locusts Environment. In these situations, the program needs access to a whole environment rather than to a set of input files.
Starting from this common base, there are two different environments that can be used:
* The default Locusts Environment consists in having one folder corresponding to each set of files for running one instance of the command
* The Custom Environment lets the user employ any other filesystem
## Tutorial
### Example 1: Running a script requiring input/output management (Default Environment)
You can find this example in the directory `tests/test_manager/`
In `tests/test_manager/my_input_dir/` you will find 101 pairs of input files: `inputfile\_\#.txt` and `secondinputfile\_\#.txt`, where 0 <= \# <= 100. Additionally, you will also find a single file named `sharedfile.txt`.
The aim here is executing this small script over the 101 sets of inputs:
```
sleep 1;
ls -lrth <inputfile> <secondinputfile> <sharedfile> > <outputfile>;
cat <inputfile> <secondinputfile> <sharedfile> > <secondoutputfile>
```
For each pair, the script takes in `inputfile\_\#.txt`, `secondinputfile\_\#.txt` (both vary from instance to instance) and `sharedfile.txt` (which instead always remains the same), and returns `ls\_output\_\#.txt` and `cat\_output\_\#.txt`. In order to mimic a longer process, the script is artificially made to last at least one second.
The file `tests/test_manager/test_manager.py` gives you an example (and also a template) of how you can submit a job with Locusts.
The function you want to call is `locusts.swarm.launch`, which takes several arguments.
Before describing them, let's look at the strategy used by Locusts: in essence, you give Locusts a template of the command you want to execute, and then you tell Locusts where to look for files to execute that template with. In our case, the template is:
```
sleep 1;
ls -lrth inputfile_<id>.txt secondinputfile_<id>.txt <shared>sf1 > ls_output_<id>.txt;
cat inputfile_<id>.txt secondinputfile_<id>.txt <shared>sf1 > cat_output_<id>.txt
```
Notice there are two handles that Locusts will know how to replace: `<id>` and `<shared>`. The `<id>` handle is there to specify the variable part of a filename (in our case, an integer in the [0,100] interval). The `<shared>` tag tells Locusts that the file is located in the shared folder.
* `indir` takes the location (absolute path or relative from where you are calling the script) of the directory containing all your input files
* `outdir` takes the location (absolute path or relative from where you are calling the script) of the directory where you want to collect your results
* `code` takes a unique codename for the job you want to launch
* `spcins` takes a list containing the template names for the specific input files of each job instance
* `shdins` takes a list containing the template names for the shared inputs
* `outs` takes a list containing the template names for the expected outputs
* `cmd` takes the command template
* `parf` takes the parameter file
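Putting these arguments together, a call might look like the following sketch (paths and values are illustrative; see `tests/test_manager/test_manager.py` for the working version):
```python
import locusts.swarm

# Illustrative values based on Example 1; adjust paths to your setup.
locusts.swarm.launch(
    indir="tests/test_manager/my_input_dir",    # directory with all input files
    outdir="tests/test_manager/my_output_dir",  # where results are collected
    code="example1",                            # unique codename for this batch
    spcins=["inputfile_<id>.txt", "secondinputfile_<id>.txt"],  # per-job inputs
    shdins=["sharedfile.txt"],                                  # shared inputs
    outs=["ls_output_<id>.txt", "cat_output_<id>.txt"],         # expected outputs
    cmd=(
        "sleep 1; "
        "ls -lrth inputfile_<id>.txt secondinputfile_<id>.txt <shared>sf1 > ls_output_<id>.txt; "
        "cat inputfile_<id>.txt secondinputfile_<id>.txt <shared>sf1 > cat_output_<id>.txt"
    ),
    parf="parameters.txt",                      # parameter file, if any
)
```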
### Example 2: Running a script requiring input/output management (Default Environment)
You will find the material
| text/markdown | edoardo.sarti@gmail.com | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Intended Audience :: Science/Research",
"Topic :: System :: Distributed Computing",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3 :: Only"
] | [] | <4,>=3.5 | [] | [] | [] | [] | [] | [] | [] | [
"Bug Reports, https://github.com/pypa/sampleproject/issues",
"Funding, https://donate.pypi.org",
"Say Thanks!, http://saythanks.io/to/example",
"Source, https://github.com/pypa/sampleproject/"
] | twine/6.1.0 CPython/3.8.19 | 2025-02-22 18:43:06.170485 UTC | locusts-0.0.81-py3-none-any.whl | 27133 | 1b/33/3f5207348b0d305848ee369062a073a3b2f509aa07bf297acb0bdb88cd77/locusts-0.0.81-py3-none-any.whl | py3 | bdist_wheel | false | 8b9571f256ba676637f488daddaa50c8 | 657e03fe9f6e7e8a76b3fbcdf2167b4e09e10eb88780d4f953a0b70dd826d5b4 | 1b333f5207348b0d305848ee369062a073a3b2f509aa07bf297acb0bdb88cd77 | [] | Edoardo Sarti | https://github.com/pypa/sampleproject | null |
2.1 | locusts | 0.0.81 | Distributes many short tasks on multicore and hpc systems | # Locusts
Locusts is a Python package for distributing many small jobs on a system (which can be your machine or a remote HPC running SLURM).
## Installation
The Locusts package is currently part of the [PyPI Test](https://test.pypi.org) archive.
To install it, type
`python3 -m pip install --index-url https://test.pypi.org/simple/ --no-deps locusts`
Note: PyPI Test is not a permanent archive. Expect this installation procedure to change over time.
## How it works
Locusts is designed for those who have to run a **huge number of small, independent jobs** and run into problems with the most common schedulers, which scatter the jobs over too many nodes or queue them indefinitely.
Moreover, this package provides a **safe, clean environment for each job instance**, and keeps and collects notable inputs and outputs.
In short, Locusts creates a minimal filesystem where it prepares one environment for each job it has to execute. The runs are directed by a manager bash script, which schedules them and reports its status and that of the jobs to the main Locusts routine, which always runs locally. Finally, it checks for a set of compulsory output files and compiles a list of successes and failures.
### Modes
Locusts can help you distribute your jobs when you are facing one of these three situations:
* You want to run everything on your local machine (**local mode**)
* You want to submit jobs to an HPC (**remote mode**)
* You want to submit jobs to an HPC which shares a directory with your local machine (**remote-shared mode**)
### Environments
Once you give Locusts the set of inputs to consider and the command to execute, it creates the Generic Environment, a minimal filesystem composed of three folders:
* An **execution folder**, where the main manager scripts will be placed and executed and where execution cache files will keep them updated on the progress of the single jobs
* A **work folder**, where the specific inputs of each job are considered and where outputs are addressed
* A **shared folder**, where common inputs have to be placed in case a group of different jobs wants to use them
Based on this architecture, Locusts provides two types of environments the user can choose from, depending on their needs:
#### Default Locusts Environment

If the user only needs to process a (possibly huge) number of files and get another (still huge) number of output files in return, this environment is the optimal choice: it allows for minimal data transfer and disk space usage, while each of the parallel runs executes in a protected sub-environment. The desired output files and the corresponding logs will then be collected and put in a folder designated by the user.
#### Custom Environment

The user could nonetheless want to parallelize a program or code with more complex effects than taking in a bunch of input files and returning some outputs: for example, a program that moves files around a filesystem will not be able to run in the Default Locusts Environment. In these situations, the program needs access to a whole environment rather than to a set of input files.
Starting from this common base, there are two different environments that can be used:
* The default Locusts Environment consists in having one folder corresponding to each set of files for running one instance of the command
* The Custom Environment lets the user employ any other filesystem
## Tutorial
### Example 1: Running a script requiring input/output management (Default Environment)
You can find this example in the directory `tests/test_manager/`
In `tests/test_manager/my_input_dir/` you will find 101 pairs of input files: `inputfile\_\#.txt` and `secondinputfile\_\#.txt`, where 0 <= \# <= 100. Additionally, you will also find a single file named `sharedfile.txt`.
The aim here is executing this small script over the 101 sets of inputs:
```
sleep 1;
ls -lrth <inputfile> <secondinputfile> <sharedfile> > <outputfile>;
cat <inputfile> <secondinputfile> <sharedfile> > <secondoutputfile>
```
For each pair, the script takes in `inputfile\_\#.txt`, `secondinputfile\_\#.txt` (both vary from instance to instance) and `sharedfile.txt` (which instead always remains the same), and returns `ls\_output\_\#.txt` and `cat\_output\_\#.txt`. In order to mimic a longer process, the script is artificially made to last at least one second.
The file `tests/test_manager/test_manager.py` gives you an example (and also a template) of how you can submit a job with Locusts.
The function you want to call is `locusts.swarm.launch`, which takes several arguments.
Before describing them, let's look at the strategy used by Locusts: in essence, you give Locusts a template of the command you want to execute, and then you tell Locusts where to look for files to execute that template with. In our case, the template is:
```
sleep 1;
ls -lrth inputfile_<id>.txt secondinputfile_<id>.txt <shared>sf1 > ls_output_<id>.txt;
cat inputfile_<id>.txt secondinputfile_<id>.txt <shared>sf1 > cat_output_<id>.txt
```
Notice there are two handles that Locusts will know how to replace: `<id>` and `<shared>`. The `<id>` handle is there to specify the variable part of a filename (in our case, an integer in the [0,100] interval). The `<shared>` tag tells Locusts that the file is located in the shared folder.
* `indir` takes the location (absolute path or relative from where you are calling the script) of the directory containing all your input files
* `outdir` takes the location (absolute path or relative from where you are calling the script) of the directory where you want to collect your results
* `code` takes a unique codename for the job you want to launch
* `spcins` takes a list containing the template names for the specific input files of each job instance
* `shdins` takes a list containing the template names for the shared inputs
* `outs` takes a list containing the template names for the expected outputs
* `cmd` takes the command template
* `parf` takes the parameter file
### Example 2: Running a script requiring input/output management (Default Environment)
You will find the material
| text/markdown | edoardo.sarti@gmail.com | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Intended Audience :: Science/Research",
"Topic :: System :: Distributed Computing",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3 :: Only"
] | [] | <4,>=3.5 | [] | [] | [] | [] | [] | [] | [] | [
"Bug Reports, https://github.com/pypa/sampleproject/issues",
"Funding, https://donate.pypi.org",
"Say Thanks!, http://saythanks.io/to/example",
"Source, https://github.com/pypa/sampleproject/"
] | twine/6.1.0 CPython/3.8.19 | 2025-02-22 18:43:08.044156 UTC | locusts-0.0.81.tar.gz | 28632 | fd/d4/839b30631677917610ccb918ed8b2f3b3441ecd359b133124b51ce37231f/locusts-0.0.81.tar.gz | source | sdist | false | 1faf1e5c1c062ebaa48e76d70d35bc28 | 28f257c3287df6f17ae515379d6490aa94f0f85094fd03e6d0063b6ff3ffa0db | fdd4839b30631677917610ccb918ed8b2f3b3441ecd359b133124b51ce37231f | [] | Edoardo Sarti | https://github.com/pypa/sampleproject | null |
2.2 | fastapi-webserver | 0.3.0 | A simple FastAPI webserver with a bunch of useful resources. | # FastAPI WebServer
This is a wrapper around a FastAPI application with some additional features that can be useful for quick web development.
It features:
- Powerful environment and settings handling with dynamic module imports, similar to Django;
- A Database Adapter to connect to any database on-the-fly;
- A Data Migration Tool, to run `.sql` files, or migrate `json` data;
- An SMTP service, implemented on top of [fastapi-mail](https://pypi.org/project/fastapi-mail/);
- [SASS](https://sass-lang.com/) Compiler;
- Server-Side Rendering via `Jinja2Template`;
- Static Files provider;
- CORS Support;
- TLS Support + [mkcert](https://github.com/FiloSottile/mkcert) certificates (local/development only)
## Roadmap
The following features are expected to be implemented in the future. Contribution is welcome.
- [ ] OCI-Compliant Image for Docker/Podman
- [ ] Local Key-Value Cache
- [ ] Logging and Tracing API (via OpenTelemetry)
- [ ] Authentication and Authorization
- [ ] OAuth2 support
- [ ] OpenID Connect support
- [ ] Passkey support (via [Bitwarden passwordless](https://docs.passwordless.dev/guide/))
- [ ] Traffic Analyzer
- [ ] (AI) Bot detector
- [ ] VPN detector
- [ ] Rate limiter
- [ ] IP-based Access-Control List (ACL)
- [ ] Content Providers (HTTP client and proxy)
- [ ] Google Fonts API
- [ ] Gravatar
- [ ] GIPHY
## Getting Started
Optionally, set up the environment variables. All environment variables can be found in the `.env` file in the root of this repository.
```python
import webserver
from fastapi import APIRouter, FastAPI
router: APIRouter = APIRouter()
app: FastAPI = webserver.app
@router.get("/")
def index():
    return {"Hello World": f"from {webserver.settings.APP_NAME}"}
app.include_router(router)
if __name__ == "__main__":
    webserver.start()
```
This enables both local execution through the `main` method and the `fastapi (dev|run)` commands.
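The settings object is presumably built on [pydantic-settings](https://pypi.org/project/pydantic-settings/), which appears among the declared dependencies. As a rough sketch of how `.env`-backed settings work with that library in general (the field names here are assumptions, not this package's actual schema):
```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # Load values from a .env file in the working directory.
    model_config = SettingsConfigDict(env_file=".env")

    APP_NAME: str = "my-app"  # hypothetical field, not the package's real schema
    DEBUG: bool = False

settings = Settings()
print(settings.APP_NAME)
```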
| text/markdown | Artemis Resende <artemis@aresende.com> | # 🏳️‍🌈 Opinionated Queer License v1.1
© Copyright [Andrea Vos](https://avris.it), [Kolektyw „Rada Języka Neutralnego”](https://zaimki.pl/kolektyw-rjn)
<div class="table-responsive">
<table class="table">
<thead>
<tr class="text-center">
<th>You can</th>
<th>You cannot</th>
<th>You must</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<ul>
<li>Use privately</li>
<li>Use commercially</li>
<li>Modify</li>
<li>Adapt</li>
<li>Distribute</li>
<li>Sublicense</li>
<li>Use a patent</li>
<li>Add a warranty</li>
</ul>
</td>
<td>
<ul>
<li>Hold the Licensor liable</li>
<li>Be a big corporation</li>
<li>Be law enforcement or military</li>
<li>Use for bigoted purposes</li>
<li>Use for violent purposes</li>
<li>Just blatantly resell it<br/><small>(even if laundered through machine learning)</small></li>
</ul>
</td>
<td>
<ul>
<li>Give credit</li>
<li>Indicate changes made</li>
<li>Include license or a link</li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
## Permissions
The creators of this Work (“The Licensor”) grant permission
to any person, group or legal entity that doesn't violate the prohibitions below (“The User”),
to do everything with this Work that would otherwise infringe their copyright or any patent claims,
subject to the following conditions:
## Obligations
The User must give appropriate credit to the Licensor,
provide a copy of this license or a (clickable, if the medium allows) link to
[oql.avris.it/license/v1.1](https://oql.avris.it/license/v1.1),
and indicate whether and what kind of changes were made.
The User may do so in any reasonable manner,
but not in any way that suggests the Licensor endorses the User or their use.
## Prohibitions
No one may use this Work for prejudiced or bigoted purposes, including but not limited to:
racism, xenophobia, queerphobia, queer exclusionism, homophobia, transphobia, enbyphobia, misogyny.
No one may use this Work to inflict or facilitate violence or abuse of human rights as defined in the
[Universal Declaration of Human Rights](https://www.un.org/en/about-us/universal-declaration-of-human-rights).
No law enforcement, carceral institutions, immigration enforcement entities, military entities or military contractors
may use the Work for any reason. This also applies to any individuals employed by those entities.
No business entity where the ratio of pay (salaried, freelance, stocks, or other benefits)
between the highest and lowest individual in the entity is greater than 50 : 1
may use the Work for any reason.
No private business run for profit with more than a thousand employees
may use the Work for any reason.
Unless the User has made substantial changes to the Work,
or uses it only as a part of a new work (eg. as a library, as a part of an anthology, etc.),
they are prohibited from selling the Work.
That prohibition includes processing the Work with machine learning models.
## Sanctions
If the Licensor notifies the User that they have not complied with the rules of the license,
they can keep their license by complying within 30 days after the notice.
If they do not do so, their license ends immediately.
## Warranty
This Work is provided “as is”, without warranty of any kind, express or implied.
The Licensor will not be liable to anyone for any damages related to the Work or this license,
under any kind of legal claim as far as the law allows.
| null | [
"Development Status :: 3 - Alpha",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Programming Language :: Python :: 3.13"
] | [] | >=3.13 | [] | [] | [] | [
"fastapi[standard]>=0.115.8",
"fastapi-mail>=1.4.2",
"email-validator>=2.2.0",
"pydantic-settings>=2.7.1",
"jinja2>=3.1.5",
"sqlmodel>=0.0.22",
"starlette>=0.45.3",
"command-runner>=1.7.0",
"httpx>=0.28.1",
"commons-library>=0.1.1"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/aresende/fastapi-webserver"
] | twine/6.1.0 CPython/3.13.2 | 2025-02-22 18:43:27.28054 UTC | fastapi_webserver-0.3.0-py3-none-any.whl | 13037 | 79/68/1b9236bc0ec455f53b122f8cf52ba44fab6ffbb606d967a64f9648d66463/fastapi_webserver-0.3.0-py3-none-any.whl | py3 | bdist_wheel | false | c591e368a393b26841c6787d8340cd71 | 6a3665e2069da5c31d4ebf478b05b9e19ce6b5a8e32915eab899a62f4f8e1d9b | 79681b9236bc0ec455f53b122f8cf52ba44fab6ffbb606d967a64f9648d66463 | [] | null | null | null |
2.2 | fastapi-webserver | 0.3.0 | A simple FastAPI webserver with a bunch of useful resources. | # FastAPI WebServer
This is a wrapper around a FastAPI application with some additional features that can be useful for quick web development.
It features:
- Powerful environment and settings handling with dynamic module imports, similar to Django;
- A Database Adapter to connect to any database on-the-fly;
- A Data Migration Tool, to run `.sql` files, or migrate `json` data;
- An SMTP service, implemented on top of [fastapi-mail](https://pypi.org/project/fastapi-mail/);
- [SASS](https://sass-lang.com/) Compiler;
- Server-Side Rendering via `Jinja2Template`;
- Static Files provider;
- CORS Support;
- TLS Support + [mkcert](https://github.com/FiloSottile/mkcert) certificates (local/development only)
## Roadmap
The following features are expected to be implemented in the future. Contribution is welcome.
- [ ] OCI-Compliant Image for Docker/Podman
- [ ] Local Key-Value Cache
- [ ] Logging and Tracing API (via OpenTelemetry)
- [ ] Authentication and Authorization
- [ ] OAuth2 support
- [ ] OpenID Connect support
- [ ] Passkey support (via [Bitwarden passwordless](https://docs.passwordless.dev/guide/))
- [ ] Traffic Analyzer
- [ ] (AI) Bot detector
- [ ] VPN detector
- [ ] Rate limiter
- [ ] IP-based Access-Control List (ACL)
- [ ] Content Providers (HTTP client and proxy)
- [ ] Google Fonts API
- [ ] Gravatar
- [ ] GIPHY
## Getting Started
Optionally, set up the environment variables. All environment variables can be found in the `.env` file in the root of this repository.
```python
import webserver
from fastapi import APIRouter, FastAPI
router: APIRouter = APIRouter()
app: FastAPI = webserver.app
@router.get("/")
def index():
    return {"Hello World": f"from {webserver.settings.APP_NAME}"}
app.include_router(router)
if __name__ == "__main__":
    webserver.start()
```
This enables both local execution through the `main` method and the `fastapi (dev|run)` commands.
| text/markdown | Artemis Resende <artemis@aresende.com> | # 🏳️‍🌈 Opinionated Queer License v1.1
© Copyright [Andrea Vos](https://avris.it), [Kolektyw „Rada Języka Neutralnego”](https://zaimki.pl/kolektyw-rjn)
<div class="table-responsive">
<table class="table">
<thead>
<tr class="text-center">
<th>You can</th>
<th>You cannot</th>
<th>You must</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<ul>
<li>Use privately</li>
<li>Use commercially</li>
<li>Modify</li>
<li>Adapt</li>
<li>Distribute</li>
<li>Sublicense</li>
<li>Use a patent</li>
<li>Add a warranty</li>
</ul>
</td>
<td>
<ul>
<li>Hold the Licensor liable</li>
<li>Be a big corporation</li>
<li>Be law enforcement or military</li>
<li>Use for bigoted purposes</li>
<li>Use for violent purposes</li>
<li>Just blatantly resell it<br/><small>(even if laundered through machine learning)</small></li>
</ul>
</td>
<td>
<ul>
<li>Give credit</li>
<li>Indicate changes made</li>
<li>Include license or a link</li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
## Permissions
The creators of this Work (“The Licensor”) grant permission
to any person, group or legal entity that doesn't violate the prohibitions below (“The User”),
to do everything with this Work that would otherwise infringe their copyright or any patent claims,
subject to the following conditions:
## Obligations
The User must give appropriate credit to the Licensor,
provide a copy of this license or a (clickable, if the medium allows) link to
[oql.avris.it/license/v1.1](https://oql.avris.it/license/v1.1),
and indicate whether and what kind of changes were made.
The User may do so in any reasonable manner,
but not in any way that suggests the Licensor endorses the User or their use.
## Prohibitions
No one may use this Work for prejudiced or bigoted purposes, including but not limited to:
racism, xenophobia, queerphobia, queer exclusionism, homophobia, transphobia, enbyphobia, misogyny.
No one may use this Work to inflict or facilitate violence or abuse of human rights as defined in the
[Universal Declaration of Human Rights](https://www.un.org/en/about-us/universal-declaration-of-human-rights).
No law enforcement, carceral institutions, immigration enforcement entities, military entities or military contractors
may use the Work for any reason. This also applies to any individuals employed by those entities.
No business entity where the ratio of pay (salaried, freelance, stocks, or other benefits)
between the highest and lowest individual in the entity is greater than 50 : 1
may use the Work for any reason.
No private business run for profit with more than a thousand employees
may use the Work for any reason.
Unless the User has made substantial changes to the Work,
or uses it only as a part of a new work (eg. as a library, as a part of an anthology, etc.),
they are prohibited from selling the Work.
That prohibition includes processing the Work with machine learning models.
## Sanctions
If the Licensor notifies the User that they have not complied with the rules of the license,
they can keep their license by complying within 30 days after the notice.
If they do not do so, their license ends immediately.
## Warranty
This Work is provided “as is”, without warranty of any kind, express or implied.
The Licensor will not be liable to anyone for any damages related to the Work or this license,
under any kind of legal claim as far as the law allows.
| null | [
"Development Status :: 3 - Alpha",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Programming Language :: Python :: 3.13"
] | [] | >=3.13 | [] | [] | [] | [
"fastapi[standard]>=0.115.8",
"fastapi-mail>=1.4.2",
"email-validator>=2.2.0",
"pydantic-settings>=2.7.1",
"jinja2>=3.1.5",
"sqlmodel>=0.0.22",
"starlette>=0.45.3",
"command-runner>=1.7.0",
"httpx>=0.28.1",
"commons-library>=0.1.1"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/aresende/fastapi-webserver"
] | twine/6.1.0 CPython/3.13.2 | 2025-02-22 18:43:29.331448 UTC | fastapi_webserver-0.3.0.tar.gz | 10367 | 34/43/f58a8c9c77ee0d101b17245d1e8f8f995d3071803a547e7d928c4981bfeb/fastapi_webserver-0.3.0.tar.gz | source | sdist | false | 63bd8d8161d6094477b0d4ec28b50b6f | a2dd412af41d36770a11ee14c299368e0802c4c9eee968085df6b1cd38878c7b | 3443f58a8c9c77ee0d101b17245d1e8f8f995d3071803a547e7d928c4981bfeb | [] | null | null | null |
2.2 | dataguzzler-python | 0.4.1 | dataguzzler-python | Dataguzzler-Python
==================
Dataguzzler-Python is a tool to facilitate data acquisition,
leveraging Python for scripting and interaction.
A Dataguzzler-Python data acquisition system consists of *modules* that
can control and/or capture data from your measurement hardware, and
often additional higher-level *modules* that integrate functionality
provided by the hardware into some sort of virtual instrument.
For basic information see: doc/source/about.rst
For installation instructions see: doc/source/installation.rst
For a quickstart guide see: doc/source/quickstart.rst
Basic requirements are Python v3.8 or above with the following packages: numpy, setuptools, wheel, build, setuptools_scm
Basic installation is (possibly as root or Administrator):
pip install --no-deps --no-build-isolation .
More detailed documentation is also available in doc/source/
To render the documentation use a command prompt, change to the
doc/ directory and type "make". On Windows it will create HTML
documentation in the doc/build/html directory. On Linux you get options
such as "make html" and "make latexpdf" to get different forms
of documentation.
| text/markdown | null | null | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: BSD License"
] | [] | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.9 | 2025-02-22 18:43:30.332252 UTC | dataguzzler_python-0.4.1-py3-none-any.whl | 205035 | 74/03/f85a264b28f4342fe7bde5498c81dfe0dc75cd67d2a10ddd5d17f17d341d/dataguzzler_python-0.4.1-py3-none-any.whl | py3 | bdist_wheel | false | 18abb18f52c9d4d5da58eb0bd7ca1c8d | fa39272eeed5f2abfb16eac743a3c2e15b8def44afd20d035fa24160866ef2e6 | 7403f85a264b28f4342fe7bde5498c81dfe0dc75cd67d2a10ddd5d17f17d341d | [] | Stephen D. Holland | http://thermal.cnde.iastate.edu | null |
2.2 | dataguzzler-python | 0.4.1 | dataguzzler-python | Dataguzzler-Python
==================
Dataguzzler-Python is a tool to facilitate data acquisition,
leveraging Python for scripting and interaction.
A Dataguzzler-Python data acquisition system consists of *modules* that
can control and/or capture data from your measurement hardware, and
often additional higher-level *modules* that integrate functionality
provided by the hardware into some sort of virtual instrument.
For basic information see: doc/source/about.rst
For installation instructions see: doc/source/installation.rst
For a quickstart guide see: doc/source/quickstart.rst
Basic requirements are Python v3.8 or above with the following packages: numpy, setuptools, wheel, build, setuptools_scm
Basic installation is (possibly as root or Administrator):
pip install --no-deps --no-build-isolation .
More detailed documentation is also available in doc/source/
To render the documentation use a command prompt, change to the
doc/ directory and type "make". On Windows it will create HTML
documentation in the doc/build/html directory. On Linux you get options
such as "make html" and "make latexpdf" to get different forms
of documentation.
| text/markdown | null | null | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: BSD License"
] | [] | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.9 | 2025-02-22 18:43:32.808411 UTC | dataguzzler_python-0.4.1.tar.gz | 663533 | 8c/41/93daa859cbf70fa68a6885b1ce7172ca3cd1acef0000639ac9c3d8e3650a/dataguzzler_python-0.4.1.tar.gz | source | sdist | false | 12d0522ecfc5840c51cebfa937960cb0 | 7d3c45c31cd228d17d48dbddd9def3b97af96c5269a5590a190f068b3f87d16a | 8c4193daa859cbf70fa68a6885b1ce7172ca3cd1acef0000639ac9c3d8e3650a | [] | Stephen D. Holland | http://thermal.cnde.iastate.edu | null |
2.1 | TFQ0tool | 0.2.4 | is a command-line utility for extracting text from various file formats, including text files, PDFs, Word documents, spreadsheets, and code files in popular programming languages. | # TFQ0tool
**TFQ0tool is a command-line utility for extracting text from various file formats, including text files, PDFs, Word documents, spreadsheets, and code files in popular programming languages.**
[](https://www.python.org/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/tfq0tool/)
## Features ✨
- 📄 **Multi-format support**: PDF, Word, Excel, TXT, and 8+ code formats
- ⚡ **Parallel processing**: Multi-threaded extraction for bulk operations
- 🛡️ **Robust error handling**: Clear error messages and file validation
- 📦 **Auto-output naming**: Generates organized output files/directories
## Installation 💻
### From PyPI (Recommended)
1. Install with pipx:
```bash
pipx install tfq0tool
```
2. Run the tool:
```bash
pipx run tfq0tool
```
### From the repository
```bash
git clone https://github.com/tfq0/TFQ0tool.git
cd TFQ0tool
pip install -r requirements.txt
python tfq-tool.py
```
## Usage 🛠️
```bash
# Basic command
tfq0tool [FILES] [OPTIONS]

# Single file extraction
tfq0tool document.pdf --output results.txt

# Batch processing with 4 threads
tfq0tool *.pdf *.docx --threads 4 --output ./extracted_texts

# Force overwrite existing files
tfq0tool data.xlsx --output output.txt --force
```
## Options ⚙️
| Flag | Description |
|------|-------------|
| `-o, --output` | Output path (file or directory) |
| `-t, --threads` | Thread count (default: 1) |
| `-v, --verbose` | Show detailed processing logs |
| `-f, --force` | Overwrite files without confirmation |
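Although the tool is driven from the command line, it can also be invoked from Python via `subprocess`, using the documented flags. A minimal sketch:
```python
import subprocess

# Run the documented CLI; the flags mirror the options table above.
result = subprocess.run(
    ["tfq0tool", "document.pdf", "--output", "results.txt", "--verbose"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("extraction failed:", result.stderr)
```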
| text/markdown | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | >=3.8 | [] | [] | [] | [
"PyPDF2",
"python-docx",
"openpyxl",
"pdfminer.six"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.9 | 2025-02-22 18:43:52.986199 UTC | TFQ0tool-0.2.4-py3-none-any.whl | 5604 | 89/92/a1c44cb5a3f654cdeb61fd7ee9209b2f81f419d1b1f4a8c98827c639d921/TFQ0tool-0.2.4-py3-none-any.whl | py3 | bdist_wheel | false | f6c7d00082a29613643e4dc43760016e | 6a80d7eacac5a98b467c37613a067b9d3f0f0252a6c7852905b48edec8ed976a | 8992a1c44cb5a3f654cdeb61fd7ee9209b2f81f419d1b1f4a8c98827c639d921 | [] | Talal | https://github.com/tfq0/tfq0tool | null |
2.1 | TFQ0tool | 0.2.4 | is a command-line utility for extracting text from various file formats, including text files, PDFs, Word documents, spreadsheets, and code files in popular programming languages. | # TFQ0tool
**TFQ0tool is a command-line utility for extracting text from various file formats, including text files, PDFs, Word documents, spreadsheets, and code files in popular programming languages.**
[](https://www.python.org/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/tfq0tool/)
## Features ✨
- 📄 **Multi-format support**: PDF, Word, Excel, TXT, and 8+ code formats
- ⚡ **Parallel processing**: Multi-threaded extraction for bulk operations
- 🛡️ **Robust error handling**: Clear error messages and file validation
- 📦 **Auto-output naming**: Generates organized output files/directories
## Installation 💻
### From PyPI (Recommended)
1. Install with pipx:
```bash
pipx install tfq0tool
```
2. Run the tool:
```bash
pipx run tfq0tool
```
### From the repository
```bash
git clone https://github.com/tfq0/TFQ0tool.git
cd TFQ0tool
pip install -r requirements.txt
python tfq-tool.py
```
## Usage 🛠️
```bash
# Basic command
tfq0tool [FILES] [OPTIONS]

# Single file extraction
tfq0tool document.pdf --output results.txt

# Batch processing with 4 threads
tfq0tool *.pdf *.docx --threads 4 --output ./extracted_texts

# Force overwrite existing files
tfq0tool data.xlsx --output output.txt --force
```
## Options ⚙️
| Flag | Description |
|------|-------------|
| `-o, --output` | Output path (file or directory) |
| `-t, --threads` | Thread count (default: 1) |
| `-v, --verbose` | Show detailed processing logs |
| `-f, --force` | Overwrite files without confirmation |
| text/markdown | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.9 | 2025-02-22 18:43:53.984702 UTC | TFQ0tool-0.2.4.tar.gz | 4725 | 31/f6/285c318a94ec187553dcb6770ce8278222e78498fda1aaaa5d41ebfc214c/TFQ0tool-0.2.4.tar.gz | source | sdist | false | 784b6bca6019a767ada5817e20813407 | 7322ac6e20e6684051905db508540bee4a572b2d8e884ec55c5f5e904ddb282b | 31f6285c318a94ec187553dcb6770ce8278222e78498fda1aaaa5d41ebfc214c | [] | Talal | https://github.com/tfq0/tfq0tool | null |
2.2 | hape | 0.2.95 | HAPE Framework: Build an Automation Tool With Ease |
<img src="https://raw.githubusercontent.com/hazemataya94/hape-framework/refs/heads/main/docs/logo.png" width="100%">
# HAPE Framework: Overview & Features
## What is HAPE Framework?
HAPE Framework is a lightweight and extensible Python framework designed to help platform engineers build customized CLI and API-driven platforms with minimal effort. It provides a structured way to develop orchestrators for managing infrastructure, CI/CD pipelines, cloud resources, and other platform engineering needs.
HAPE Framework is built around abstraction and automation, allowing engineers to define and manage resources like AWS, Kubernetes, GitHub, GitLab, ArgoCD, Prometheus, Grafana, HashiCorp Vault, and many others in a unified manner. It eliminates the need to manually integrate multiple packages for each tool, offering a streamlined way to build self-service developer portals and engineering platforms.
## Idea Origin
Modern organizations manage hundreds of microservices, each with its own infrastructure, CI/CD, monitoring, and deployment configurations. This complexity increases the cognitive load on developers and slows down platform operations.
HAPE Framework aims to reduce this complexity by enabling platform engineers to build opinionated, yet flexible automation tools that simplify the work to build a platform.
With HAPE, developers can interact with a CLI or API to create, deploy, and manage their services without diving into complex configurations. The framework also supports custom state management via databases, and integration with existing DevOps tools.
## Done Features
### Automate everyday commands
```sh
$ make list
build Build the package in dist. Runs: bump-version.
bump-version Bump the patch version in setup.py.
clean Clean up build, cache, playground and zip files.
docker-down Stop Docker services.
docker-exec Execute a shell in the HAPE Docker container.
docker-ps List running Docker services.
docker-python Runs a Python container in playground directory.
docker-restart Restart Docker services.
docker-up Start Docker services.
freeze-cli Freeze dependencies for CLI.
freeze-dev Freeze dependencies for development.
git-hooks Install hooks in .git-hooks/ to .git/hooks/.
init-cli Install CLI dependencies.
init-dev Install development dependencies in .venv, docker-compose up -d, and create .env if not exist.
install Install the package.
list Show available commands.
migration-create Create a new database migration.
migration-run Apply the latest database migrations.
play Run hape.playground Playground.play() and print the execution time.
publish Publish package to public PyPI, commit, tag, and push the version. Runs: test-code,build.
reset-data Deletes hello-world project from previous tests, drops and creates database hape_db.
reset-local Deletes hello-world project from previous tests, drops and creates database hape_db, runs migrations, and runs the playground.
source-env Print export statements for the environment variables from .env file.
test-cli Run a new python container, installs hape cli and runs all tests inside it.
test-code Runs containers in dockerfiles/docker-compose.yml, Deletes hello-world project from previous tests, and run all code automated tests.
zip Create a zip archive excluding local files and playground.
```
### Publish to public PyPI repository
```sh
$ make publish
Bumping patch version in setup.py...
Version updated to 0.x.x
* Creating isolated environment: venv+pip...
* Installing packages in isolated environment:
- setuptools >= 40.8.0
* Getting build dependencies for sdist...
0.x.x
.
Successfully built hape-0.x.x.tar.gz and hape-0.x.x-py3-none-any.whl
Uploading distributions to https://upload.pypi.org/legacy/
Uploading hape-0.x.x-py3-none-any.whl
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.6/63.6 kB • 00:00 • 55.1 MB/s
Uploading hape-0.x.x.tar.gz
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.3/54.3 kB • 00:00 • 35.6 MB/s
.
View at:
https://pypi.org/project/hape/0.x.x/
.
Pushing commits
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
.
Pushing tags
Total 0 (delta 0), reused 0 (delta 0), pack-reused 0
To github.com:hazemataya94/hape-framework.git
* [new tag] 0.x.x -> 0.x.x
Python files detected, running code tests...
Making sure hape container is running
hape hape:dev "sleep infinity" hape 9 hours ago Up 9 hours
Removing hello-world project from previous tests
Dropping and creating database hape_db
Running all tests in hape container defined in dockerfiles/docker-compose.yml
=============================================================
Running all code tests
=============================================================
Running ./tests/init-project.sh
--------------------------------
Installing tree if not installed
Deleting project hello-world if exists
Initializing project hello-world
...
$ hape crud delete --delete test-model
Deleted: hello_world/models/test_model_model.py
Deleted: hello_world/controllers/test_model_controller.py
Deleted: hello_world/argument_parsers/test_model_argument_parser.py
All model files -except the migration file- have been deleted successfully!
=============================================================
All tests finished successfully!
```
### Install latest `hape` CLI
```sh
$ make install
```
or
```sh
$ pip install --upgrade hape
```
### Support Initializing Project
```sh
$ hape init project --name hello-world
Project hello-world has been successfully initialized!
$ tree hello-world
hello-world
├── MANIFEST.in
├── Makefile
├── README.md
├── alembic.ini
├── dockerfiles
│   ├── Dockerfile.dev
│   ├── Dockerfile.prod
│   └── docker-compose.yml
├── hello_world
│   ├── __init__.py
│   ├── argument_parsers
│   │   ├── __init__.py
│   │   ├── main_argument_parser.py
│   │   └── playground_argument_parser.py
│   ├── bootstrap.py
│   ├── cli.py
│   ├── controllers
│   │   └── __init__.py
│   ├── enums
│   │   └── __init__.py
│   ├── migrations
│   │   ├── README
│   │   ├── env.py
│   │   ├── json
│   │   │   └── 000001_migration.json
│   │   ├── script.py.mako
│   │   ├── versions
│   │   │   └── 000001_migration.py
│   │   └── yaml
│   │       └── 000001_migration.yaml
│   ├── models
│   │   ├── __init__.py
│   │   └── test_model_cost_model.py
│   ├── playground.py
│   └── services
│       └── __init__.py
├── main.py
├── requirements-cli.txt
├── requirements-dev.txt
└── setup.py
```
### Generate CRUD JSON Schema
```sh
$ hape json get --model-schema
{
"valid_types": ["string", "int", "bool", "float", "date", "datetime", "timestamp"],
"valid_properties": ["nullable", "required", "unique", "primary", "autoincrement"],
"name": "model-name",
"schema": {
"column_name": {"valid-type": ["valid-property"]},
"id": {"valid-type": ["valid-property"]},
"updated_at": {"valid-type": []},
"name": {"valid-type": ["valid-property", "valid-property"]},
"enabled": {"valid-type": []},
}
"example_schema": {
"id": {"int": ["primary"]},
"updated_at": {"timestamp": []},
"name": {"string": ["required", "unique"]},
"enabled": {"bool": []}
}
}
```
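A schema can be checked against the `valid_types` and `valid_properties` lists above before handing it to the CRUD generator. A small hand-rolled sketch, not part of hape itself:
```python
VALID_TYPES = {"string", "int", "bool", "float", "date", "datetime", "timestamp"}
VALID_PROPERTIES = {"nullable", "required", "unique", "primary", "autoincrement"}

def validate_schema(schema: dict) -> list:
    """Return a list of problems found in a model schema dict."""
    errors = []
    for column, spec in schema.items():
        for col_type, props in spec.items():
            if col_type not in VALID_TYPES:
                errors.append(f"{column}: unknown type '{col_type}'")
            for prop in props:
                if prop not in VALID_PROPERTIES:
                    errors.append(f"{column}: unknown property '{prop}'")
    return errors

# Example using the schema shown in the output above.
print(validate_schema({"id": {"int": ["primary"]}, "name": {"string": ["required", "unique"]}}))
```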
### Generate CRUD YAML Schema
```sh
$ hape yaml get --model-schema
valid_types: ["string", "int", "bool", "float", "date", "datetime", "timestamp"]
valid_properties: ["nullable", "required", "unique", "primary", "autoincrement"]
name: model-name
schema:
column_name:
valid-type:
- valid-property
id:
valid-type:
- valid-property
updated_at:
valid-type: []
name:
valid-type:
- valid-property
- valid-property
enabled:
valid-type: []
example_schema:
id:
int:
- primary
updated_at:
timestamp: []
name:
string:
- required
- unique
enabled:
bool: []
```
## In Progress Features
### Create GitHub Project to Manage Issues, Tasks, and Future Work
### Support CRUD Generate and Create migrations/json/model_name.json
```sh
$ hape crud generate --json '
{
"name": "deployment-cost"
"schema": {
"id": ["int","autoincrement"],
"service-name": ["string"],
"pod-cpu": ["string"],
"pod-ram": ["string"],
"autoscaling": ["bool"],
"min-replicas": ["int","nullable"],
"max-replicas": ["int","nullable"],
"current-replicas": ["int"],
"pod-cost": ["string"],
"number-of-pods": ["int"],
"total-cost": ["float"],
"cost-unit": ["string"]
}
}
"'
$ hape deployment-cost --help
usage: myawesomeplatform deployment-cost [-h] {save,get,get-all,delete,delete-all} ...
positional arguments:
{save,get,get-all,delete,delete-all}
save Save DeploymentCost object based on passed arguments or filters
get Get DeploymentCost object based on passed arguments or filters
get-all Get-all DeploymentCost objects based on passed arguments or filters
delete Delete DeploymentCost object based on passed arguments or filters
delete-all Delete-all DeploymentCost objects based on passed arguments or filters
options:
-h, --help show this help message and exit
```
## TODO Features
### Create migrations/json/model_name.json and run CRUD Generation for each file in migrations/schema_json/*.json if models/file.py doesn't exist
```sh
$ export MY_JSON_FILE="""
{
"name": "deployment-cost"
"schema": {
"id": ["int","autoincrement"],
"service-name": ["string"],
"pod-cpu": ["string"],
"pod-ram": ["string"],
"autoscaling": ["bool"],
"min-replicas": ["int","nullable"],
"max-replicas": ["int","nullable"],
"current-replicas": ["int"],
"pod-cost": ["string"],
"number-of-pods": ["int"],
"total-cost": ["float"],
"cost-unit": ["string"]
}
}
"""
$ echo "${MY_JSON_FILE}" > migrations/schema_json/deployment_cost.json
$ hape crud generate
$ hape deployment-cost --help
usage: hape deployment-cost [-h] {save,get,get-all,delete,delete-all} ...
positional arguments:
{save,get,get-all,delete,delete-all}
save Save DeploymentCost object based on passed arguments or filters
get Get DeploymentCost object based on passed arguments or filters
get-all Get-all DeploymentCost objects based on passed arguments or filters
delete Delete DeploymentCost object based on passed arguments or filters
delete-all Delete-all DeploymentCost objects based on passed arguments or filters
options:
-h, --help show this help message and exit
```
### Generate CHANGELOG.md
```sh
$ hape changelog generate
$ echo "empty" > file.txt
$ git add file.txt
$ git commit -m "empty"
$ git push
$ make publish
$ hape changelog generate # generate CHANGELOG.md from scratch
$ hape changelog update # append missing versions to CHANGELOG.md
```
### Support Scalable Secure RESTful API
```sh
$ hape serve http --allow-cidr '0.0.0.0/0,10.0.1.0/24' --deny-cidr '10.200.0.0/24,0,10.0.1.0/24,10.107.0.0/24' --workers 2 --port 80
or
$ hape serve http --json """
{
"port": 8088
"allow-cidr": "0.0.0.0/0,10.0.1.0/24",
"deny-cidr": "10.200.0.0/24,0,10.0.1.0/24,10.107.0.0/24"
}
"""
Spawning workers
hape-worker-random-string-1 is up
hape-worker-random-string-2 failed
hape-worker-random-string-2 restarting (up to 3 times)
hape-worker-random-string-2 is up
All workers are up
Database connection established
Any other needed step
Serving HAPE on http://127.0.0.1:8088
```
### Support CRUD Environment Variables
```sh
$ hape env add --key MY_ENV_KEY --value MY_ENV_VALUE
$ hape env get --key MY_ENV_KEY
MY_ENV_KEY=MY_ENV_VALUE
$ hape env delete --key MY_ENV_KEY
$ hape env get --key MY_ENV_KEY
MY_ENV_KEY=MY_ENV_VALUE
```
### Store Configuration in Database
```sh
$ hape config add --key MY_CONFIG_KEY --value MY_CONFIG_VALUE
$ hape config set --key MY_CONFIG_KEY --value MY_CONFIG_VALUE
$ hape config set --key MY_CONFIG_KEY --value MY_CONFIG_VALUE
$ hape config get --key MY_CONFIG_KEY
MY_CONFIG_KEY=MY_CONFIG_VALUE
$ hape config delete --key MY_CONFIG_KEY
$ hape config get --key MY_CONFIG_KEY
MY_CONFIG_KEY=MY_CONFIG_VALUE
```
### Run Using Environment Variables or Database Configuration
```sh
$ hape config set --config_source env
$ hape config set --config_source db
$ hape config set --config_env_prefix MY_ENV_PREFIX
```
| text/markdown | hazem.ataya94@gmail.com | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | >=3.9 | [] | [] | [] | [
"alembic==1.14.1",
"build==1.2.2.post1",
"cachetools==5.5.1",
"certifi==2024.12.14",
"cffi==1.17.1",
"charset-normalizer==3.4.1",
"cryptography==44.0.0",
"docutils==0.21.2",
"durationpy==0.9",
"google-auth==2.38.0",
"greenlet==3.1.1",
"idna==3.10",
"iniconfig==2.0.0",
"jaraco.classes==3.4.0",
"jaraco.context==6.0.1",
"jaraco.functools==4.1.0",
"keyring==25.6.0",
"kubernetes==31.0.0",
"Mako==1.3.8",
"markdown-it-py==3.0.0",
"MarkupSafe==3.0.2",
"mdurl==0.1.2",
"more-itertools==10.6.0",
"mysql==0.0.3",
"mysql-connector-python==9.2.0",
"mysqlclient==2.2.7",
"nh3==0.2.20",
"oauthlib==3.2.2",
"packaging==24.2",
"pkginfo==1.12.0",
"pluggy==1.5.0",
"pyasn1==0.6.1",
"pyasn1_modules==0.4.1",
"pycparser==2.22",
"Pygments==2.19.1",
"PyMySQL==1.1.1",
"pyproject_hooks==1.2.0",
"pytest==8.3.4",
"python-dateutil==2.9.0.post0",
"python-dotenv==1.0.1",
"python-gitlab==5.6.0",
"python-json-logger==3.2.1",
"PyYAML==6.0.2",
"readme_renderer==44.0",
"requests==2.32.3",
"requests-oauthlib==2.0.0",
"requests-toolbelt==1.0.0",
"rfc3986==2.0.0",
"rich==13.9.4",
"rsa==4.9",
"ruamel.yaml==0.18.10",
"ruamel.yaml.clib==0.2.12",
"setuptools==75.8.0",
"six==1.17.0",
"SQLAlchemy==2.0.37",
"twine==6.0.1",
"typing_extensions==4.12.2",
"urllib3==2.3.0",
"websocket-client==1.8.0",
"wheel==0.45.1"
] | [] | [] | [] | [] | twine/6.0.1 CPython/3.13.1 | 2025-02-22 18:44:34.775102 UTC | hape-0.2.95-py3-none-any.whl | 65624 | 7f/95/6ce1ab828eaf64e5e6f86aeb3d048aebac5623d545d388f81ad4307a3a95/hape-0.2.95-py3-none-any.whl | py3 | bdist_wheel | false | c0cff297d4f632567585494b942c7f78 | 6da28f10e87cbd45f618cd6445b235301eb88efc1a0657d5e2171cac89b8b605 | 7f956ce1ab828eaf64e5e6f86aeb3d048aebac5623d545d388f81ad4307a3a95 | [] | Hazem Ataya | https://github.com/hazemataya94/hape-framework | null |
2.2 | hape | 0.2.95 | HAPE Framework: Build an Automation Tool With Ease |
<img src="https://raw.githubusercontent.com/hazemataya94/hape-framework/refs/heads/main/docs/logo.png" width="100%">
# HAPE Framework: Overview & Features
## What is HAPE Framework?
HAPE Framework is a lightweight and extensible Python framework designed to help platform engineers build customized CLI and API-driven platforms with minimal effort. It provides a structured way to develop orchestrators for managing infrastructure, CI/CD pipelines, cloud resources, and other platform engineering needs.
HAPE Framework is built around abstraction and automation, allowing engineers to define and manage resources like AWS, Kubernetes, GitHub, GitLab, ArgoCD, Prometheus, Grafana, HashiCorp Vault, and many others in a unified manner. It eliminates the need to manually integrate multiple packages for each tool, offering a streamlined way to build self-service developer portals and engineering platforms.
## Idea Origin
Modern organizations manage hundreds of microservices, each with its own infrastructure, CI/CD, monitoring, and deployment configurations. This complexity increases the cognitive load on developers and slows down platform operations.
HAPE Framework aims to reduce this complexity by enabling platform engineers to build opinionated, yet flexible automation tools that simplify the work to build a platform.
With HAPE, developers can interact with a CLI or API to create, deploy, and manage their services without diving into complex configurations. The framework also supports custom state management via databases, and integration with existing DevOps tools.
## Done Features
### Automate everyday commands
```sh
$ make list
build              Build the package in dist. Runs: bump-version.
bump-version       Bump the patch version in setup.py.
clean              Clean up build, cache, playground and zip files.
docker-down        Stop Docker services.
docker-exec        Execute a shell in the HAPE Docker container.
docker-ps          List running Docker services.
docker-python      Run a Python container in the playground directory.
docker-restart     Restart Docker services.
docker-up          Start Docker services.
freeze-cli         Freeze dependencies for CLI.
freeze-dev         Freeze dependencies for development.
git-hooks          Install hooks in .git-hooks/ to .git/hooks/.
init-cli           Install CLI dependencies.
init-dev           Install development dependencies in .venv, docker-compose up -d, and create .env if it does not exist.
install            Install the package.
list               Show available commands.
migration-create   Create a new database migration.
migration-run      Apply the latest database migrations.
play               Run hape.playground Playground.play() and print the execution time.
publish            Publish package to public PyPI, commit, tag, and push the version. Runs: test-code, build.
reset-data         Delete the hello-world project from previous tests, drop and recreate database hape_db.
reset-local        Delete the hello-world project from previous tests, drop and recreate database hape_db, run migrations, and run the playground.
source-env         Print export statements for the environment variables from the .env file.
test-cli           Run a new Python container, install the hape CLI, and run all tests inside it.
test-code          Run containers in dockerfiles/docker-compose.yml, delete the hello-world project from previous tests, and run all code automated tests.
zip                Create a zip archive excluding local files and playground.
```
### Publish to public PyPI repository
```sh
$ make publish
Bumping patch version in setup.py...
Version updated to 0.x.x
* Creating isolated environment: venv+pip...
* Installing packages in isolated environment:
- setuptools >= 40.8.0
* Getting build dependencies for sdist...
0.x.x
.
Successfully built hape-0.x.x.tar.gz and hape-0.x.x-py3-none-any.whl
Uploading distributions to https://upload.pypi.org/legacy/
Uploading hape-0.x.x-py3-none-any.whl
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.6/63.6 kB • 00:00 • 55.1 MB/s
Uploading hape-0.x.x.tar.gz
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.3/54.3 kB • 00:00 • 35.6 MB/s
.
View at:
https://pypi.org/project/hape/0.x.x/
.
Pushing commits
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
.
Pushing tags
Total 0 (delta 0), reused 0 (delta 0), pack-reused 0
To github.com:hazemataya94/hape-framework.git
* [new tag] 0.x.x -> 0.x.x
Python files detected, running code tests...
Making sure hape container is running
hape hape:dev "sleep infinity" hape 9 hours ago Up 9 hours
Removing hello-world project from previous tests
Dropping and creating database hape_db
Running all tests in hape container defined in dockerfiles/docker-compose.yml
=============================================================
Running all code tests
=============================================================
Running ./tests/init-project.sh
--------------------------------
Installing tree if not installed
Deleting project hello-world if exists
Initializing project hello-world
...
$ hape crud delete --delete test-model
Deleted: hello_world/models/test_model_model.py
Deleted: hello_world/controllers/test_model_controller.py
Deleted: hello_world/argument_parsers/test_model_argument_parser.py
All model files -except the migration file- have been deleted successfully!
=============================================================
All tests finished successfully!
```
### Install latest `hape` CLI
```sh
$ make install
```
or
```sh
$ pip install --upgrade hape
```
### Support Initializing Project
```sh
$ hape init project --name hello-world
Project hello-world has been successfully initialized!
$ tree hello-world
hello-world
├── MANIFEST.in
├── Makefile
├── README.md
├── alembic.ini
├── dockerfiles
│   ├── Dockerfile.dev
│   ├── Dockerfile.prod
│   └── docker-compose.yml
├── hello_world
│   ├── __init__.py
│   ├── argument_parsers
│   │   ├── __init__.py
│   │   ├── main_argument_parser.py
│   │   └── playground_argument_parser.py
│   ├── bootstrap.py
│   ├── cli.py
│   ├── controllers
│   │   └── __init__.py
│   ├── enums
│   │   └── __init__.py
│   ├── migrations
│   │   ├── README
│   │   ├── env.py
│   │   ├── json
│   │   │   └── 000001_migration.json
│   │   ├── script.py.mako
│   │   ├── versions
│   │   │   └── 000001_migration.py
│   │   └── yaml
│   │       └── 000001_migration.yaml
│   ├── models
│   │   ├── __init__.py
│   │   └── test_model_cost_model.py
│   ├── playground.py
│   └── services
│       └── __init__.py
├── main.py
├── requirements-cli.txt
├── requirements-dev.txt
└── setup.py
```
### Generate CRUD JSON Schema
```sh
$ hape json get --model-schema
{
"valid_types": ["string", "int", "bool", "float", "date", "datetime", "timestamp"],
"valid_properties": ["nullable", "required", "unique", "primary", "autoincrement"],
"name": "model-name",
"schema": {
"column_name": {"valid-type": ["valid-property"]},
"id": {"valid-type": ["valid-property"]},
"updated_at": {"valid-type": []},
"name": {"valid-type": ["valid-property", "valid-property"]},
"enabled": {"valid-type": []},
}
"example_schema": {
"id": {"int": ["primary"]},
"updated_at": {"timestamp": []},
"name": {"string": ["required", "unique"]},
"enabled": {"bool": []}
}
}
```
### Generate CRUD YAML Schema
```sh
$ hape yaml get --model-schema
valid_types: ["string", "int", "bool", "float", "date", "datetime", "timestamp"]
valid_properties: ["nullable", "required", "unique", "primary", "autoincrement"]
name: model-name
schema:
column_name:
valid-type:
- valid-property
id:
valid-type:
- valid-property
updated_at:
valid-type: []
name:
valid-type:
- valid-property
- valid-property
enabled:
valid-type: []
example_schema:
id:
int:
- primary
updated_at:
timestamp: []
name:
string:
- required
- unique
enabled:
bool: []
```
## In Progress Features
### Create GitHub Project to Manage issues, tasks and future work
### Support CRUD Generation and Create migrations/json/model_name.json
```sh
$ hape crud generate --json '
{
"name": "deployment-cost"
"schema": {
"id": ["int","autoincrement"],
"service-name": ["string"],
"pod-cpu": ["string"],
"pod-ram": ["string"],
"autoscaling": ["bool"],
"min-replicas": ["int","nullable"],
"max-replicas": ["int","nullable"],
"current-replicas": ["int"],
"pod-cost": ["string"],
"number-of-pods": ["int"],
"total-cost": ["float"],
"cost-unit": ["string"]
}
}
"'
$ hape deployment-cost --help
usage: myawesomeplatform deployment-cost [-h] {save,get,get-all,delete,delete-all} ...
positional arguments:
{save,get,get-all,delete,delete-all}
save Save DeploymentCost object based on passed arguments or filters
get Get DeploymentCost object based on passed arguments or filters
get-all Get-all DeploymentCost objects based on passed arguments or filters
delete Delete DeploymentCost object based on passed arguments or filters
delete-all Delete-all DeploymentCost objects based on passed arguments or filters
options:
-h, --help show this help message and exit
```
## TODO Features
### Create migrations/json/model_name.json and run CRUD Generation for each file in migrations/schema_json/{*}.json if models/file.py doesn't exist
```sh
$ export MY_JSON_FILE="""
{
"name": "deployment-cost"
"schema": {
"id": ["int","autoincrement"],
"service-name": ["string"],
"pod-cpu": ["string"],
"pod-ram": ["string"],
"autoscaling": ["bool"],
"min-replicas": ["int","nullable"],
"max-replicas": ["int","nullable"],
"current-replicas": ["int"],
"pod-cost": ["string"],
"number-of-pods": ["int"],
"total-cost": ["float"],
"cost-unit": ["string"]
}
}
"""
$ echo "${MY_JSON_FILE}" > migrations/schema_json/deployment_cost.json
$ hape crud generate
$ hape deployment-cost --help
usage: hape deployment-cost [-h] {save,get,get-all,delete,delete-all} ...
positional arguments:
{save,get,get-all,delete,delete-all}
save Save DeploymentCost object based on passed arguments or filters
get Get DeploymentCost object based on passed arguments or filters
get-all Get-all DeploymentCost objects based on passed arguments or filters
delete Delete DeploymentCost object based on passed arguments or filters
delete-all Delete-all DeploymentCost objects based on passed arguments or filters
options:
-h, --help show this help message and exit
```
### Generate CHANGELOG.md
```sh
$ hape changelog generate
$ echo "empty" > file.txt
$ git add file.txt
$ git commit -m "empty"
$ git push
$ make publish
$ hape changelog generate # generate CHANGELOG.md from scratch
$ hape changelog update # append missing versions to CHANGELOG.md
```
### Support Scalable Secure RESTful API
```sh
$ hape serve http --allow-cidr '0.0.0.0/0,10.0.1.0/24' --deny-cidr '10.200.0.0/24,0,10.0.1.0/24,10.107.0.0/24' --workers 2 --port 80
# or
$ hape serve http --json """
{
"port": 8088
"allow-cidr": "0.0.0.0/0,10.0.1.0/24",
"deny-cidr": "10.200.0.0/24,0,10.0.1.0/24,10.107.0.0/24"
}
"""
Spawning workers
hape-worker-random-string-1 is up
hape-worker-random-string-2 failed
hape-worker-random-string-2 restarting (up to 3 times)
hape-worker-random-string-2 is up
All workers are up
Database connection established
Any other needed step
Serving HAPE on http://127.0.0.1:8088
```
### Support CRUD Environment Variables
```sh
$ hape env add --key MY_ENV_KEY --value MY_ENV_VALUE
$ hape env get --key MY_ENV_KEY
MY_ENV_KEY=MY_ENV_VALUE
$ hape env delete --key MY_ENV_KEY
$ hape env get --key MY_ENV_KEY
```
### Store Configuration in Database
```sh
$ hape config add --key MY_CONFIG_KEY --value MY_CONFIG_VALUE
$ hape config set --key MY_CONFIG_KEY --value MY_CONFIG_VALUE
$ hape config get --key MY_CONFIG_KEY
MY_CONFIG_KEY=MY_CONFIG_VALUE
$ hape config delete --key MY_CONFIG_KEY
$ hape config get --key MY_CONFIG_KEY
```
### Run Using Environment Variables or Database Configuration
```sh
$ hape config set --config_source env
$ hape config set --config_source db
$ hape config set --config_env_prefix MY_ENV_PREFIX
```
| text/markdown | hazem.ataya94@gmail.com | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | >=3.9 | [] | [] | [] | [
"alembic==1.14.1",
"build==1.2.2.post1",
"cachetools==5.5.1",
"certifi==2024.12.14",
"cffi==1.17.1",
"charset-normalizer==3.4.1",
"cryptography==44.0.0",
"docutils==0.21.2",
"durationpy==0.9",
"google-auth==2.38.0",
"greenlet==3.1.1",
"idna==3.10",
"iniconfig==2.0.0",
"jaraco.classes==3.4.0",
"jaraco.context==6.0.1",
"jaraco.functools==4.1.0",
"keyring==25.6.0",
"kubernetes==31.0.0",
"Mako==1.3.8",
"markdown-it-py==3.0.0",
"MarkupSafe==3.0.2",
"mdurl==0.1.2",
"more-itertools==10.6.0",
"mysql==0.0.3",
"mysql-connector-python==9.2.0",
"mysqlclient==2.2.7",
"nh3==0.2.20",
"oauthlib==3.2.2",
"packaging==24.2",
"pkginfo==1.12.0",
"pluggy==1.5.0",
"pyasn1==0.6.1",
"pyasn1_modules==0.4.1",
"pycparser==2.22",
"Pygments==2.19.1",
"PyMySQL==1.1.1",
"pyproject_hooks==1.2.0",
"pytest==8.3.4",
"python-dateutil==2.9.0.post0",
"python-dotenv==1.0.1",
"python-gitlab==5.6.0",
"python-json-logger==3.2.1",
"PyYAML==6.0.2",
"readme_renderer==44.0",
"requests==2.32.3",
"requests-oauthlib==2.0.0",
"requests-toolbelt==1.0.0",
"rfc3986==2.0.0",
"rich==13.9.4",
"rsa==4.9",
"ruamel.yaml==0.18.10",
"ruamel.yaml.clib==0.2.12",
"setuptools==75.8.0",
"six==1.17.0",
"SQLAlchemy==2.0.37",
"twine==6.0.1",
"typing_extensions==4.12.2",
"urllib3==2.3.0",
"websocket-client==1.8.0",
"wheel==0.45.1"
] | [] | [] | [] | [] | twine/6.0.1 CPython/3.13.1 | 2025-02-22 18:44:36.927973 UTC | hape-0.2.95.tar.gz | 48305 | 0b/dd/d93a4849fa3b896b992248720e21e731c37f4ec902cefba24b42076bb683/hape-0.2.95.tar.gz | source | sdist | false | a08db6bc729d91b5a44699b7b06226bf | be8291fdfd261011b0745f3aea0ede12e20b008525b649fc76820a92c77b84fd | 0bddd93a4849fa3b896b992248720e21e731c37f4ec902cefba24b42076bb683 | [] | Hazem Ataya | https://github.com/hazemataya94/hape-framework | null |
2.2 | tccpy | 0.17 | A Python implementation of the Target Confusability Competition (TCC) memory model | null | null | null | null | null | [] | [] | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.5 | 2025-02-22 18:44:49.905907 UTC | tccpy-0.17-py3-none-any.whl | 7543 | 6c/bf/ea635eb77ca1b50b2f6f8c571f0d06b781e6189eaa42c73172443e9cb8df/tccpy-0.17-py3-none-any.whl | py3 | bdist_wheel | false | 87d652b324433ac2f4f58d4d3acb458c | 86318cd19a11cb2b8c8902970577807de215c16f0eaa3489be0796f4fa59029f | 6cbfea635eb77ca1b50b2f6f8c571f0d06b781e6189eaa42c73172443e9cb8df | [] | null | https://github.com/ilabsweden/tccpy | null |
2.2 | tccpy | 0.17 | A Python implementation of the Target Confusability Competition (TCC) memory model | null | null | null | null | null | [] | [] | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.5 | 2025-02-22 18:44:53.30865 UTC | tccpy-0.17.tar.gz | 6496 | 32/12/8c1b74414dd7d09fd4ad208500fcc23f2d766b93a47755cc4735649d922f/tccpy-0.17.tar.gz | source | sdist | false | e840426a84b682b8392507a0d3404f77 | 3cd122e1a327abcad52731fc19e6774032e611a11a8f423ba9b94baeb7c22a0e | 32128c1b74414dd7d09fd4ad208500fcc23f2d766b93a47755cc4735649d922f | [] | null | https://github.com/ilabsweden/tccpy | null |
2.2 | segnivo-python-sdk | 1.7.16 | Segnivo Developer API | # segnivo-python-sdk
**API Version**: 1.7
**Date**: 9th July, 2024
## Getting Started
This API is based on the REST API architecture, allowing the user to easily manage their data with this resource-based approach.
Every API call specifies which request type (GET, POST, PUT, DELETE) will be used.
The API must not be abused and should be used within acceptable limits.
To start using this API, you will need to create or access an existing Segnivo account to obtain your API key ([retrievable from your account settings](https://messaging.segnivo.com/account/api)).
- You must use a valid API Key to send requests to the API endpoints.
- The API only responds to HTTPS-secured communications. Any requests sent via HTTP return an HTTP 301 redirect to the corresponding HTTPS resources.
- The API returns request responses in JSON format. When an API request returns an error, it is sent in the JSON response as an error key or with details in the message key.
### **Need some help?**
If you have questions or need clarity on interacting with some endpoints, feel free to create a support ticket on your account, or send an email ([<i>developers@segnivo.com</i>](mailto:developers@segnivo.com)) directly and we will be happy to help.
---
## Authentication
As noted earlier, this API uses API keys for authentication. You can generate a Segnivo API key in the [API](https://messaging.segnivo.com/account/api) section of your account settings.
You must include an API key in each request to this API with the `X-API-KEY` request header.
### Authentication error response
If an API key is missing, malformed, or invalid, you will receive an HTTP 401 Unauthorized response code.
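As a rough sketch of handling this with the generated Python SDK (the `status` attribute is assumed from typical OpenAPI-generated clients, not confirmed by this README):

```python
from segnivo_sdk.rest import ApiException

try:
    ...  # any SDK call, e.g. api_instance.validate_email_post(...)
except ApiException as e:
    if e.status == 401:
        # Missing, malformed, or invalid API key: re-check the X-API-KEY header.
        raise SystemExit("Authentication failed; verify your Segnivo API key.")
    raise
```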
## Rate and usage limits
API access rate limits apply on a per-endpoint basis per unit of time. The limit is 10k requests per hour for most endpoints and 1m requests per hour for transactional/relay email-sending endpoints. Also, depending on your plan, you may have usage limits. If you exceed either limit, your request will return an HTTP 429 Too Many Requests status code, or HTTP 403 if sending credits have been exhausted.
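A minimal client-side sketch for staying within these limits (a hypothetical helper, not part of the SDK) retries on HTTP 429 with exponential backoff and honors a `Retry-After` header when one is present:

```python
import time

import requests


def request_with_backoff(method: str, url: str, api_key: str, max_retries: int = 5, **kwargs):
    """Send a request, backing off when the API answers 429 Too Many Requests."""
    headers = kwargs.pop("headers", {})
    headers["X-API-KEY"] = api_key  # required auth header (see Authentication)
    response = None
    for attempt in range(max_retries):
        response = requests.request(method, url, headers=headers, **kwargs)
        if response.status_code != 429:
            return response
        # Prefer the server's Retry-After hint; otherwise back off exponentially.
        delay = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    return response
```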
### 503 response
An HTTP `503` response from our servers may indicate an unexpected spike in API access traffic. While this rarely happens, the server is usually operational again within the next two to five minutes. If the outage persists or you receive any other form of HTTP `5XX` error, contact support ([<i>developers@segnivo.com</i>](mailto:developers@segnivo.com)).
### Request headers
To make a successful request, some or all of the following headers must be passed with the request.
| **Header** | **Description** |
| --- | --- |
| Content-Type | Required and should be `application/json` in most cases. |
| Accept | Required and should be `application/json` in most cases. |
| Content-Length | Required for `POST`, `PATCH`, and `PUT` requests containing a request body. The value must be the number of bytes rather than the number of characters in the request body. |
| X-API-KEY | Required. Specifies the API key used for authorization. |
##### Note with example requests and code snippets
If you use the code snippets provided as example requests, remember to calculate and add the `Content-Length` header. Some request libraries, frameworks, and tools add this header for you automatically, while a few do not. Kindly check that yours does, or add it yourself.
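For example, a sketch that computes the header explicitly with `requests` (the request-body fields here are illustrative, not the documented schema):

```python
import json

import requests

API_KEY = "your-api-key"
body = json.dumps({"email": "user@example.com"}).encode("utf-8")

headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    # Number of bytes in the encoded body, not the number of characters.
    "Content-Length": str(len(body)),
    "X-API-KEY": API_KEY,
}

# /validate-email is one of the documented endpoints (see the table below).
response = requests.post("https://api.segnivo.com/v1/validate-email", data=body, headers=headers)
print(response.status_code, response.json())
```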
This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:
- API version: 1.0.0
- Package version: 1.7.16
- Generator version: 7.10.0
- Build package: org.openapitools.codegen.languages.PythonClientCodegen
## Requirements
Python 3.8+
## Installation & Usage
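A minimal install from PyPI, assuming the distribution name `segnivo-python-sdk` shown in this package's metadata:

```sh
pip install segnivo-python-sdk
```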
## Getting Started
Please follow the [installation procedure](#installation--usage) and then run the following:
```python
import os
from pprint import pprint

import segnivo_sdk
from segnivo_sdk.rest import ApiException

# Defining the host is optional and defaults to https://api.segnivo.com/v1
# See configuration.py for a list of all supported configuration parameters.
configuration = segnivo_sdk.Configuration(
    host="https://api.segnivo.com/v1"
)

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below, use the example that
# satisfies your auth use case.

# Configure API key authorization: apiKeyAuth
configuration.api_key['apiKeyAuth'] = os.environ["API_KEY"]

# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# configuration.api_key_prefix['apiKeyAuth'] = 'Bearer'

# Enter a context with an instance of the API client
with segnivo_sdk.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = segnivo_sdk.EmailAddressVerificationApi(api_client)
    email_address_verification_request = segnivo_sdk.EmailAddressVerificationRequest()  # optional

    try:
        # Email Address Verification Validation
        api_response = api_instance.validate_email_post(
            email_address_verification_request=email_address_verification_request)
        print("The response of EmailAddressVerificationApi->validate_email_post:\n")
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling EmailAddressVerificationApi->validate_email_post: %s\n" % e)
```
## Documentation for API Endpoints
All URIs are relative to *https://api.segnivo.com/v1*
Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*EmailAddressVerificationApi* | [**validate_email_post**](docs/EmailAddressVerificationApi.md#validate_email_post) | **POST** /validate-email | Email Address Verification Validation
*EmailCampaignsApi* | [**messages_get**](docs/EmailCampaignsApi.md#messages_get) | **GET** /messages | Get campaigns
*EmailCampaignsApi* | [**messages_post**](docs/EmailCampaignsApi.md#messages_post) | **POST** /messages | Create a Campaign
*EmailCampaignsApi* | [**messages_uid_delete_post**](docs/EmailCampaignsApi.md#messages_uid_delete_post) | **POST** /messages/{uid}/delete | Delete a campaign
*EmailCampaignsApi* | [**messages_uid_get**](docs/EmailCampaignsApi.md#messages_uid_get) | **GET** /messages/{uid} | Get a campaign
*EmailCampaignsApi* | [**messages_uid_patch**](docs/EmailCampaignsApi.md#messages_uid_patch) | **PATCH** /messages/{uid} | Update Campaign
*EmailCampaignsApi* | [**messages_uid_pause_post**](docs/EmailCampaignsApi.md#messages_uid_pause_post) | **POST** /messages/{uid}/pause | Pause a campaign
*EmailCampaignsApi* | [**messages_uid_resume_post**](docs/EmailCampaignsApi.md#messages_uid_resume_post) | **POST** /messages/{uid}/resume | Resume the delivery of a campaign
*MailingListsApi* | [**lists_get**](docs/MailingListsApi.md#lists_get) | **GET** /lists | Get mailing lists
*MailingListsApi* | [**lists_post**](docs/MailingListsApi.md#lists_post) | **POST** /lists | Create a Mailing List
*MailingListsApi* | [**lists_uid_add_field_post**](docs/MailingListsApi.md#lists_uid_add_field_post) | **POST** /lists/{uid}/add-field | Add a field
*MailingListsApi* | [**lists_uid_delete_post**](docs/MailingListsApi.md#lists_uid_delete_post) | **POST** /lists/{uid}/delete | Delete a list
*MailingListsApi* | [**lists_uid_get**](docs/MailingListsApi.md#lists_uid_get) | **GET** /lists/{uid} | Get a list
*MailingListsApi* | [**lists_uid_patch**](docs/MailingListsApi.md#lists_uid_patch) | **PATCH** /lists/{uid} | Update a List
*RelayApi* | [**relay_emails_id_get**](docs/RelayApi.md#relay_emails_id_get) | **GET** /relay/emails/{id} | Fetch Emails
*RelayApi* | [**relay_raw_post**](docs/RelayApi.md#relay_raw_post) | **POST** /relay/raw | Send a Raw Email Message
*RelayTransactionalEmailsApi* | [**relay_send_post**](docs/RelayTransactionalEmailsApi.md#relay_send_post) | **POST** /relay/send | Send an Email
*SubscribersContactsApi* | [**contacts_get**](docs/SubscribersContactsApi.md#contacts_get) | **GET** /contacts | Get contacts
*SubscribersContactsApi* | [**contacts_post**](docs/SubscribersContactsApi.md#contacts_post) | **POST** /contacts | Add a Contact
*SubscribersContactsApi* | [**contacts_uid_add_tag_post**](docs/SubscribersContactsApi.md#contacts_uid_add_tag_post) | **POST** /contacts/{uid}/add-tag | Add tags to a contact
*SubscribersContactsApi* | [**contacts_uid_delete_post**](docs/SubscribersContactsApi.md#contacts_uid_delete_post) | **POST** /contacts/{uid}/delete | Delete a contact
*SubscribersContactsApi* | [**contacts_uid_get**](docs/SubscribersContactsApi.md#contacts_uid_get) | **GET** /contacts/{uid} | Get a contact
*SubscribersContactsApi* | [**contacts_uid_patch**](docs/SubscribersContactsApi.md#contacts_uid_patch) | **PATCH** /contacts/{uid} | Update Contact
*SubscribersContactsApi* | [**contacts_uid_subscribe_patch**](docs/SubscribersContactsApi.md#contacts_uid_subscribe_patch) | **PATCH** /contacts/{uid}/subscribe | Subscribe a contact
*SubscribersContactsApi* | [**contacts_uid_unsubscribe_patch**](docs/SubscribersContactsApi.md#contacts_uid_unsubscribe_patch) | **PATCH** /contacts/{uid}/unsubscribe | Unsubscribe a contact
## Documentation For Models
- [AddContactRequest](docs/AddContactRequest.md)
- [CampaignCreateRequest](docs/CampaignCreateRequest.md)
- [CampaignUpdateRequest](docs/CampaignUpdateRequest.md)
- [ContactUpdateRequest](docs/ContactUpdateRequest.md)
- [ContactsUidAddTagPostRequest](docs/ContactsUidAddTagPostRequest.md)
- [EmailAddressVerificationRequest](docs/EmailAddressVerificationRequest.md)
- [MailingListAddFieldRequest](docs/MailingListAddFieldRequest.md)
- [MailingListRequest](docs/MailingListRequest.md)
- [MailingListRequestContact](docs/MailingListRequestContact.md)
- [RelayEmailRequest](docs/RelayEmailRequest.md)
<a id="documentation-for-authorization"></a>
## Documentation For Authorization
Authentication schemes defined for the API:
<a id="apiKeyAuth"></a>
### apiKeyAuth
- **Type**: API key
- **API key parameter name**: X-API-KEY
- **Location**: HTTP header
## Author
| text/markdown | team@openapitools.org | null | null | [] | [] | null | [] | [] | [] | [
"urllib3<3.0.0,>=1.25.3",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.0 | 2025-02-22 18:46:16.603218 UTC | segnivo_python_sdk-1.7.16-py3-none-any.whl | 88119 | 05/0a/bf8d04db8c924c16a21422b83d337fca74ac3876c4513f903fdfc4039ffe/segnivo_python_sdk-1.7.16-py3-none-any.whl | py3 | bdist_wheel | false | 632dfa25ece61a63281d89a1a0cf8d0e | 9759cc78b6c3d376a4264c79b35f4d58978ad108ff7413d8da57e51007903b33 | 050abf8d04db8c924c16a21422b83d337fca74ac3876c4513f903fdfc4039ffe | [] | OpenAPI Generator community | https://github.com/segnivo/segnivo-sdk/tree/main/sdk-python | null |
2.2 | segnivo-python-sdk | 1.7.16 | Segnivo Developer API | (description identical to the segnivo-python-sdk wheel row above)
| text/markdown | team@openapitools.org | null | null | [] | [] | null | [] | [] | [] | [
"urllib3<3.0.0,>=1.25.3",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.0 | 2025-02-22 18:46:18.778163 UTC | segnivo_python_sdk-1.7.16.tar.gz | 48333 | e4/c9/b19ad444a92e575e0f21f3ace8374fd21424998a1a7986cb9ff647bfaf23/segnivo_python_sdk-1.7.16.tar.gz | source | sdist | false | 9d7fc9050002eb70680c77c3af0ba3bc | a9d12e22554a8fafae77628b45266ea18c3cd02de010b0afd4ea141f4e673c6f | e4c9b19ad444a92e575e0f21f3ace8374fd21424998a1a7986cb9ff647bfaf23 | [] | OpenAPI Generator community | https://github.com/segnivo/segnivo-sdk/tree/main/sdk-python | null |
2.3 | ghostos-moss | 0.1.4 | the code-driven python interface for llms, agents and project GhostOS | # MOSS Protocol
The frameworks of mainstream AI Agents currently use methods represented by `JSON Schema Function Call` to operate the
capabilities provided by the system.
An increasing number of frameworks are beginning to be driven by model-generated code, with OpenInterpreter as a
representative example.
The `GhostOS` project envisions that the main means of interaction between future AI Agents and external systems will be
based on protocol-based interactions, which include four aspects:
* `Code As Prompt`: The system directly reflects code into Prompts for large models through a series of rules, allowing
large models to call directly.
* `Code Interpreter`: The system executes code generated by large models directly in the environment to drive system
behavior.
* `Runtime Injection`: The system injects various instances generated at runtime into the context.
* `Context Manager`: The system manages the storage, use, and recycling of various variables in multi-turn
conversations.
This entire set of solutions is defined as the `MOSS` protocol in `GhostOS`, with the full name
being `Model-oriented Operating System Simulator`.
## MOSS
The MOSS implementation, [ghostos_moss](https://github.com/ghost-in-moss/GhostOS/tree/main/libs/moss/ghostos_moss),
is meant to be an independent package.
### Purpose
The design goal of `MOSS` is to enable human engineers to read a code context as easily as a Large Language Model does:
what you see is what you get.
We take `SpheroBoltGPT` (driven by code to control the toy SpheroBolt) as an example:
```python
from ghostos.prototypes.spherogpt.bolt import (
    RollFunc,
    Ball,
    Move,
    LedMatrix,
    Animation,
)
from ghostos_moss import Moss as Parent


class Moss(Parent):
    body: Ball
    """your sphero ball body"""

    face: LedMatrix
    """your 8*8 led matrix face"""
```
This piece of code defines a Python context for controlling Sphero Bolt.
Both Large language models and human engineers reading this code can see that the behavior of SpheroBolt can be driven
through `moss.body` or `moss.face`.
The referenced libraries such as `RollFunc`, `Ball`, and `Move` in the code are automatically reflected as Prompts and,
along with the source code, submitted to the LLM to generate control code.
This way, LLM can be requested to generate a function like:
```python
def run(moss: Moss):
    # spin the body 360 degrees in 1 second
    moss.body.new_move(True).spin(360, 1)
```
The `MossRuntime` will compile this function into the current module and then execute the `run` function within it.
### Abstract Classes
The core interfaces of `MOSS` are:
* [MossCompiler](https://github.com/ghost-in-moss/GhostOS/tree/main/ghostos/libs/moss/ghostos_moss/abcd.py): Compile any Python module to
generate a temporary module.
* [MossPrompter](https://github.com/ghost-in-moss/GhostOS/tree/main/ghostos/libs/moss/ghostos_moss/abcd.py): Reflect a Python module to
generate a prompt for the Large Language Model.
* [MossRuntime](https://github.com/ghost-in-moss/GhostOS/tree/main/ghostos/libs/moss/ghostos_moss/abcd.py): Execute the code generated by the
Large Language Model within the temporary compiled module, and get result.
### Get MossCompiler
`MossCompiler` is registered in the [IoC Container](/en/concepts/ioc_container.md). Get an instance of it as follows:
```python
from ghostos.bootstrap import get_container
from ghostos_moss import MossCompiler
compiler = get_container().force_fetch(MossCompiler)
```
### PyContext
`MossCompiler` uses [PyContext](https://github.com/ghost-in-moss/GhostOS/tree/main/ghostos/libs/moss/ghostos_moss/pycontext.py)
to manage a persistent context.
It can be used to store variables defined and modified at runtime; it can also manage direct modifications to Python
code for the next execution.
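As an illustrative sketch, a persistent context could be created and handed to a compiler like this (the `module` field is assumed from the attribute access `pycontext.module` shown later in the Runtime Execution section; the exact constructor may differ):

```python
from ghostos.bootstrap import get_container
from ghostos_moss import MossCompiler, PyContext

# `module` names the Python module the compiler should treat as its context.
# This constructor call is an assumption for illustration only.
pycontext = PyContext(module="my_app.agent_context")

compiler = get_container().force_fetch(MossCompiler)
compiler.join_context(pycontext)
```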
Each `MossCompiler` inherits an independent IoC Container, which can be used for dependency injection registration.
```python
from ghostos_moss import MossCompiler
from ghostos_container import Provider
compiler: MossCompiler = ...
class Foo:
    ...


f: Foo = ...
some_provider: Provider = ...

compiler.bind(Foo, f)  # bind the instance to compiler.container()
compiler.register(some_provider)  # register the provider into compiler.container()

attr_value = ...
compiler.with_locals(attr_name=attr_value)  # inject a local variable `attr_name` into the target python module
```
### Compile Runtime
Using MossCompiler, you can compile a temporary module based on PyContext or a Python module name.
```python
from ghostos.bootstrap import get_container
from ghostos_moss import MossCompiler, PyContext
pycontext_instance: PyContext = ...
compiler = get_container().force_fetch(MossCompiler)
# join python context to the compiler
compiler.join_context(pycontext_instance)
runtime = compiler.compile(None)
```
### Get Compiled Module
Get the compiled module:
```python
from types import ModuleType
from ghostos_moss import MossRuntime
runtime: MossRuntime = ...
module: ModuleType = runtime.module()
```
### Moss Prompter
With `MossRuntime` we can get a `MossPrompter`, which is useful for generating a Prompt for the LLM:
```python
from ghostos_moss import MossRuntime

runtime: MossRuntime = ...

with runtime:
    prompter = runtime.prompter()

    # get the full Prompt
    prompt = prompter.dump_module_prompt()

    # the prompt is composed of:
    # 1. the source code of the module
    code = prompter.get_source_code()  # get the module's source code
    # 2. the prompt of each imported attr
    for attr_name, attr_prompt in prompter.imported_attr_prompts():
        print(attr_name, attr_prompt)
    attr_prompt = prompter.dump_imported_prompt()
```
#### Hide Code from the LLM
Modules compiled by `MossCompiler` will provide all their source code to the Large Language Model. If you want to hide a
portion of the code, you can use the `# <moss-hide>` marker.
```python
# <moss-hide>
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from ghostos_moss import MossPrompter

# The code defined here will execute normally but will not be submitted to the LLM.
# It is typically used to define logic within the lifecycle of MossCompiler/Runtime operations.
# Shielding these internals helps the LLM focus.

def __moss_module_prompt__(prompter: "MossPrompter") -> str:
    ...

# </moss-hide>
```
#### Code Reflection
We utilize reflection mechanisms to automatically generate Prompts from code information and provide them to the Large
Language Model.
The basic idea is similar to how programmers consult reference libraries: let the LLM see only the minimal information
it cares about, mainly the definitions of classes and functions along with key variables, instead of providing all the
source code to the model directly.
#### Default Reflection Pattern
`MossRuntime` reflects variables imported into the current Python module and generates their Prompts according to
certain rules.
The current rules are as follows:
* Function & Method: Only reflect the function name + doc
* Abstract class: Reflect the source code
* pydantic.BaseModel: Reflect the source code
Additionally, any class that implements `ghostos.prompter.PromptAbleClass` will use its `__class_prompt__` method to
generate the reflection result.
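For instance, a minimal sketch of opting into custom reflection this way (whether `__class_prompt__` is a classmethod is assumed here, not confirmed by this document):

```python
from ghostos.prompter import PromptAbleClass


class BankAccount(PromptAbleClass):
    """A domain class whose reflected prompt is written by hand."""

    @classmethod
    def __class_prompt__(cls) -> str:
        # Expose only what the LLM needs: the public surface, not the full source.
        return (
            "class BankAccount:\n"
            "    def deposit(self, amount: float) -> None: ...\n"
            "    def balance(self) -> float: ..."
        )
```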
#### Custom Attr Prompt
If the target Python module file defines the magic method `__moss_attr_prompts__`, it will use the provided results to
override the automatically reflected results.
```python
def __moss_attr_prompts__() -> "AttrPrompts":
    yield "key", "prompt"
```
If a returned prompt is empty, it is not shown to the LLM.
### Runtime Execution
Based on `MossRuntime`, you can execute the code generated by the Large Language Model directly within a temporarily
compiled module. The benefits of doing this are:
1. The LLM does not need to import all libraries, saving the overhead of tokens.
2. Accelerate generation speed; in many cases this is expected to outperform JSON-schema output.
3. Avoid pollution of the context module by code generated by the Large Language Model.
4. Compared to executing code in Jupyter or a sandbox, temporarily compiling a module aims to achieve a "minimum context
unit."
The basic principle is to use the current module as the context to compile and execute the code generated by the Large
Language Model. The internal logic is as follows:
```python
import ghostos_moss
runtime: ghostos_moss.MossRuntime = ...
pycontext = runtime.dump_pycontext()
local_values = runtime.locals()
generated_code: str = ...
filename = pycontext.module if pycontext.module is not None else "<MOSS>"
compiled = compile(generated_code, filename=filename, mode='exec')
# execute the compiled code directly
exec(compiled, local_values)
```
We can request that the code generated by the Large Language Model be a main function. After MossRuntime compiles the
code, we can immediately execute this function.
```python
import ghostos_moss
runtime: ghostos_moss.MossRuntime = ...
# the generated code that contains a `main` function
generated_code: str = ...

with runtime:
    result = runtime.execute(target="main", code=generated_code, local_args=["foo", "bar"])

    # the std output produced during execution
    std_output = runtime.dump_std_output()
    # get the updated pycontext
    pycontext = runtime.dump_pycontext()
```
### Custom Lifecycle functions
`MossRuntime`, during its lifecycle, attempts to locate and execute magic methods within the compiled modules. All magic
methods are defined
in [ghostos_moss.lifecycle](https://github.com/ghost-in-moss/GhostOS/tree/main/ghostos/libs/moss/ghostos_moss/lifecycle.py). For details,
please refer to the file. The main methods include:
```python
__all__ = [
    '__moss_compile__',  # prepare the moss compiler, handle dependency registration
    '__moss_compiled__',  # called when the moss instance is compiled
    '__moss_attr_prompts__',  # generate custom local attr prompts
    '__moss_module_prompt__',  # define the module prompt
    '__moss_exec__',  # execute the generated code attached to the module
]
```
### The Moss Class
In the target module compiled by `MossCompiler`, you can define a class named `Moss` that inherits
from `ghostos_moss.Moss`. This allows it to receive key dependency injections during its lifecycle, achieving a
what-you-see-is-what-you-get (WYSIWYG) effect.
The `Moss` class serves two purposes:
1. Automated Dependency Injection: Abstract classes mounted on Moss will receive dependency injection from the IoC
container.
2. Managing Persistent Context: Data objects on the Moss class will be automatically stored in `PyContext`.
This class exists by default; even if you do not define it, an instance named `moss` will be generated in the
compiled temporary module. The `moss` instance can be passed to functions in code generated by the Large Language Model.
For example, regarding context:
```python
from abc import ABC

from ghostos_moss import Moss as Parent


class Foo(ABC):
    ...


class Moss(Parent):
    int_val: int = 0

    foo: Foo  # the abstract class bound to Moss will automatically get injection from MossRuntime.container()
```
The LLM-generated code is:
```python
# the main function generated by the LLM
def main(moss) -> int:
    moss.int_val = 123
    return moss.int_val
```
Executing this function will change the value of `Moss.int_val` to `123` for subsequent runs.
The purpose of this is to manage the context in a WYSIWYG manner. There are several default rules:
1. Variable Storage: All variables bound to the `Moss` instance, including those of type `pydantic.BaseModel`
and `int | str | float | bool`, will be automatically stored in `PyContext`.
2. Abstract Class Dependency Injection: Any class mounted on `Moss` will automatically attempt to inject instances using
the IoC Container.
3. Lifecycle Management: If a class implements `ghostos_moss.Injection`, its `on_injection` and `on_destroy`
methods will be automatically called when injected into the `moss` instance (see the sketch after this list).
4. Defining a `Moss` class will not pollute or disrupt the original functionality of the target file.
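Expanding on rule 3, here is a minimal sketch of a lifecycle-aware dependency (the `Injection` method signatures are assumed from the names above; `connect()` is a hypothetical helper):

```python
from ghostos_moss import Injection


class DatabaseSession(Injection):
    """A resource whose setup and teardown follow the moss instance lifecycle."""

    def on_injection(self, injected_to) -> "DatabaseSession":
        # Called when this object is injected into the moss instance.
        self.connection = connect()  # hypothetical helper that opens a connection
        return self

    def on_destroy(self) -> None:
        # Called when the runtime is closed; release resources here.
        self.connection.close()
```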
You can also use `MossRuntime` to obtain all the injection results for the `Moss` class.
```python
from ghostos_moss import Moss, MossRuntime
runtime: MossRuntime = ...
moss_class = runtime.moss_type()
assert issubclass(moss_class, Moss)
moss_instance = runtime.moss()
assert isinstance(moss_instance, moss_class)
injections = runtime.moss_injections()
```
## MOSS TestSuite
All source files that can be compiled by `MossCompiler` are also referred to as `MOSS files`.
In these files, the functions, variables, and classes defined can be unit tested, but runtime dependency injection
requires the construction of a test suite.
`GhostOS` provides a default suite, `ghostos_moss.testsuite.MossTestSuite`. For more details, please refer to
the code.
| text/markdown | thirdgerb@gmail.com | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | >=3.10 | [] | [] | [] | [
"ghostos-common<0.2.0,>=0.1.0",
"ghostos-container<0.2.0,>=0.1.2"
] | [] | [] | [] | [] | poetry/2.0.1 CPython/3.10.16 Darwin/23.6.0 | 2025-02-22 18:46:45.049037 UTC | ghostos_moss-0.1.4-py3-none-any.whl | 36256 | 12/55/9d61844de0895b9020d3b0abf3044c430fa75d7f64512be0b5dc5388e65c/ghostos_moss-0.1.4-py3-none-any.whl | py3 | bdist_wheel | false | 4025216b34342bf0832d08bf0f017582 | 65de859125d1c79b0ef1b3fdc72f197f42e0dfce048227cfb883e3327e6d9299 | 12559d61844de0895b9020d3b0abf3044c430fa75d7f64512be0b5dc5388e65c | [] | thirdgerb | null | null |
End of preview.