Welcome to segmentation-cityscape! This application allows you to perform semantic segmentation using PyTorch DeepLabV3/V3+ on Cityscapes images. You can train models, evaluate their performance, and export results ready for submission.
To get started, visit the Releases page to download the software.
Before you begin, ensure your system meets the following requirements:
Visit the Releases page and click on the latest release version. You will see several download options; choose the one that matches your operating system.
Create and activate a virtual environment, then install the dependencies:

```
python -m venv segmentation-env

# Windows
segmentation-env\Scripts\activate

# macOS/Linux
source segmentation-env/bin/activate

pip install torch torchvision albumentations
```
Run the application:

```
python run_segmentation.py
```
To train a model, use the following command in your terminal:
```
python train_model.py --config config.yaml
```
Adjust the config.yaml file according to your dataset paths and parameters.
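The exact schema of `config.yaml` depends on the repository, but a typical configuration for this kind of pipeline might look like the following. The field names here are illustrative assumptions, not the project's actual keys; check the sample config shipped with the repository.

```yaml
# Illustrative config.yaml sketch -- field names are assumptions.
dataset:
  root: /path/to/cityscapes   # directory containing leftImg8bit/ and gtFine/
  num_classes: 19             # the standard Cityscapes training classes

model:
  architecture: deeplabv3plus # or deeplabv3
  backbone: resnet101

train:
  epochs: 100
  batch_size: 8
  learning_rate: 0.01
```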
After training, evaluate the model with:
```
python evaluate_model.py --model_path path/to/your/model.pth
```
This command will give you mIoU scores and other performance metrics.
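mIoU (mean Intersection-over-Union) averages, over all classes, the ratio of pixels where prediction and ground truth agree to the union of pixels either assigns to that class. A minimal NumPy sketch of the metric follows; the function name and the `ignore_index` convention are assumptions for illustration, not the project's actual code.

```python
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=255):
    """Compute mean Intersection-over-Union over all classes.

    `pred` and `target` are integer label maps of the same shape;
    pixels labeled `ignore_index` in `target` are excluded.
    """
    valid = target != ignore_index
    pred, target = pred[valid], target[valid]
    ious = []
    for cls in range(num_classes):
        pred_c = pred == cls
        target_c = target == cls
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both maps: skip, don't count as 0
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: 2 classes, one mislabeled pixel.
pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
score = mean_iou(pred, target, num_classes=2)
```

Here class 0 has IoU 1/2 and class 1 has IoU 2/3, so the mean is 7/12.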
To create segmentation overlays, run:
```
python generate_overlays.py --image_path path/to/image.jpg
```
This will display the segmented version of the input image.
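Under the hood, an overlay is typically produced by mapping each class ID to a palette color and alpha-blending the colored mask onto the input image. Here is a minimal NumPy sketch; the function and the three-class palette subset are illustrative (the colors shown are the standard Cityscapes ones, but the project's script may differ).

```python
import numpy as np

# Illustrative subset of the standard Cityscapes palette (train IDs).
PALETTE = {
    0: (128, 64, 128),   # road
    1: (244, 35, 232),   # sidewalk
    2: (70, 70, 70),     # building
}

def overlay_mask(image, mask, palette, alpha=0.5):
    """Alpha-blend a color-coded label mask onto an RGB image.

    `image`: uint8 array (H, W, 3); `mask`: int array (H, W) of class IDs.
    """
    color = np.zeros_like(image)
    for cls, rgb in palette.items():
        color[mask == cls] = rgb          # paint each class its palette color
    blended = (1 - alpha) * image + alpha * color
    return blended.astype(np.uint8)

# Tiny 2x2 example: gray image, three classes in the mask.
img = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[0, 0], [1, 2]])
out = overlay_mask(img, mask, PALETTE)
```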
To export label IDs, use:
```
python export_labels.py --output_path path/to/output.json
```
This command creates a file suitable for submission.
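A submission-oriented export generally maps class names to the official Cityscapes label IDs. The sketch below shows what such a JSON export might contain; the schema and helper function are assumptions for illustration (the IDs listed are the standard Cityscapes ones, but `export_labels.py` may emit a different format).

```python
import json

# Illustrative subset of the official Cityscapes label IDs.
CITYSCAPES_IDS = {
    "road": 7,
    "sidewalk": 8,
    "building": 11,
    "vegetation": 21,
    "sky": 23,
    "person": 24,
    "car": 26,
}

def export_labels(output_path, labels):
    """Write the class-name -> label-ID mapping as pretty-printed JSON."""
    with open(output_path, "w") as f:
        json.dump(labels, f, indent=2, sort_keys=True)

export_labels("labels.json", CITYSCAPES_IDS)
```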
For in-depth instructions on each feature, consult the project's Wiki. You'll find examples, tips, and detailed explanations.
If you run into issues:
We welcome contributions! If you have suggestions, feel free to open an issue or submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.
Join our community on GitHub Discussions or participate in our forums to share experiences and ask for help.
For any specific issues or questions, feel free to reach out in the Issues section on GitHub. Your feedback can help make segmentation-cityscape better for everyone.