Commit 0e1b0ed
Revert #579 (#581)
After Project-MONAI/MONAI#7647, revert #579.

### Status
**Ready/Work in progress/Hold**

### Please ensure all the checkboxes:
- [x] Codeformat tests passed locally by running `./runtests.sh --codeformat`.
- [ ] In-line docstrings updated.
- [ ] Update `version` and `changelog` in `metadata.json` if changing an existing bundle.
- [ ] Please ensure the naming rules in config files meet our requirements (please refer to: `CONTRIBUTING.md`).
- [ ] Ensure versions of packages such as `monai`, `pytorch` and `numpy` are correct in `metadata.json`.
- [ ] Descriptions should be consistent with the content, such as `eval_metrics` of the provided weights and TorchScript modules.
- [ ] Files larger than 25MB are excluded and replaced by providing download links in `large_file.yml`.
- [ ] Avoid using paths that contain personal information within config files (such as use `/home/your_name/` for `"bundle_root"`).

Signed-off-by: YunLiu <[email protected]>
1 parent 04eef67 commit 0e1b0ed

File tree

2 files changed: +2 −3 lines changed
models/lung_nodule_ct_detection/configs/metadata.json

Lines changed: 2 additions & 1 deletion
```diff
@@ -1,7 +1,8 @@
 {
     "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
-    "version": "0.6.4",
+    "version": "0.6.5",
     "changelog": {
+        "0.6.5": "remove notes for trt_export in readme",
         "0.6.4": "add notes for trt_export in readme",
         "0.6.3": "add load_pretrain flag for infer",
         "0.6.2": "add checkpoint loader for infer",
```

models/lung_nodule_ct_detection/docs/README.md

Lines changed: 0 additions & 2 deletions
````diff
@@ -130,8 +130,6 @@ It is possible that your inference dataset should set `"affine_lps_to_ras": false`
 python -m monai.bundle trt_export --net_id network_def --filepath models/model_trt.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json --precision <fp32/fp16> --input_shape "[1, 1, 512, 512, 192]" --use_onnx "True" --use_trace "True" --onnx_output_names "['output_0', 'output_1', 'output_2', 'output_3', 'output_4', 'output_5']" --network_def#use_list_output "True"
 ```
 
-Note that if you're using a container based on [PyTorch 24.03](nvcr.io/nvidia/pytorch:24.03-py3), and the size of your input exceeds (432, 432, 152), the TensorRT export might fail. In such cases, it would be necessary for users to manually adjust the input_shape downwards. Keep in mind that minimizing the input_shape could potentially impact performance. Hence, always reassess the model's performance after making such adjustments to validate if it continues to meet your requirements.
-
 #### Execute inference with the TensorRT model
 
 ```
````
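The README note removed by this diff advised shrinking the export `input_shape` whenever any spatial dimension exceeded (432, 432, 152) under the PyTorch 24.03 container. For readers still on affected containers, that threshold check can be sketched as a small standalone helper; the function name and logic below are illustrative only, not part of the bundle or of MONAI:

```python
# Hypothetical helper expressing the size threshold from the removed README note.
# Under the nvcr.io/nvidia/pytorch:24.03-py3 container, TensorRT export could
# fail when any spatial input dimension exceeded (432, 432, 152).
TRT_EXPORT_MAX_SHAPE = (432, 432, 152)  # (H, W, D) threshold quoted in the note

def exceeds_trt_limit(spatial_shape, limit=TRT_EXPORT_MAX_SHAPE):
    """Return True if any spatial dimension is larger than the documented limit."""
    return any(dim > cap for dim, cap in zip(spatial_shape, limit))

# The bundle's default export shape (512, 512, 192) exceeds the threshold,
# which is why the removed note suggested adjusting --input_shape downwards.
print(exceeds_trt_limit((512, 512, 192)))  # True
print(exceeds_trt_limit((432, 432, 152)))  # False
```

The removed note also cautioned that reducing the input shape can affect detection performance, so any adjusted shape should be re-validated against your own requirements.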
