
Conversation

@melanieclarke commented May 28, 2025

Resolves JP-3923

Requires spacetelescope/stcal#371

Add filters for runtime warnings that appear in JWST regression tests. The default behavior for the numerical errors encountered is expected, so the generated warnings are neither necessary nor useful to the user.
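For illustration, the kind of targeted filter added here wraps only the operation that is expected to trip the warning; the function name and filter messages below are placeholders, not actual code from this PR or its stcal companion:

```python
import warnings


def safe_ratio(num, den):
    """Divide two arrays whose zeros and NaNs are expected and benign.

    Placeholder sketch only; the real filters live in the pipeline code
    touched by this PR and the companion stcal change.
    """
    with warnings.catch_warnings():
        # The inputs may legitimately contain zeros and NaNs, so numpy's
        # "divide by zero" and "invalid value" RuntimeWarnings carry no
        # useful information for the user here.
        warnings.filterwarnings("ignore", "divide by zero", RuntimeWarning)
        warnings.filterwarnings("ignore", "invalid value", RuntimeWarning)
        return num / den
```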

Tasks

  • If you have a specific reviewer in mind, tag them.
  • add a build milestone, i.e. Build 12.0 (use the latest build if not sure)
  • Does this PR change user-facing code / API? (if not, label with no-changelog-entry-needed)
    • write news fragment(s) in changes/: echo "changed something" > changes/<PR#>.<changetype>.rst (see changelog readme for instructions)
    • update or add relevant tests
    • update relevant docstrings and / or docs/ page
    • start a regression test and include a link to the running job (click here for instructions)
      • Do truth files need to be updated ("okified")?
        • after the reviewer has approved these changes, run okify_regtests to update the truth files
  • if a JIRA ticket exists, make sure it is resolved properly

@emolter left a comment


Maybe a controversial question, but are there any cases anywhere in the pipeline where the numpy RuntimeWarnings for "divide by zero", "invalid value encountered in divide", and similar are useful? Can we just globally ignore them?
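For reference, a truly global ignore would be mechanically simple; a sketch of what it could look like (not something this PR does):

```python
import warnings

import numpy as np

# Option 1: tell numpy not to emit floating-point RuntimeWarnings at all.
np.seterr(divide="ignore", invalid="ignore", over="ignore")

# Option 2: leave numpy's error state alone and drop the warnings as they
# pass through the warnings machinery.
warnings.filterwarnings("ignore", message="divide by zero encountered", category=RuntimeWarning)
warnings.filterwarnings("ignore", message="invalid value encountered", category=RuntimeWarning)
```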

@pllim commented May 29, 2025

Thanks for cleaning up after me!

You can also revisit the unit tests I touched in #9381, especially where I inserted a blind ignore with filterwarnings("ignore", ...), and see if those can be removed now as well.

More importantly, are we sure they are always okay to ignore? Or should we instead clean up the input data or throw exceptions?

If we keep the ignore but a user wants to see these warnings sometimes, we could also consider re-routing the warnings to the log file instead, since pytest's warning catching does not care about log.warning unless you use caplog and do something special with it, which I don't think we do here.
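For reference, the standard library already supports that kind of re-routing; a minimal sketch (not code from this PR), assuming we were willing to capture all Python warnings:

```python
import logging
import warnings

# Route warnings.warn(...) output, including numpy RuntimeWarnings, to the
# logging system instead of stderr; records land on the 'py.warnings' logger.
logging.captureWarnings(True)
log = logging.getLogger("py.warnings")
log.addHandler(logging.FileHandler("warnings.log"))  # hypothetical log file
log.propagate = False  # keep the messages out of the console handlers

warnings.warn("invalid value encountered in divide", RuntimeWarning)  # now logged, not printed
```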

xref #1397

@melanieclarke

> You can also revisit the unit tests I touched in #9381, especially where I inserted a blind ignore with filterwarnings("ignore", ...), and see if those can be removed now as well.

Thanks for pointing this out -- I removed a couple filters from unit tests and added a couple more to the code instead.

> More importantly, are we sure they are always okay to ignore? Or should we instead clean up the input data or throw exceptions?

The ones I looked at are all related to numerical handling for arrays that may contain both valid and invalid data points. The default behavior in these cases is the expected behavior and there's no need to warn for it: NaN should be returned when there are no good values to average, division by zero should return NaN, etc. We fully expect that there will be some NaN values in our data!
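A tiny illustration of that point, with made-up arrays standing in for pipeline data: the results are exactly what we want, even though numpy would normally warn at each step.

```python
import warnings

import numpy as np

signal = np.array([np.nan, np.nan])   # a pixel stack with no good values left
num = np.array([0.0, 1.0])
den = np.array([0.0, 0.0])            # e.g. zero weights or zero exposure time

with warnings.catch_warnings():
    # All three messages below are RuntimeWarnings, so one filter covers them.
    warnings.simplefilter("ignore", RuntimeWarning)
    print(np.nanmean(signal))   # nan       ("Mean of empty slice")
    print(num / den)            # [nan inf] (0/0 -> nan, 1/0 -> inf)
```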

The one exception in the warnings addressed here, in my opinion, is that NIRSpec values are sometimes prone to floating point overflow, due to the combination of the precision we use for storing data and the calibration scale factor used in the flat reference files. But still, the numpy behavior is correct within the constraints -- if the value is too small to propagate, it sets it to zero; if it's too large, it sets it to infinity -- and either way, there's no need to warn the user that a few values in a large array did not propagate, every time we run the NIRSpec pipeline.
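For concreteness, the float32 behavior described above looks like this (the values are invented, not taken from any reference file):

```python
import numpy as np

big = np.array([3.0e38], dtype=np.float32)    # near the float32 maximum (~3.4e38)
tiny = np.array([1.4e-45], dtype=np.float32)  # smallest positive subnormal float32

with np.errstate(over="ignore"):
    print(big * 10.0)    # [inf] : too large to represent; numpy warns about this by default
print(tiny / 10.0)       # [0.]  : too small to represent; underflow is silent by default
```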

> If we keep the ignore but a user wants to see these warnings sometimes, we could also consider re-routing the warnings to the log file instead, since pytest's warning catching does not care about log.warning unless you use caplog and do something special with it, which I don't think we do here.

I don't think there's any more value in these messages as log warnings than as runtime warnings. I can see David's point that they can be useful in development, so we probably should not globally ignore them, but I don't see a good reason to show unavoidable benign warnings to users.

@pllim commented May 29, 2025

There is an uncaught warning now in jwst/dark_current/tests/test_dark_sub.py, which is failing the CI.

Once that is dealt with, technically this LGTM, but I'll leave the actual approval to algorithm experts. Thanks!

@melanieclarke

> There is an uncaught warning now in jwst/dark_current/tests/test_dark_sub.py, which is failing the CI.

That's because I moved the warning filter to the companion PR in stcal (spacetelescope/stcal#371), which is already required to pass regression tests. I can modify the pyproject.toml temporarily to point to my stcal branch so CI will pass.

@melanieclarke force-pushed the fix_warnings branch 4 times, most recently from aeb4f1f to 50ba524 on June 3, 2025 18:35

@tapastro left a comment


Looks good to me - honestly surprised at how few catches were required to clean up the repo!

@melanieclarke

Thanks for reviewing! The stcal PR will need to be approved and merged before this one can go in -- I'll modify the pyproject.toml now to point to stcal main.

@melanieclarke enabled auto-merge June 4, 2025 15:22
@melanieclarke merged commit 3f2cb5f into spacetelescope:main Jun 4, 2025
28 of 34 checks passed
@melanieclarke deleted the fix_warnings branch June 4, 2025 15:49