
Conversation

Luodian (Contributor) commented Jun 18, 2025

Before you open a pull request, please check whether a similar issue already exists or has been closed before.

When you open a pull request, please be sure to include the following:

  • A descriptive title: [xxx] XXXX
  • A detailed description

If you encounter lint warnings, you can use the following commands to reformat the code.

pip install pre-commit
pre-commit install
pre-commit run --all-files
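
These commands read the repository's .pre-commit-config.yaml. As a hypothetical illustration of what such a file can contain (this repository's actual hook list may differ):

# hypothetical .pre-commit-config.yaml; check the repository for the real one
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/isort
    rev: 5.13.2
    hooks:
      - id: isort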

Thank you for your contributions!

coderabbitai bot (Contributor) commented Jun 18, 2025

Important

Review skipped

More than 25% of the files were skipped due to the max files limit, so the review is being skipped to prevent a low-quality review.

109 of the 279 files are above the max files limit of 100. Please upgrade to the Pro plan for higher limits.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

✨ Finishing Touches
  • 📝 Generate Docstrings
  • 🧪 Generate unit tests
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch dev/v0d4

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has the YAML language server enabled, you can add the following line at the top of the file to enable auto-completion and validation (a minimal example follows this list): # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
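
As a minimal sketch, this combines that schema line with the reviews.review_status setting mentioned in the review notice above (all other keys omitted; the dotted path is assumed to map to nested YAML):

# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
reviews:
  review_status: false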

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

…at mode (#692)

* Update deps

* Restructured

* Delete models

* Remove deprecated models

* Set up auto doc to messages and chat models

* Lint

* Allow force simple mode

* Add auto doc to messages for audio and video

Fix lint

Init server structure

Restructure to server folder

Clean base and providers

Add clean method for models

Fix loggers save result

Fix dummy server error

Suppress llava warnings

Sample evaluator on llava in the wild

Update mmmu doc to messages

Update version
kcz358 force-pushed the dev/v0d4 branch 2 times, most recently from e591de0 to abc7ad6 on June 19, 2025 at 07:02
… protocols

Add AsyncAzureOpenAIProvider implementation and update provider factory

Refactor sample saving in EvaluationTracker to use cleaned data and improve logging

Add llm_as_judge_eval metric to multiple tasks and integrate llm_judge API for evaluation
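
As a hypothetical illustration (the PR's actual task configs are not shown in this thread), a metric entry of this kind in a task YAML might look like:

metric_list:
  - metric: llm_as_judge_eval
    aggregation: mean
    higher_is_better: true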
Luodian and others added 16 commits June 19, 2025 07:08
…generation and evaluation, enhancing API integration and error handling. Update MMBench_Evaluator to streamline API key handling based on environment variables.
…d clarity and efficiency. Update MathVerseEvaluator to streamline answer scoring by eliminating unnecessary extraction steps and enhance evaluation prompts. Remove deprecated metrics from configuration files.
…d response generation and evaluation. Streamline API configuration and error handling by removing direct API key management and utilizing a custom server configuration for requests.
… 'llm_as_judge_eval' across multiple YAML files and adjust the result processing function accordingly. This change aligns with the integration of the llm_judge server for enhanced evaluation metrics.
… evaluation. Introduce 'olympiadbench_OE_MM_maths_en_COMP.yaml' and 'olympiadbench_OE_MM_physics_en_COMP.yaml' files, while removing outdated English and Chinese test configurations. Update evaluation metrics to utilize 'llm_as_judge_eval' for consistency across tasks.
…odel. Introduced `parse_reasoning_model_answer` to clean model responses and updated answer processing in the Qwen2_5_VL class to use this new function, improving response clarity and logging (a sketch of this pattern follows the commit list below).
…m 'answer' to 'final_answer' for improved clarity in response generation.
…pdated the return format to include question, response, and ground truth for improved evaluation context. Simplified judge result determination logic.
…get' from 'answer' to 'final_answer' for improved clarity in response generation.
…get' from 'answer' to 'final_answer' for consistency with recent configuration updates and improved clarity in response generation.
* add mmvu task

* fix linting videomathqa

* fix mmvu to use llm judge

* add visualwebbench task
… from 'lm_eval' to 'lmms_eval'. Update task configurations and evaluation metrics accordingly for consistency across the project.
…with 'max_new_tokens' for consistency across YAML files and documentation. This change aligns with recent updates in the generation parameters for improved clarity in model behavior.
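
The parse_reasoning_model_answer helper mentioned above is not shown in this thread. A minimal Python sketch of the general pattern, assuming the reasoning model wraps its chain of thought in <think>...</think> tags:

import re

def parse_reasoning_model_answer(response: str) -> str:
    # Hypothetical sketch: drop the <think>...</think> reasoning trace
    # (non-greedy, across newlines) and return the remaining answer text.
    cleaned = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL)
    return cleaned.strip()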
Luodian (Contributor, Author) commented Jun 26, 2025

@kcz358 I think we should write markdown documentation about the change in doc_to_message.

kcz358 and others added 5 commits June 26, 2025 19:52
Luodian added 4 commits July 26, 2025 18:08
… practices, and error resolution strategies for the codebase.
…ipping whitespace from the score string before processing.
…ty files to enhance response handling by replacing Request object usage with direct server method calls for text generation across multiple evaluation tasks.
…ng direct server method calls with Request object usage. Update server configuration in multiple utility files to enhance response handling and streamline evaluation processes.

…reamline configuration. Remove redundant default settings and ensure proper handling of sampling parameters based on the do_sample flag. Update multiple YAML task files to increase max_new_tokens and comment out temperature settings for clarity. Introduce new YAML configuration for MMMU validation reasoning task.
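
Gating sampling parameters on the do_sample flag is a common pattern; a minimal sketch (names hypothetical, not the PR's actual code):

def build_generation_kwargs(do_sample: bool, max_new_tokens: int = 4096, temperature: float = 0.7, top_p: float = 0.95) -> dict:
    # Sampling parameters are only meaningful when do_sample is True;
    # passing them alongside greedy decoding triggers warnings in HF generate().
    kwargs = {"max_new_tokens": max_new_tokens, "do_sample": do_sample}
    if do_sample:
        kwargs["temperature"] = temperature
        kwargs["top_p"] = top_p
    return kwargs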

…handling and validation. Implement robust regex patterns for score extraction, ensuring all components are accounted for and scores are clamped within valid ranges. Add logging for better traceability of errors and fallback mechanisms for invalid inputs in the mia_bench evaluation process.
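
The exact patterns are not shown here; a minimal sketch of the general technique (extract the first number from a judge response, clamp it to an assumed 0-10 range, and fall back conservatively on invalid input):

import re

def extract_score(judge_output: str, lo: float = 0.0, hi: float = 10.0) -> float:
    # Find the first integer or decimal number in the judge's reply.
    match = re.search(r"-?\d+(?:\.\d+)?", judge_output)
    if match is None:
        return lo  # fallback for unparseable input
    return max(lo, min(hi, float(match.group())))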

kcz358 and others added 5 commits July 28, 2025 03:15
…ion logic. Remove redundant checks for tensor parallelism and streamline generation parameter settings by eliminating unused temperature and top_p configurations.
…ontent. Remove unused distributed executor backend parameter for cleaner execution logic.

kcz358 merged commit b7b4b1d into main on Jul 30, 2025. 3 checks passed.