Conversation

@DreamMr DreamMr commented Feb 7, 2025

The HR-Bench benchmark was introduced to assess the perception capabilities of MLLMs when working with high-resolution images, specifically those in 4K and 8K resolution. The paper is here: link.

Collaborator

@kcz358 kcz358 left a comment


Hi @DreamMr , the changes look good to me. Do you mind adding a screenshot of the evaluation result for further reference?

You are also encouraged to add a README.md in your task folder and update the current task list in the docs so others can learn about your task. Thank you!

@DreamMr
Author

DreamMr commented Feb 7, 2025


Here are the evaluation results for LLaVA-v1.5-7B; they are consistent with our previous results, with only minor variance.
[image: screenshot of evaluation results]

@Luodian Luodian merged commit dd4992f into EvolvingLMMs-Lab:main Feb 7, 2025
1 check passed
MichalCiesiolka pushed a commit to MichalCiesiolka/lmms-eval-llmzszl that referenced this pull request Apr 3, 2025
* add hrbench

* add hrbench

* add hrbench

* clean code

* add hrbench

---------

Co-authored-by: wenbin wang <>
dadwadw233 pushed a commit to dadwadw233/lmms-eval that referenced this pull request Apr 28, 2025
* add hrbench

* add hrbench

* add hrbench

* clean code

* add hrbench

---------

Co-authored-by: wenbin wang <>