
Please consider adding more datasets about logical reasoning to Humanity's Last Exam #15


Description

@14H034160212

Hi,

This is Qiming Bao, and I am a PhD graduate from the Strong AI Lab, NAOInstitute, University of Auckland. I am very interested in Humanity's Last Exam, and one of the PhD students from our lab, Gaël Gendron, has contributed to this paper. We also have further benchmark papers and datasets showing that frontier LLMs are not good at logical reasoning tasks. Please consider adding them to a future version of the paper, or reopening the question submission site. Thanks a lot!

Out-of-Distribution Logical Reasoning Evaluation and Prompt Augmentation for Enhancing OOD Logical Reasoning

We present a systematic out-of-distribution evaluation on logical reasoning tasks. We introduce three new, more robust logical reasoning datasets, ReClor-Plus, LogiQA-Plus and LogiQAv2-Plus, constructed from ReClor, LogiQA and LogiQAv2 by changing the order and form of the answer options (a minimal sketch of this construction is included after the citations below). We found that chain-of-thought prompting alone does not improve model performance in the out-of-distribution setting, while augmenting prompts with our AMR-based logic-driven data augmentation does improve large language models' performance on out-of-distribution logical reasoning tasks. The three datasets have been collected by OpenAI/Evals.

[LLM@IJCAI 2023] "A Systematic Evaluation of Large Language Models on Out-of-Distribution Logical Reasoning Tasks" [Paper link]

The full version, titled "Assessing and Enhancing the Robustness of Large Language Models with Task Structure Variations for Logical Reasoning", has been accepted at ICONIP 2024. [Paper link] [Source code] [Dataset links].
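To make the construction above concrete, here is a minimal sketch of how one might shuffle the answer options of a multiple-choice item to build an out-of-distribution variant in the spirit of ReClor-Plus / LogiQA-Plus. The example item and field names are hypothetical, and the released construction scripts may differ in detail.

```python
import random

def make_ood_variant(item, seed=0):
    """Shuffle the answer options of a multiple-choice item and re-map the
    gold label to the shuffled position (option-order perturbation)."""
    rng = random.Random(seed)
    options = list(item["options"])
    gold_text = options[item["label"]]

    perm = list(range(len(options)))
    rng.shuffle(perm)
    shuffled = [options[i] for i in perm]

    return {
        "context": item["context"],
        "question": item["question"],
        "options": shuffled,
        "label": shuffled.index(gold_text),  # gold answer tracked by its text
    }

# Hypothetical example item (not taken from the real datasets).
example = {
    "context": "All managers attended the meeting. Lee did not attend the meeting.",
    "question": "Which conclusion follows logically?",
    "options": ["Lee is a manager.", "Lee is not a manager.",
                "Some managers were absent.", "The meeting was cancelled."],
    "label": 1,
}

print(make_ood_variant(example, seed=42))
```

A model that relies on positional shortcuts rather than the underlying logic will see its accuracy drop on such perturbed variants, which is the effect the -Plus datasets are designed to measure.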

Abstract Reasoning Evaluation Benchmark

[AGI@ICLR 2024] Large language models are not strong abstract reasoners [Paper link]

The full version, "Large Language Models Are Not Abstract Reasoners", has been accepted at IJCAI 2024. [Paper link] [Source code and evaluation platform]

An Empirical Study on Out-of-Distribution Multi-Step Logical Reasoning

We find that pre-trained language models are not robust on multi-step logical reasoning tasks, and one of the main reasons is the limited amount of training data for deeper multi-step reasoning. Therefore, we present a larger and deeper multi-step logical reasoning dataset named PARARULE-Plus (an illustrative item sketch follows the citation below). The dataset has also been collected by OpenAI/Evals.
[IJCLR-NeSy 2022] "Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation" [Paper link] [Source code] [Dataset links].
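
For illustration, here is a toy item in the closed-world fact/rule style that PARARULE-Plus builds on, together with a naive forward-chaining check of the multi-step entailment. The field layout and example content are hypothetical and do not reflect the dataset's actual schema.

```python
def forward_chain(facts, rules):
    """Naive forward chaining: repeatedly apply rules until no new
    attribute can be derived (single-entity, closed-world toy setting)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical depth-2 item (illustrative only):
facts = {"kind", "quiet"}                       # Anne is kind. Anne is quiet.
rules = [({"kind", "quiet"}, "smart"),          # If someone is kind and quiet, they are smart.
         ({"smart"}, "wealthy")]                # If someone is smart, they are wealthy.
question = "wealthy"                            # Is Anne wealthy?

answer = question in forward_chain(facts, rules)
print(answer)  # True, reached after two rule applications (reasoning depth 2)
```

Items requiring deeper chains of rule applications are exactly where we observe pre-trained language models failing to generalise.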
