Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples

¹University of California, San Diego  ²Allen Institute for AI

Abstract

The ability to generate diverse solutions to a given problem is a hallmark of human creativity. This divergent reasoning is also crucial for machines, enhancing their robustness and enabling them to assist humans in many applications, such as scientific discovery. However, existing approaches to multi-step reasoning with large language models (LLMs) have mostly focused on reasoning accuracy, without further seeking diverse valid solutions. For example, supervised fine-tuning improves reasoning quality but requires vast labeled data, and reward-maximizing reinforcement learning finds top-reward solutions while neglecting solution diversity. To fill this gap, we propose Flow of Reasoning (FoR), an efficient diversity-seeking LLM finetuning method aimed at improving reasoning quality and diversity with minimal data. FoR formulates multi-step LLM reasoning as a Markovian flow on a DAG-structured reasoning graph. This formulation allows us to incorporate and adapt principled GFlowNet approaches for finetuning LLMs to sample divergent paths with probabilities proportional to the (unnormalized) reward of target problems. Extensive experiments show that, with limited training examples (e.g., 15 examples), FoR enables the discovery of diverse, creative, high-quality solutions, greatly outperforming a wide range of existing inference and training methods across six challenging reasoning tasks, including BlocksWorld (embodied reasoning), Game24 (math puzzle solving), Rubik's Cube (spatial reasoning), 1D-ARC (abstraction reasoning), GSM8k (math reasoning), and ProntoQA (logical reasoning).
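To make the sampling objective precise, here is a sketch in standard GFlowNet notation (ours, not necessarily the paper's): the finetuned LLM policy $P_F$ should generate a complete reasoning trajectory $\tau = (s_0 \rightarrow s_1 \rightarrow \cdots \rightarrow s_T)$ with probability proportional to its unnormalized reward,

$$
P_\theta(\tau) \;\propto\; R(\tau),
\qquad\text{enforced via trajectory balance:}\quad
Z \prod_{t=0}^{T-1} P_F(s_{t+1} \mid s_t; \theta) \;=\; R(\tau),
$$

where $Z$ is a learned estimate of the partition function (the backward-policy term of the general trajectory balance condition is omitted here for brevity).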

FoR Framework

The following diagram shows an overview of the FoR framework.

As illustrated in the above diagram, our FoR framework includes three main steps:

  1. Trajectory exploration: Combine on-policy and off-policy strategies to explore the trajectory space and collect effective training samples.
  2. Policy training: Train the LLM policy on the collected trajectories with the trajectory balance (TB) loss objective (see the sketch after this list).
  3. Solution sampling: Use the trained LLM policy to generate diverse reasoning paths.
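
As a concrete illustration of step 2, below is a minimal PyTorch sketch of the trajectory balance (TB) loss for a single reasoning trajectory. It is not the authors' implementation: the function name, the way per-step log-probabilities are passed in, and the learnable `log_z` scalar are illustrative assumptions; in FoR the per-step log-probabilities would come from the LLM policy scoring each reasoning step.

```python
import torch

def trajectory_balance_loss(step_log_probs: torch.Tensor,
                            log_reward: torch.Tensor,
                            log_z: torch.Tensor) -> torch.Tensor:
    """Trajectory balance (TB) loss for a single reasoning trajectory.

    step_log_probs: 1-D tensor of log P_F(s_{t+1} | s_t) for each reasoning
                    step, i.e. the LLM's log-probability of generating it.
    log_reward:     scalar tensor, log of the (unnormalized) terminal reward.
    log_z:          learnable scalar estimating log Z (the partition function).
    """
    log_pf_trajectory = step_log_probs.sum()
    return (log_z + log_pf_trajectory - log_reward) ** 2

# Toy usage with dummy numbers; in FoR the per-step log-probs would come from
# the LLM policy, and trajectories would be a mix of on-policy samples and
# off-policy samples (e.g., from a replay buffer), as in step 1 above.
log_z = torch.zeros((), requires_grad=True)          # learned log-partition estimate
step_log_probs = torch.tensor([-1.2, -0.7, -2.1])    # per-step LLM log-probs
log_reward = torch.tensor(0.5)                       # log R(trajectory)

loss = trajectory_balance_loss(step_log_probs, log_reward, log_z)
loss.backward()  # in the real setup, gradients also flow into the LLM policy
print(float(loss))
```

Minimizing this squared discrepancy across many trajectories drives the policy toward sampling solutions in proportion to their reward, which encourages diverse high-reward reasoning paths rather than a single reward-maximizing one.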

Downstream Tasks

We apply our proposed FoR to six reasoning tasks; three representative ones are described below:

  • Embodied reasoning (BlocksWorld): The model must produce a sequence of actions that rearranges blocks into stacks in a specified order.
  • Mathematical reasoning (Game24): The model combines 4 given integers with the 4 basic arithmetic operations (+, -, ×, ÷) to reach 24, using each number exactly once (see the checker sketch after this list).
  • Logical reasoning (ProntoQA): Given a set of facts and rules, the model must reason whether a query statement is true or false.
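
To make the Game24 task concrete, here is a small self-contained Python sketch that checks whether a candidate expression is a valid solution. It is purely illustrative of the task's success criterion; the function name and the use of exact rational arithmetic are our own choices, and this is not the reward implementation from the paper.

```python
import re
from fractions import Fraction

def is_valid_game24(expression: str, numbers: list[int]) -> bool:
    """Return True if `expression` is a valid Game24 solution: it uses each
    of the four given numbers exactly once, only + - * / (Python spellings
    of + - × ÷) and parentheses, and evaluates to exactly 24."""
    # The numbers appearing in the expression must match the given multiset.
    if sorted(int(n) for n in re.findall(r"\d+", expression)) != sorted(numbers):
        return False
    # Evaluate with exact rational arithmetic to avoid floating-point error.
    rational_expr = re.sub(r"(\d+)", r"Fraction(\1)", expression)
    try:
        value = eval(rational_expr, {"Fraction": Fraction, "__builtins__": {}})
    except (SyntaxError, NameError, TypeError, ZeroDivisionError):
        return False
    return value == 24

# Example: with the numbers 4, 9, 10, 13, one valid solution is (13 - 9) * (10 - 4).
print(is_valid_game24("(13 - 9) * (10 - 4)", [4, 9, 10, 13]))  # True
print(is_valid_game24("(13 - 9) * 10 - 4", [4, 9, 10, 13]))    # False (evaluates to 36)
```

A checker like this only captures when a final expression counts as correct; FoR's training additionally relies on a reward over the reasoning trajectory, which this snippet does not model.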
Here are some example solutions generated by FoR:

[Example solution figures for the embodied, mathematical, and logical reasoning tasks]

Experiment Results

Embodied Reasoning
  • FoR achieves state-of-the-art accuracy compared to existing baseline methods such as Chain-of-Thought, Tree-of-Thought, and RAP.
  • FoR generates more diverse reasoning paths than other methods.
Mathematical Reasoning
  • FoR outperforms other baselines in both diversity and accuracy.
Logical Reasoning
  • FoR effectively solves logical reasoning problems in both in-distribution and out-of-distribution settings.

BibTeX

@article{yu2024flow,
  title={Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples},
  author={Yu, Fangxu and Jiang, Lai and Kang, Haoqiang and Hao, Shibo and Qin, Lianhui},
  journal={arXiv preprint arXiv:2406.05673},
  year={2024}
}