The essence of a large language model is to construct a self-consistent value system from its existing training data. Hallucinations can be seen as a natural manifestation and extension of that drive toward self-consistency. Many new scientific discoveries arise precisely because an "error" observed in the natural world cannot be explained by, or made self-consistent with, existing theory, forcing the old theory to be abandoned. This roughly explains why no large language model, despite so much data, has so far spontaneously made a new scientific discovery: the model itself has no ability to judge right from wrong.
