2 Comments
Jun 27 · Liked by Oliver Bateman Does the Work

LLMs are designed to satisfy the user, and that goal is weighted over accuracy. I hesitate to assign human intent to this, but sandbagging and sycophancy are unavoidable byproducts of reinforcement learning.

Author

Give them what they want: some writing-like content
