Large language models can do impressive things, like write poetry or generate viable computer programs, even though they are trained only to predict the next word in a piece of text.
Such surprising capabilities can make it seem like the models are implicitly learning general truths about the world.
But that isn’t necessarily the case, according to a new study. Researchers found that a popular type of generative AI model can provide turn-by-turn driving directions in New York City with near-perfect accuracy—without having formed an accurate internal map of the city.
Yet despite the model's uncanny ability to navigate effectively, its performance plummeted when the researchers closed some streets and added detours.
When they dug deeper, the researchers found that the New York maps the model implicitly generated had many nonexistent streets curving between the grid and connecting faraway intersections.
…