"Twelfth Night Till Candlemas" – A 40-year book-quest

davidallengreen.com

181 points by ColinWright 2 days ago | 58 comments

tunesmith 2 days ago

What a wonderful story. I've also had the experience of someone in a writing community being able to name a story I read in my youth. It is a truly unique feeling to have a distant, hazy memory made real through the wisdom of another. It's similar to finding the perfect word for an uncertain feeling you've never been able to give voice to.

  • WillAdams 2 days ago

    Same for me: I couldn't recall the title or author of _Hit the Bike Trail!_, but someone on the /r/cycling subreddit thought to look through the _Publishers Weekly_ archives to identify it.

    Still much appreciated; I gifted it to some cousins of mine who are now the age I was when I read it.

wmlhwl 18 hours ago

I forgot the title of a Polish book I once read, but I remember many plot points. When I described them to GPT-4o, over five responses it gave me six books (three of them English and one Russian), but attributed all six to random Polish fantasy writers; even the Polish books had the wrong authors.

jimnotgym a day ago

Sidenote: David Allen Green, the author of this blog, is a brilliant writer on constitutional law in the UK. It is quite a subject since Britain doesn't have a written constitution. He was a wonderful guide through the Brexit chicanery.

  • freedomben a day ago

    For people like me wondering how it's possible not to have a written constitution: https://en.wikipedia.org/wiki/Constitution_of_the_United_Kin...

    tldr: it is actually (literally) "written", just not all in one place. It's scattered across various documents.

    • jfengel a day ago

      The US Constitution is "written" but is essentially incomprehensible without a stack of Supreme Court decisions the size of Mount Everest. (Perhaps literally.) None of the words mean what you would think they mean.

      • aidenn0 a day ago

        With the current court signalling a somewhat higher-than-normal willingness to overturn past decisions, those words may not mean what many lawyers previously advised they meant.

rm445 21 hours ago

It's interesting that the author is so disparaging of ChatGPT, when he himself had misremembered the title of the story as containing 'Michaelmas' and the importance of the goblins. What are these but hallucinations?

Obviously, it's no good that tools are offering useless results and making searches harder. But there's also the question of how you use the tools: asking GPT to one-shot a search for something that may not have been in the training data carries a hallucination risk that's easy to be aware of, while using AI to assist a web and archive search might have produced the same end result as what actually happened: a smart librarian kindly doing some searching.

Probably more fundamental breakthroughs are needed in how AIs 'know' stuff. But the gap between how humans and machines produce obscure half-remembered knowledge doesn't seem that big.

  • cjs_ac 20 hours ago

    A tool needs to complement its user. A tool that has the same weaknesses as its user isn't useful.

    • derbOac 17 hours ago

      He even emphasizes this in the piece — he extols the librarian as having the skills to recognize patterns in how people recall books, which includes imperfect memories.

    • ToValueFunfetti 18 hours ago

      I've found a great deal of use in working with other people, and they share my weaknesses far more than an LLM does. Even if I worked with exact duplicates of myself, I expect I (the one I, not the collection of clones) would still be more productive than if I worked alone- I routinely improve my own work when I read it back later after having lost context, so the context-free mes should be able to help in the same way.

    • 1970-01-01 15 hours ago

      It's worse than that. A hammer that shatters its weak handle after normal use is still a somewhat useful tool to the homebuilder. A hammer made of painted cheese only delays construction of the house.

dang a day ago

I've got one of these too. When I was 6 or 7 I read a book in the school library in which the villains were "bone people" and the heroes were space explorers. The bone people could read minds, so they could tell everything the heroes were planning and thwart them.

Eventually the heroes learn to keep their true thoughts in the back of their minds, where the bone people can't access them, and with this technique they defeat the bone people. It was the best book I'd ever read (up till then!) and I remember practicing keeping my thoughts in the back of my mind.

I've been trying for years to find this book again. Anybody have ideas?

Edit: it occurred to me that I should try an LLM, so I pasted the above text into O1 and, in a combination of anticlimactic and thrilling, it gave the answer straightaway. The book is https://www.amazon.com/Bone-People-Space-Science-Fiction/dp/... and the title is, er, "Bone People". Thanks ChatGPT!

Edit 2: having now read the OP (and it's great), I see that the ChatGPT angle was already in there. But it produced an opposite result in my case.

  • ddoeth 21 hours ago

    Maybe the difference is that you used O1 that has access to the internet, and the other users might not have.

    • pvg 20 hours ago

      It seems like the biggest difference is that the book is about bone people and is actually called Bone People.

      • dang 13 hours ago

        A triumph of LLM literalism. It would never have occurred to me that that might literally have been the title. Googling "bone people" or "bone people children's book" doesn't work; the SERPs are flooded by a different novel that won the Booker in 1985, interesting to zero six-year-olds. But "bone people science fiction" does work!

        I read (er, reread) it last night using the book-borrowing feature of archive.org which got them existentially sued: https://archive.org/details/bonepeople0000unse/. The mind-reading business hardly figures in the action; it's there, but marginal. Actually the humans defeat the bone people by kicking one of them in the head (er, skull) and then by blasting them with an air gun. Literally an air gun: it shoots air, and it turns out that air kills bone people. That was lucky!

        Upon second reading I remain of the opinion that defeating mind-readers by keeping one's thoughts in the back of one's mind is the most interesting thing about the book, so it's not surprising that was what stuck in my mind, though not so much in the back. It's been in a trunk in the middle somewhere.

        • tptacek 5 hours ago

          For anybody else playing the home game like I am, this is an easier read than Bruno Schulz's "Street of Crocodiles".

          Later

          No, that's not quite fair; you get halfway into this and it starts to be written like the Hadiths. "The Bone People do not think about a thing but what it is an evil thought." I'm always saying!

          • dang an hour ago

            I also found it strangely written and hard to read (as an adult). At the very end, on p. 72, there is a sort of appendix for teachers which says:

              The total vocabulary of this book is 280 words, excluding proper names 
              and sound words. The 15 words in roman type should be familiar to 
              children reading on a third-grade level. The 20 words above third-grade 
              level are shown in italic type.
            
            280 words seems remarkably few! That's probably why it reads like Gertrude Stein.

titchard 21 hours ago

I am glad the book was found in the end; I had a similar case that took me years to track down. My great-grandmother always used to send a Penguin Classics children's book and 100 brand-new pennies in a handkerchief at Christmas. For years I tried to track down one of those books, which described wolves crowding round a dying explorer's fire; it turned out to be White Fang by Jack London.

CobrastanJorji 2 days ago

It's funny. As soon as he described his problem, I suspected ChatGPT would enter the picture. It's often significantly better than search engines for finding the name of even an obscure work from a description, so of course folks on book-finding subreddits would use it a lot.

But the author's absolutely right to warn that it also regularly fails us, and the author's also right to celebrate the folks who are trained specifically in finding this sort of information in ways that the automated tools can't replicate yet.

  • jimnotgym a day ago

    I was sent a photo of a page from a book with a great piece of writing on it. The sender didn't know the book. I OCR'd the page and pasted the text into ChatGPT. It led me on a merry dance: it stated unequivocally that it was a book it couldn't have been, then started making up books by similar authors. Every time I said 'there is no such book', it apologised and then made up a new book. It was like talking to a salesman trying to bullshit his way through a pitch.

    I put a short piece of it into Google Books and it found it! I asked ChatGPT about the actual book and it claimed to know it!

    It was a book called Blood Knots by Luke Jennings. I bought it, and before I read it I saw the friend who had sent me the excerpt, and gave it to him. A year later I saw the same book, shelf-soiled, in an independent store. It was worth the wait; it was a great read.

    I also saw David Allen Green (author of the above) ask his question on Bluesky on my first day using it. Somehow I feel part of this story.

    • hypercube33 20 hours ago

      I typed a reply and deleted it, but I've had the same experience. Also, you typically don't need to OCR; with the phone app you can just snap a picture.

      Beyond books, it's really awesome at finding movies even with super weird things I happen to remember - I assume it's trained on quotes, scripts and maybe fan knowledge from IMDb or something.

      Song lyrics got the same treatment. Yes, it was a niche melodeath metal song, but the lyrics are very distinct, and Google/Bing can't seem to do exact string searches anymore; GPT was confidently incorrect about which band it was.

    • jccalhoun 16 hours ago

      I had a similar experience with OpenAI's DALL-E. I told it to make an image about public speaking without a microphone. It made one with a microphone. I said, "That has a microphone in it. I don't want anything resembling a microphone," and it gave me an image of a microphone surrounded by a Star of David that said "public speaking no microphone".

  • sumtechguy 2 days ago

    The author also made an interesting point about what ChatGPT does: it treats everything as relevant. The librarian who found the book, by contrast, systematically discarded possible 'facts' and substituted others (goblins -> demons) to work out what was going on. I'm not sure any AI does this currently.

    • ben_w a day ago

      ChatGPT does do that for me, when I'm using it for tasks like David Allen Green's book hunt.

      This has yet to help. If it can find it, it (so far) has not needed to change the details I provided; if it doesn't know, it will change them to something thematically similar but still not find what I wanted (and if I insist on requiring certain story elements that it had changed, it will say something along the lines of "no results found" but with more flowery language).

    • jcutrell a day ago

      I suspect that, given a reasonable prompt, it would absolutely discard certain phrases or concepts for others. I think it may find it difficult to cross-check and synthesize, but "term families" are a core idea of multi-dimensional embeddings: related terms sit at small distances in embedding space. I'm not super well versed in LLMs, but I do believe this would be represented in the models.
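The "small distances" intuition can be sketched with cosine similarity over toy vectors. Note this is only an illustration: the 4-d vectors below are invented for the example, whereas real embedding models produce learned vectors with hundreds of dimensions.

```python
# Toy sketch of "related terms sit at small distances in embedding space".
# The 4-d vectors are invented purely for illustration.
from math import sqrt

def cosine_similarity(a, b):
    """1.0 for identical directions, near 0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

embeddings = {
    "goblin":  [0.9, 0.8, 0.1, 0.0],
    "demon":   [0.8, 0.9, 0.2, 0.1],
    "bicycle": [0.0, 0.1, 0.9, 0.8],
}

print(cosine_similarity(embeddings["goblin"], embeddings["demon"]))   # high (~0.99)
print(cosine_similarity(embeddings["goblin"], embeddings["bicycle"])) # low (~0.12)
```

A search built on such vectors tolerates a goblin/demon mix-up, because the mistake barely moves the query in the vector space.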

  • mrob 2 days ago

    I think a more robust approach would be to restrict the generative AI to generating summaries of book texts. First summarize every book (this only has to be done once), and then use vector search to find the most similar summaries to the provided summary. Small mistakes will make little difference, e.g. "goblin" will have a similar embedding to "demon", and even entirely wrong information will only increase the number of books that have to be manually checked. Or better yet, develop an embedding model that can handle whole books at once and compare the vectors directly.

    Perhaps somebody with more compute than they know what to do with could try this with Project Gutenberg as a proof of concept.
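The proposed summarize-then-search pipeline can be sketched as below. Everything here is an assumption for illustration: the book titles and summaries are invented, and a crude bag-of-words counter stands in for the real embedding model (a trained model is what would actually make "goblin" land near "demon").

```python
from collections import Counter
from math import sqrt

# Hypothetical stand-in for a real text-embedding model: bag-of-words counts.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Step 1 (done once): summarize and embed every book in the catalogue.
catalogue = {
    "Book A": "a boy meets demons in a church at christmas",
    "Book B": "a detective solves a murder on a train",
}
index = {title: embed(summary) for title, summary in catalogue.items()}

# Step 2: embed the half-remembered description and rank by similarity;
# small mistakes only reorder the list, they don't zero out the match.
def search(description):
    query = embed(description)
    return sorted(index, key=lambda t: cosine(query, index[t]), reverse=True)

print(search("goblins in a church at christmas"))  # "Book A" ranks first
```

Even with the wrong creature name, the overlapping details (church, Christmas) keep the right book at the top of the candidate list for manual checking.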

    • hypercube33 20 hours ago

      That works for description searches, I'd imagine. However, I'm the type who'd remember "book with a boy who had a yellow ball" type details that only appear on one page of a book.

  • metalliqaz 2 days ago

    It's also interesting that years of trying on Twitter and Reddit failed, but asking on Bluesky succeeded. I'm certainly not claiming that Bluesky is some kind of great leap forward compared to Twitter. But it could be that being a new service it just isn't as crowded with bots, spam, and BS -- thus allowing the signal to come through.

jjulius 2 days ago

Reminds me of an old radio broadcast or some kind of audio recording that I've been trying to find for ~25 years. My mom had listened to it when she was younger, and had somehow managed to get it onto cassette tape for us to listen to when we were kids. It was some kind of Christmas story we'd listen to while decorating cookies, a kind of crazy tale that you never heard anywhere else, involving the towns of "Twinkle Twankle" and "Twonkle Twonkle" and other crazy wordplay like that. Unfortunately, that's the only unique bit that I remember, save for recalling a melody or two here and there and the timbre of the narrator's voice, neither of which help in tracking it down.

I'd love the satisfaction of tracking it down some day just like this person did.

  • Reubachi 2 days ago

    You'd be happy to know that googling "Twankle Twonkle Twonkle" will yield the result you're looking for :) I just found a few that look to be exactly what you describe.

    • jjulius a day ago

      No it doesn't[1]. Dropping the quotes just yields a bunch of results with those words, but nothing resembling what I'm looking for. I was very confident when I posted my initial comment that this was the case - after all, those four words are the only thing I recall, and therefore are what I have frequently Googled for. :)

      [1]https://imgur.com/a/BYM3d1r

    • teruakohatu 2 days ago

      I am not the OP, but nothing Google served up to me resembled an old radio show, even after instructing Google to search for the exact phrase.

ThinkingGuy a day ago

For anyone who has a similar quest (trying to find semi-remembered media from one’s youth, based on vague details), r/tipofmytongue on Reddit is a great resource. People there often succeed where AIs fail.

inanutshellus 2 days ago

I have one of these "40 year quests" too, but it's a cartoon.

Maybe you clever folk will be more ingenious than I've been.

The story goes like this:

An old king's life is upended one day when a beautiful, mysterious woman appears and says she'll grant youth and her hand in marriage to the man who completes some challenges. The only challenge I remember is that she sets up three cauldrons: one holds boiling oil, one milk, and the last ice-cold water.

The king wants to see it work first so he points to a random little boy and orders him to jump into the cauldrons or he'd be put to death.

The boy leaps into each cauldron and there's a terrible delay on the last one. The cold water cauldron even freezes over.

The boy breaks out of the last cauldron and has been transformed into a strapping young man.

The king, seeing proof that it works, decides to jump into the cauldrons. However, when he hops out, he's still an old man.

The woman announces that the magic only works once, and she and the stable boy walk away together, arm-in-arm.

...

I've searched for it online a fair bit but I've never found it.

Some details from my memory:

  * The cartoon was very short (less than 30 minutes; probably closer to 10 or 15)
  * It had no dialog, only sound effects and music.
  * A woman's voice narrated it. I can still hear her.
  * Now that I'm grown, I see it having a Slavic or Russian aesthetic.
  * The woman had black hair and a long white dress.
  * The king was very short with a big white beard.
  * The boy, when he turns into a man, has pointy boots and shoulder pads. :)
  * Probably made between 1975 and 1985
  * Part of an anthology (many cartoons on one VHS tape... ours had been run so much that it started to skew and stretch the image)
...

In my mind, it's aesthetically very similar to an heirloom that my grandmother made and I assume that's why I've always wanted to find it.

ChatGPT and the intertubes in general haven't been very useful.

  • romanhn a day ago

    Oh hey, that totally sounded familiar :) Pretty sure this is from Konyok Gorbunok (The Little Humpbacked Horse), a 1975 Soviet cartoon based on a famous Russian fairy tale. The bit you're describing is at 1:07:30 of https://youtu.be/fKc22eSL1gA.

    This doesn't quite fit several of the points you remember (very much in line with the post!), so perhaps it was some other edition of that same story.

    EDIT: so, I just plugged your description into ChatGPT and it gave the exact same answer, including an identical timestamp! Weird.

    • Terretta 21 hours ago

      Good lord. When you conceded that it "doesn't quite fit" I had lowered expectations -- but then I watched GP's description come alive!

      I can see the high collar getting misremembered as shoulder pads but the whole bit seems too dead on with the recollections to not be the match!

      • inanutshellus 14 hours ago

        The heirloom I have is a small rug and it has horses and riders on it, with pointy boots and shoulder pads. So I'm guessing that's where the overlap came in but yes, this is it! It's "The Magic Pony" and the English version is online too!

    • inanutshellus 16 hours ago

      HOLY MOLY THIS IS IT! You're my hero! Forever!

      The one I saw was dubbed in English but yes, this is it!

      I'd forgotten his flying horse and that's the title! HA!

      THE MAGIC PONY!

      https://www.youtube.com/watch?v=l0pY1P8Cw5o

      I also didn't remember it as being an hour long. This is so exciting! Thank you!

      > EDIT: so, I just plugged your description into ChatGPT and it gave the exact same answer, including an identical timestamp! Weird.

      In my defense I pasted my post into chatgpt before posting it so that this wouldn't happen, and it suggested "The King's Daughter and the Three Cauldrons" and "The Three Brothers".

      • romanhn 16 hours ago

        Haha, you're very welcome. The ChatGPT thing is odd in how close the answer was to my comment, including specific words I used. Makes me wonder about the likelihood of it being scraped and processed by OpenAI in the 10-15 minutes between posting the comment and trying it in ChatGPT. Or perhaps our brains are just LLMs after all :)

        • inanutshellus 15 hours ago

          My email address is in my profile, send me your address and I'll send you a thank you gift :)

          • romanhn 11 hours ago

            Not necessary at all, but the thought is very much appreciated regardless. Have a good one!

  • niccl a day ago

    This sounds slightly familiar. Were you in the UK at the time? There was a series on BBC during the children's watching time (pre-6:00 pm, I'd guess, about the same time that Belle and Sebastian [0] showed) that had Slavic fairy/folk tales. Not quite cartoons, but definitely a cartoonish vibe. A little like The Story Teller [1], but much earlier.

    Sadly, I can't recall any more about it than that, but maybe it'll help that you're not alone. And of course this could be nothing at all related to what you're after.

    [0] https://en.wikipedia.org/wiki/Belle_and_Sebastian_(1965_TV_s... [1] https://en.wikipedia.org/wiki/The_Storyteller_(TV_series)

  • brazzy a day ago

    > Now that I'm grown, I see it having a Slavic or Russian aesthetic.

    Maybe it was in fact produced in the Soviet Union or one of the other Warsaw Pact countries? They had their own animation tradition, and some of it was translated in the West (like The Little Mole, from Czechoslovakia), but I can easily see how such works could be very obscure to English-language searches.

chrisguilbeau a day ago

I'll add my experience to the mix. I was in Thailand in the early 2000s, eating at a night market, when I heard a song that sounded like something by the Beatles or another 60s band. I started looking for what it might have been when I got back to the States a year later; did it say "Doctor Jones"? Friends and Google were no help. Anyway, 20 years later I asked ChatGPT and it came back with "New York Mining Disaster 1941" by the Bee Gees... Simply incredible. I suppose there will be fewer of these decades-long searches now!

1970-01-01 15 hours ago

Another great example of how hallucination is the Achilles' heel of AI. Never implicitly trust its output.

cschmidt 2 days ago

I also had a science fiction book from my childhood that I kept trying to find. Eventually I did find the title and author through a chat with ChatGPT, unlike in this case. (It was Midworld by Alan Dean Foster, if anyone is curious. I'm not sure why that particular book stuck in my head.)

jnsie a day ago

Lovely and well told story though I'm not sure what the following excerpt has to do with anything:

> The second is that nowadays the real problem perhaps is not with Christmas decorations staying up too late, but with them going up too early, and with shops selling Christmas wares and playing Christmas music well before Advent, let alone Christmas

rossdavidh 2 days ago

"You will even notice how it neatly covers everything I could remember – giving equal weight to each data point and deftly joining them all together.

And again, what ChatGPT here had to offer was utterly – absolutely – false.

Like a fluent and practised (but unwise) liar it had contrived an account that fitted only the available information."

A fundamental flaw in modern "AI" is that it has no model of reality against which it is checking its (or anyone else's) words. It isn't even lying; it has no concept of the difference between truth and lies, therefore it cannot be made to stop lying, because it isn't even lying, it's just spewing language that sounds good.

This makes it a passable tool for sales, but an extremely poor tool for anything which requires accuracy of any kind. It's not that it is trying to be accurate and failing (in which case further work on it might be expected to improve it); it is not attempting to be accurate; indeed the concept of accuracy is not anywhere in its architecture. Vocabulary, grammar, syntax, yes; accuracy or truth, no. It will not get better at being truthful as it is worked on, it will only get better at being persuasive.

  • ben_w a day ago

    > A fundamental flaw in modern "AI" is that it has no model of reality against which it is checking its (or anyone else's) words. It isn't even lying; it has no concept of the difference between truth and lies, therefore it cannot be made to stop lying, because it isn't even lying, it's just spewing language that sounds good.

    For the early ones, the reality against which they were checking their words, was their training corpus. Now they've also got RAG and search.

    In the context of "find a story that fits this description", even the original training corpus was pretty effective. Not perfect, but pretty good… for stuff that was well represented.

    If all Transformer models could do was vocabulary, grammar, and syntax, they wouldn't have ever been usable as more than a very light form of autocomplete.

    Even word2vec got analogies (man is to woman what king is to queen, etc.).

    > it will only get better at being persuasive

    I kinda agree, unfortunately: it will *also* keep getting better at being persuasive, and this is indeed a bad thing. "LGTM" is easier to test for than "correct" — but to me, that's an example of humans having limits on how well we can model reality, how well we can differentiate between truth and lies, etc.
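The word2vec analogy mentioned above ("man is to woman what king is to queen") is literally vector arithmetic. As a toy sketch with invented 2-d vectors (real word2vec vectors are learned and have roughly 300 dimensions):

```python
# Toy illustration of word2vec-style analogy arithmetic.
# The 2-d vectors are invented for the example.
vectors = {
    "man":   [1.0, 0.0],
    "woman": [1.0, 1.0],
    "king":  [3.0, 0.0],
    "queen": [3.0, 1.0],
    "apple": [0.0, 5.0],
}

def analogy(a, b, c):
    """Find the word nearest to vector(b) - vector(a) + vector(c),
    excluding the three input words themselves."""
    target = [vb - va + vc
              for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
    def sq_dist(word):
        return sum((x - y) ** 2 for x, y in zip(vectors[word], target))
    candidates = [w for w in vectors if w not in (a, b, c)]
    return min(candidates, key=sq_dist)

print(analogy("man", "woman", "king"))  # queen
```

That such regularities fall out of training on plain text is a hint that these models capture more than vocabulary, grammar, and syntax.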