The computers are getting better at writing

(newyorker.com)

153 points | by jgalt212 5 days ago

29 comments

  • caconym_ 5 days ago
    The lesson we should take from GPT-3 is that we spend way too much time reading text that isn't really saying anything, and/or failing to find meaning in what we read. It's a neat trick, and I think it could absolutely replace much or most of the writing on the internet today. But is it good writing? No, not really. At least, not in the sense of writing as producing anything more substantive than slick prose with a vague, meandering context running through it.

    I think the next step (say, being able to produce short stories with actual literary value) will be substantially more difficult, mostly because it's substantially more difficult for human writers too. It takes learned abstract knowledge and deliberate intention to write something that's coherent and engaging and longer than a few hundred words, and that skill is separate from the intuitive prose generation ability that's essentially unconscious for practiced writers and comes easily to many avid readers (i.e. trained on a large corpus) who take up writing.

    It will be interesting to see how producers of actual substantive content incorporate these tools into their processes, if at all. I guess I can imagine using it to generate descriptions, for instance, but how does it do with dialogue? Probably not very well.

    • wpietri 5 days ago
      > we spend way too much time reading text that isn't really saying anything, and/or failing to find meaning in what we read

      Absolutely. It helps make clear that the function of words isn't always what we think. It's the sort of window into the machinery of the mind that helps us reverse-engineer what's really going on.

      It reminds me of the Oliver Sacks essay "The President's Speech", which Wikipedia describes as being "about a ward of aphasiacs [people who have lost the ability to understand words] and agnosiacs [here, people who understand only the words, as if reading a transcript] listening to a speech given by an unnamed actor-president, 'the old charmer', presumably Ronald Reagan. Many in the first group laughed at the speech, despite their inability to follow the words, and Sacks claims their laughter to be at the president's facial expressions and tone, which they find 'not genuine'. One woman in the latter group criticizes the structure of the president's sentences, stating that he 'does not speak good prose'."

      Glibness has a purpose, but it's not really about conveying information. It's for something else.

      • jamesjyu 5 days ago
        Founder of Sudowrite [1] here.

        > I think the next step (say, being able to produce short stories with actual literary value) will be substantially more difficult, mostly because it's substantially more difficult for human writers too.

        Totally agreed. Both my co-founder and I are professionally published short story authors, and we would be the first to say that using something like Sudowrite is no guarantee you will land a story in the New Yorker. Similarly, using Cinema4D won't propel you to the same quality work as Beeple. These are tools. It's up to you as the author to use them in a way that enhances your craft.

        My goal is to figure out how we mindfully incorporate AI into a writer's workflow. If you look at other fields like the visual arts, they have had a wealth of interactive tools, starting with Photoshop. What is the Photoshop for writing? (Hint: I don't believe it exists yet, and I want to build it.)

        In terms of the details, you'd be surprised at how well GPT-3 handles descriptions and dialogue. It's actually scarily good at dialogue [2] [3].

        [1] https://www.sudowrite.com

        [2] https://www.gwern.net/GPT-3#dialogue

        [3] https://www.jamesyu.org/singular/

        • bertil 5 days ago
          If you want to echo the last paragraph of the article, the system also doesn’t know when to stop.

          Which leads to a simple improvement: teach the model when to stop and what sections to remove. There are already great ML-based summarization engines, and systems able to answer questions from descriptions.
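          A toy illustration of the "knowing what to remove" idea (pure stdlib Python; the word-frequency scoring is my own crude stand-in for a real ML summarization engine):

```python
import re
from collections import Counter

def trim_to_top_sentences(text, keep=2):
    """Keep only the sentences whose words are most frequent overall:
    a toy stand-in for a summarization engine deciding what to cut."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        words = re.findall(r"\w+", sentence.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)

    ranked = set(sorted(sentences, key=score, reverse=True)[:keep])
    # Emit the kept sentences in their original order.
    return " ".join(s for s in sentences if s in ranked)
```

          A real system would swap the scoring heuristic for a trained summarizer, but the interface is the same: text in, shorter text out.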

          • Smithalicious 5 days ago
            I wonder if that could be a viable approach: having a separate "writer" and "editor" AI.
            • syrrim 5 days ago
              GPT-3 is the editor. The hard part of writing is getting a decent idea onto the page. GPT-3 can't do that, it doesn't think for itself. It is good at the second part of writing, creating a coherent string of sentences describing an idea. But since GPT-3 doesn't have decent ideas, the final product doesn't end up being interesting either. The most interesting use cases for GPT-3 involve pairing it with a human who will contribute the actual thinking to the project, with GPT being there to come up with interesting turns of phrase and nice conjoining sentences. Having an AI in the driver seat is still a long way off.
              • jcims 5 days ago
                I do the same thing when I write (except for my comments on HN lol)

                GPT-3 reads like the unfiltered garbage that comes out of my head when I'm trying to get ideas out. The primary difference is that I have empathy for the audience so I go through the effort to reread, clean it up, clear it up then cut it down.

                But that secondary editing process has nothing to do with the subject matter or initial content generation process. It was trained through years of *not* doing it and having my audience ask lots of questions and clearly misunderstand what I was trying to say (let's call it reinforcement learning). As a result I now have an intuition about which concepts need explanatory analogies, based on my understanding of what the readers know, and which are almost certainly going to be understood outright. I think this gets at the root of the difference.

                Basically when I write, I'm not trying to generate text, I'm trying to generate training data that will amend foreign models with a high fidelity representation of a subset of mine.

                This feels GAN-like but different. It's like you need to have two separate models, the 'expert' and the 'n00b' (or ten n00bs trained on different data), ask both of them the same thing, and one-shot retrain the n00b from the answers provided by the 'expert' until the n00b is able to generate content that the expert finds to be sufficiently similar.
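                For what it's worth, the expert/n00b loop sketched above resembles knowledge distillation. A toy version, with probability tables standing in for models (all names and numbers here are made up for illustration):

```python
def distill(expert, noob, lr=0.5, tol=0.01, max_steps=100):
    """Toy 'expert teaches n00b' loop. Both 'models' are just probability
    tables over answers; the n00b is blended toward the expert's output
    until the total variation distance drops below tol."""
    for step in range(max_steps):
        distance = sum(abs(expert[k] - noob[k]) for k in expert) / 2
        if distance < tol:
            return noob, step
        for k in expert:
            noob[k] += lr * (expert[k] - noob[k])
    return noob, max_steps

# Made-up distributions purely for illustration.
expert = {"yes": 0.9, "no": 0.1}
noob = {"yes": 0.5, "no": 0.5}
trained, steps = distill(expert, dict(noob))
```

                In a real GAN-like setup the "sufficiently similar" check would be a learned discriminator rather than a fixed distance threshold.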

                • isaacimagine 5 days ago
                  > Basically when I write, I'm not trying to generate text, I'm trying to generate training data that will amend foreign models with a high fidelity representation of a subset of mine.

                  Well said! Replying so I can find this thread again, please ignore.

                  • btrettel 5 days ago
                    In the future, rather than cluttering up the thread, bookmark the page in your browser or use the "favorite" button here on HN.
              • NoOneNew 5 days ago
                Yea, but that misses the point of all the qualms against the idea that this is "good writing". I know the name of this essay structure changes depending on region and generation, but the 3.5 essay structure teaches you the basic principles of "good writing": state what you are arguing, quickly overview your 3 points to prove it, explain and support each point in the next 3 paragraphs, then finish with a conclusion paragraph. Super simple, basic, on-the-nose writing to help teach kids how to write WITH PURPOSE. ML-based writing doesn't even come close to this. It's just good at the statistical chance of what words follow each other. It's a keyword hound that has some notion of what makes a sentence "look real".

                People who are afraid of the current generation of ML writing are absolutely dogshit writers with terrible taste in writing. I keep forgetting who said this, but there's a saying to never start writing by describing the weather. Lots of folks miss the point of this. You could modernize it by saying, "Don't start talking about your childhood when writing a recipe." Why? Because no one fucking cares. They want the recipe. Not a description of your childhood or what the weather was like. Take the time to notice articles and "journalist writing". If it starts with the weather, notice how incredibly long it takes for the author to get to the point or describe anything functionally meaningful. You'll notice these articles are extremely long in word count and extremely low in actual information/content. These are the people afraid, because yes, a machine can keyword-drop and fluff better than them.

                Is it possible that there will be a future AI/ML system that writes with purpose? Maybe. But realistic idea connection and logical flow are still a huge hurdle for AI/ML. Summary engines seem good on the surface, but again, if you actually read their output for a real-world reason, not for "look how good ML is", you'll notice they're as meaningful as a politician running for office. Lots of keywords, lots of nothing really said or done... just like anyone with a liberal arts degree.

                • alvah 4 days ago
                  Correct me if I'm wrong, but I suspect you misunderstand the purpose of all those recipes with ~2000 words of ramble followed by ~300 words of actual recipe. The 2000 words of fluff isn't for the reader, it's 100% for the search engine. It's required to provide enough room to crowbar in the required keywords without triggering an over-optimisation penalty. All in the name of getting it ranked in the SERPs, so people will actually read the ~300 words, or at least click on an ad or two. Unfortunately this is where search engine algorithms have taken us, for now at least.
              • goodpoint 5 days ago
                Many (human-written) books and articles are 10% content and 90% fluff and repetition. The articles shared on HN are a good example.

                If the goal is to increase the SNR, GPT-3 is probably not going to help.

                • shoto_io 5 days ago
                  May I ask what SNR is short for?
                  • ghaff 5 days ago
                    Signal to noise ratio.
                    • arethuza 5 days ago
                      I suspect "Signal to Noise Ratio"
                    • username90 5 days ago
                      Which is why we read comments and not articles. Articles are full of fluff, comments for some reason aren't.
                      • All of the salient points from the article will be quoted and discussed with context in the comments. As for the pieces that no one thought important enough to discuss...I appear to be missing nothing.
                    • Nav_Panel 5 days ago
                      It's funny, I think using Twitter helped me with this. Forced me to condense large thoughts into small spaces, learning to cut through the reddit-comment fluff. High-density writing. Will be harder for an AI to replicate.
                      • > But is it good writing? No, not really.

                        Agree. I'd say it's impossible to solve. Where would we even place our yardsticks?

                        Has society gotten any better than it used to be at assessing and categorizing what constitutes "great writing"? If so, how could that be measured against our past performance, especially when many people no longer read today due to our loss of attention span? (most won't even read this comment because it's way too long).

                        People are terrible at spotting some of the best writing when it happens. History is full of writers whose work could only be understood and appreciated after their death.[1] Death makes their work a finite commodity. There is no longer any chance of asking the author what they meant. Others get shunned because they were too radical, or too ahead of their times.

                        Great writers, whether credited for a radical new style or for radical ideas, often only get that credit decades (if not centuries) post mortem. When the work is too upsetting (because it's true), we need the distance of history and later generations to appreciate it[2].

                        Even "what" we comprehend and then appreciate changes over time, and within the same person every time we read it[3].

                        [1] 6 people centuries ahead of their time https://www.oxford-royale.com/articles/6-people-centuries-ah... <- there are hundreds of writers like this (it is just the top result Google gave me). How would an AI spot them, when all AI was created "in our image"? And if it did spot them, how would the AI convince us that this is worth a closer look, rather than that we need to destroy the AI?

                        [2] The Last Messiah by Peter Wessel Zapffe https://philosophynow.org/issues/45/The_Last_Messiah <- say something mediocre to ensure you get published and ranked in "best new books"; saying something profound can potentially get you ignored (at best) or killed (at worst). This ties back to [1].

                        [3] James Joyce, Dubliners https://archive.org/details/dubliners00joycrich / https://en.wikipedia.org/wiki/Dubliners <- there are hundreds of examples like Dubliners, but they are all different books depending on who reads them and when. The retrospect, and the integrating of what we read (and then re-read over and over at various stages of our lives) into our own reality, is what makes it great. Yet even if some minority of people does practice this, it's still highly subjective. How would an AI provide us with value here? (If it can, the value it creates would only benefit itself, never others.)

                        __

                        I'm not saying AI is incapable of writing better than it does now, but I think it would remain mediocre and never outperform humans. Also, the writing can't stand or be judged by itself; it can only be valued and appreciated through a human mind. Art (and many books are art) has the same problematic relationship with AI. John Berger's "Ways of Seeing" talks about this (but so does a lot of philosophy) https://www.youtube.com/watch?v=0pDE4VX_9Kk

                        • Kelamir 5 days ago
                          > (most won't even read this comment because it's way too long).

                          I admit I was going to skip it, but I saw a reference to a James Joyce book and reconsidered :)

                        • CuriouslyC 5 days ago
                          My expectation is that autocorrect and sentence completion tools will get better over time, until we get to the point where we have tools that take outlines and turn them into first drafts.
                        • WesolyKubeczek 5 days ago
                          I have a suspicion that, since GPT-3 gorged its enormously swollen fuzzy digital belly on a lot of real prose, it might, when prompted, occasionally throw in a piece of the prose in question, or a whole original passage, because to its fitness function that would be a suitable thing to do.

                          But then again, aren’t we all doing the same, regurgitating bits and pieces that we heard or read before and liked?

                          Don’t we all sometimes, when busy with our own thoughts and worries, occasionally spout out oddly appropriate responses to what our spouses are speaking, without actually listening to and processing the conversation, maybe because we heard it all so many times we can readily predict what they are going to say? And then we agree that we need to go take the alligator from the repair shop.

                          However, if anything, GPT-3 is mostly showing how little meaning there is, but in how many words.

                          Never mind me, I’ll pour myself another one.

                          • >> But then again, aren’t we all doing the same, regurgitating bits and pieces that we heard or read before and liked?

                            We can do that. We can also not do that. Language models can only do that.

                            My comment is stripped of nuance (language models don't do that exactly) but it should point at the difference: humans can be as dumb as bricks, or as smart as humans. Language models don't even come close to our brighter moments. They don't even come close to the brighter moments of our pets, and our pets can't generate human language.

                            You have kids? Think of whom you'd like to spend the next decade or two around: a five year old human or GPT-3? Which one would you find more interesting, original, curious, do you think?

                            I think people who are the most excited about GPT-3 are the ones who are used to even dumber, more boring interactions with computers than the repetitive and unoriginal text generation of language models. Or perhaps it's humans who are used to boring interactions with other humans. Technology has done some weird things to us of late I fear.

                            • WesolyKubeczek 5 days ago
                              > You have kids? Think of whom you'd like to spend the next decade or two around: a five year old human or GPT-3? Which one would you find more interesting, original, curious, do you think?

                              Kids and language models are very different kinds of "interesting". I like to be around my child, because I love her and it's good for the soul. If I only have one time slot and a choice who to spend it with, GPT-n can go pound sand, I'll be with my kid.

                              A language model can certainly be fascinating in other ways. It brings me into a philosophical mood and spurs some questions about the humanity and quality of human knowledge, and efficiency of passing ideas around, but those don't imply answers I'd like to have, and tend to eat my soul away.

                            • jamesjyu 5 days ago
                              Founder of Sudowrite [1] here.

                              > However, if anything, GPT-3 is mostly showing how little meaning there is, but in how many words.

                              We imbue words and symbols with meanings, and we strive to give our stories meanings that transcend the prose on paper. Yes, it's true that GPT-3 can spout non sequiturs. But overall, it stays on topic quite well compared to previous iterations like GPT-2.

                              But sometimes, you WANT that, you want to be provoked to get unstuck on writing that tough scene or character. I've found that using a system like Sudowrite is not only about generating new text, but reading the generations and discovering how the neural net is interpreting what you've written. I believe this particular use case hasn't been fully explored, outside of summarization algorithms.

                              [1] https://www.sudowrite.com/

                              • WesolyKubeczek 4 days ago
                                > Yes, it's true that GPT-3 can spout non sequiturs. But overall, it stays on topic quite well compared to previous iterations like GPT-2.

                                I've heard GPT-3 could produce better text than BuzzFeed writers. You could read it without even spotting a difference.

                                It doesn't so much imply GPT-3 is so good (it is good, no denying that) as it implies how amazingly content-free BuzzFeed is. And mind you, there are people working at BuzzFeed, writing articles and whatnot, and making a pretty penny off it. Even more people read the stuff.

                                Maybe we the humans actually need to dilute grains of actual useful information into gobs of fluff to be able to process it well, just like vitamins have different bioavailability depending on which food they are contained within. I'm fine with this conclusion too. It's just amusing to be aware of that.

                                The next thing we're going to learn is that a remote-working middle manager at some large company, who's rather good at his job and has even been awarded annual bonuses, is discovered to have been a GPT-3 (or maybe GPT-4) simulacrum all along, with his subordinates and bosses alike being none the wiser. Which is going to say tons about the standards of management at that company.

                              • claudiawerner 5 days ago
                                >Don’t we all sometimes, when busy with our own thoughts and worries, occasionally spout out oddly appropriate responses to what our spouses are speaking, without actually listening to and processing the conversation, maybe because we heard it all so many times we can readily predict what they are going to say?

                                No. Sometimes I can predict what a friend will say, but I won't actually say it. The more I think about it, the more I realise that AI at the moment is the unlistening (but hearing) spouse. And just as it seems we can engage with such a person (though only rarely), so too can the AI trick us by 'simulating' that behaviour.

                                • yesenadam 5 days ago
                                  > Don’t we all sometimes, when busy with our own thoughts and worries, occasionally spout out oddly appropriate responses to what our spouses are speaking, without actually listening to and processing the conversation, maybe because we heard it all so many times we can readily predict what they are going to say?

                                  Uh.. no, not me. That sounds very strange to me, and sad. I always listen to everything she says. And expect the same back. Otherwise it wouldn't be a good relationship, I think. (Maybe I'm super-naïve!..or something.)

                                • mattikl 5 days ago
                                  Rewinding a bit from computers doing all the writing, are you using any tools that help you write considerably better (not just spelling/punctuation checking)?

                                  As a non-native English speaker I would like to have a tool that points out my nonstandard usages of language. I think it could be something as simple as parsing the text and searching it against a corpus, but I have no idea how many "unique structures" a text typically contains where no one else has used those same exact words. AI could help to suggest better ways to formulate the text.
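                                  One crude way to sketch the "parse the text and search it against a corpus" idea (stdlib only; the two-sentence corpus below is a hypothetical stand-in for a real reference corpus):

```python
import re

def ngrams(text, n=3):
    words = re.findall(r"[a-z']+", text.lower())
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def flag_unusual(draft, corpus, n=3):
    """Flag every n-gram in the draft that never occurs in the corpus:
    a rough proxy for 'nonstandard usage'."""
    seen = set(ngrams(corpus, n))
    return [" ".join(g) for g in ngrams(draft, n) if g not in seen]

# Tiny hypothetical corpus; a real one would be millions of sentences.
corpus = "I am interested in the result. She is interested in music."
draft = "I am interested for the result."
unusual = flag_unusual(draft, corpus)
```

                                  On a real corpus you would also want frequency thresholds and smoothing, since even correct phrases can be rare.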

                                  • camillomiller 5 days ago
                                    As a fellow non-native who accepted not to strive for full assimilation, my suggestion is to do the same. You can refine your writing by reading a lot. Books like “The Elements of Style” and “On Writing Well” can help with grasping those structural elements you might be struggling to identify. Apart from that, our brains have been shaped by our native language, and we can’t really delete that. Which is great news! It means we can express concepts in English with structures and flows that, if grammatically correct, have the advantage of sounding original, different, sometimes exotic.

                                    One tool I use is Grammarly. I think it’s mostly crap, but if used properly it helps you identify the more blatant mistakes. Oh, and by the way, if you need a non-native speaker ego booster, try and read native speakers’ first drafts of anything.

                                    • adamauckland 5 days ago
                                      I wanted to write a message to a Swedish relative to say "Happy Birthday" but the literal translation felt wrong.

                                      I asked my wife how to say "Happy Birthday" and she said "What you've got there is the right translation but that's not what people say".

                                      I think one of the hardest things about learning another language is learning not to map words directly but to understand what would be said in a certain context.

                                      There is no English word for Hygge or Fika because the concepts simply do not exist. You can find a nearest match, but it lacks the meaning.

                                      • You know what I struggle with? Writing professional letters (or email) in my native language. I went to university and worked in the industry in the UK and so I only ever had to write in a professional style in English. I was trying to ask for some information from an online shop in Greek the other day (I'm Greek) and I found I had no idea how to start and end the email and how exactly to phrase the question I wanted to ask without sounding like I was talking to my bestie.

                                        Kind of the reverse situation than what you're describing, because I haven't learned to say those things in my native language, but for me it goes to show we learn lots of pre-baked turns of phrase that we adjust appropriately to the context when we need them.

                                        • adamauckland 5 days ago
                                          That's interesting because you're right: in English there are different voices depending on the context and who you're talking to. Is it a professional context or a casual one? Are you supposed to be doing something important or tactful?

                                          In some countries this extends to using entirely different languages depending on the context, especially in Eastern countries.

                                        • kubb 5 days ago
                                          This is only a problem when translating words or short phrases which is usually one of the first things that language learners try to do.

                                          When you know the original intent behind a sentence (for example, if you are its author), you can usually convey its meaning in any of the languages you know well.

                                          As your wife said, there is a way to express a friendly sentiment to somebody on their anniversary and let them know that you remember them. It's just not literally using the words "Happy Birthday".

                                        • mattikl 5 days ago
                                          That's a good point about turning your native language into an asset. The goal of communication is to be well understood. It's so easy to forget that and strive for something more. Good writing also keeps the readers engaged, maybe entertained, but is there much more to it than that?
                                          • thaumasiotes 5 days ago
                                            > It means we can express concepts in English with structures and flows that, if grammatically correct, have the advantage of sounding original, different, sometimes exotic.

                                            Yeah, that happens all the time. It always strikes me as unfair when someone comes out with an ostensibly perfect sentence and the result is "well, you didn't make any mistakes; all the words mean what you want them to mean and all the grammar is used correctly. But the sentence is still wrong; that's just not how we say that."

                                            • Al-Khwarizmi 5 days ago
                                              It's useful to learn how they say it, though. Not because we should pretend to be native speakers when we aren't, but because language is more efficient if you know the expected way to use it.

                                              The correct but awkward sentences from non-natives (I also use them, being a non-native English speaker) will more easily get lost in situations with background noise, several people talking, people not paying a lot of attention, etc., because not saying things the way people expect and are used to demands extra effort to understand.

                                              • camillomiller 5 days ago
                                                True, but OP was asking for an AI tool to help with this. I think the language bias in such a tool would be enormous, and it would contribute to flattening a piece of writing. Being effective by saying things the right way, or picking the right words for a specific context, is definitely important. My point was this: once you get to a good-enough point, what you're looking for is an editor (of sorts, not necessarily a professional one), someone who could recognize whether your idiomatic choice of words serves the piece or makes it harder to understand. What really helped me was asking native friends to read my posts and articles and tell me what they were getting from them. After a while, through this process, you'll get better yourself at recognizing these patterns. Before that, I worked hard on understanding the specific pitfalls of writing in English as a native Italian. One clear example: attributive nouns.
                                                • >> One clear example: attributive nouns.

                                                  Specific examples, please? I'm curious :)

                                                  • camillomiller 5 days ago
                                                    In a case where a noun modifies the following word, Italians tend to prefer specificity, therefore falling for the “noun + of…” construct. “A chicken soup bowl” is easy to understand, but for an Italian that’s a “piatto di brodo di pollo”, so an Italian native would tend to feel that “a bowl of chicken soup” is the better choice. In this case there’s no real mistake (and for “bowl of” you might argue there’s almost no difference in meaning), but most of the time you end up with convoluted sentences, especially if a genitive is lurking around: “Andy’s chicken soup bowl” vs. “the bowl of chicken soup of Andy”. Understandable, more familiar to an Italian speaker, yet kind of wrong.
                                                    • thaumasiotes 4 days ago
                                                      Just for reference, the default English constructions would be "bowl of chicken soup" and "Andy's bowl of chicken soup".

                                                      "Bowl of soup of chicken" is right out, but "chicken soup bowl" isn't much better. It's so anomalous that I might interpret it as meaning "a bowl for chicken soup" as opposed to "a bowl with chicken soup in it right now".

                                                      • Thanks for the example!

                                                        Wouldn't "piatto di brodo di pollo" translate to "a bowl of soup of chicken", as a more unnatural English sentence?

                                                        Greek is similar in that respect and I catch myself sometimes lapsing into such more micro-managed speaking, and I also noticed it in other Greeks (perhaps a few Italians also).

                                                • willyt 5 days ago
                                                  I’m a native English speaker, and you all write well enough in English that I didn’t notice from your writing that you are not native speakers. I went back and looked more carefully and noticed a few little mistakes, but nothing I wouldn’t write off as the result of someone writing quickly.
                                                • kristjansson 5 days ago
                                                  And English, in its predatory way, will adopt those 'different' words and constructions as its own in due time :)
                                                • ht_th 5 days ago
                                                  I am using LanguageTool. Not the website, but a command-line tool. You can configure it with all sorts of rules to check all sorts of spelling, grammar, style, and language issues. However, as far as I have noticed, LanguageTool is not a very advanced language-help tool. Still, it did improve my writing a lot. If nothing else, my writing has gotten more consistent.
                                                • biztos 5 days ago
                                                  I think real-time lookup in a corpus would be enough if it's implemented well.

                                                  I'm a non-native speaker (reasonably fluent) in two non-English languages, and when I'm writing, if anything strikes me as sounding "off" then I google the phrase and see how common it is.

                                                  In my less-fluent of the two languages, I end up doing this a lot, often just to check my work. As a result I write e-mails and such very slowly, but at a fluency level exceeding my actual fluency in that language. Which seems like a good tradeoff for written communication.

                                                  It would obviously be useful to have that done for me while I type.
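A local version of that check is just a phrase-frequency lookup over a corpus. A minimal sketch in Python (the corpus string here is a made-up stand-in for a real text dump):

```python
import re

def phrase_count(corpus_text: str, phrase: str) -> int:
    """Count case-insensitive, whole-word occurrences of a phrase."""
    pattern = r"\b" + re.escape(phrase) + r"\b"
    return len(re.findall(pattern, corpus_text, flags=re.IGNORECASE))

# Toy corpus; in practice you'd load a large text dump of the target language.
corpus = "I made a decision. She made a decision. He did a decision."

print(phrase_count(corpus, "made a decision"))  # 2 -- the common construction
print(phrase_count(corpus, "did a decision"))   # 1 -- the one that sounds "off"
```

The same idea scales up against an n-gram dataset: the rarer a candidate phrase is relative to its alternatives, the more likely it's unidiomatic.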

                                                  • bewbaloo 5 days ago
                                                    prowritingaid has a very good, free, web-based tool that will parse out all sorts of neat things in your work. for example: i write fiction and have a real problem with “echoes” — words that i repeat within a certain distance of each other — and i even miss them when proofing/editing. pwa is a godsend for this. if scrivener would add their own native version of this (hint hint in case any of you L&L people lurk here.. and it would take ten lines of java that i’d give to you for free) i would be completely covered for a word processor.

                                                    on the other side, the desktop version is expensive and needs some ui work badly. they say it’s ai-based, but it feels more like a reason to drm the product than anything. still, the free version is excellent for my limited use scope, so i hope you’d have some luck with it too

                                                    the other thing to try and do is find someone that you would use as a language buddy. there’s a subreddit (i forgot the name) where people look to pair with others who are looking for/offering certain language combos, then you get on imessage/whatsapp/whatever and use each other as a language bot. right now i’m looking for someone who speaks mandarin or cantonese and needs someone who speaks english, so if that’s you op, hit me up

                                                    • Well, repetition is considered harmful in literary writing, but that, too, is just a convention. There's no reason to force yourself not to do it if you find that it comes to you naturally. Rather, I'd try to find a way to develop it further into a unique style instead. Think of minimal music, heavy metal riffs and choruses (chori?), or repeating leitmotifs in classical music. Repetition drives a point home. It's a useful device in music and visual art; why not in writing also?

                                                      Actually, I find that constant repetition of the same words is absolutely necessary in technical writing. I used to try to write my research papers as if I were writing literature, finding new ways to say the same thing to avoid repetition, and I got some really harsh criticism as a result. I went to my university's center for academic English and they pointed out my mistake to me, and suddenly it was blindingly obvious. In a technical paper you have to use the same exact words to refer to the same concept, or you will immediately confuse the reader, who will think you mean two (or more!) different things, one for each different turn of phrase. So I adjusted my writing to exploit the rhythm created by the repetition of the same few words every few sentences, because of course I want to convey a precise meaning but I also want my papers to read well. And now reviewers note that my papers are well written and clear; so far nobody has complained about repetition.

                                                      • bewbaloo 4 days ago
                                                        i’m with you on the technical bit. in any style of writing where the goal is to concisely convey information, you’re right that style doesn’t matter in most cases, so repeating words isn’t really something to avoid. to kinda piggyback off your anecdote, a lot of people trip into the pitfall of trying to make whatever they’re writing sound too unique, and in doing so they (often inadvertently) add a layer of verbosity that contributes wordiness instead of meaning to what they’re trying to say.

                                                        my issue is that i reuse adverbs and adjectives within a few sentences of one another and simply don’t notice it — like saying “rain fell in gentle waves,” then a sentence or two later, “the tide rolled in, soft, gentle, crashing against…” it just feels.. lazy, unimaginative to me. i read somewhere that it’s a quirk of the brain (perhaps only for some people, i don’t remember) where words in immediate memory get looped and “stuck” there. that seems to be what happens when i write, and it chaps my ass.

                                                  • causality0 5 days ago
                                                    GPT-3-generated writing is a combination of comically absurd and incredibly deep. "The weak infants of the woman next door who had to be picked up with the tongs and thrown into the dustbin" is quite silly, but then there's "He wanted to crawl away from it, but there was no place to go," which is a surprisingly deep statement about the revulsion he feels towards himself. That a computer can generate that says very uncomfortable things about the skill of humans in recognizing genuine creative significance.
                                                    • throwawaygh 5 days ago
                                                      > which is a surprisingly deep statement

                                                      People used to say the same thing about Markov chains, and then the hype died down and it took 20 years for another hype train to pull into the station. The same will happen with GPT-n.

                                                      > says very uncomfortable things about the skill of humans in recognizing genuine creative significance.

                                                      Indeed. People see pictures in clouds and omnipotent beings in the stars. Humans are uncomfortably good at finding meaning in nothings.

                                                      • fapjacks 5 days ago
                                                        My biggest problem with people the world round is invariably rooted in this sense of ownership of what other people find meaningful. This kind of top-down dictation of ultimately subjective meaning is what is uncomfortable (because meaning is a form of tacit knowledge and therefore incommunicable) and is the path to inquisitions, woke / oracular authoritarian thought-policing regimes with reeducation camps.
                                                        • kumarvvr 5 days ago
                                                          And other people reward those findings with money. Good writing is hard. Writing that evokes emotions in others is even harder.
                                                          • watwut 5 days ago
                                                            It is extraordinarily rare to earn money by writing. It may be hard, but people don't reward it with cash.
                                                            • throwawaygh 3 days ago
                                                              Exactly. Getting people to see patterns is easy. Getting people to fork over money based on those perceptions is much more difficult. Ask any church plant.
                                                          • nl 5 days ago
                                                            The limitations of Markov chains are obvious within hours of starting to use them. GPT-3, less-so.
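For anyone who hasn't played with one: a word-level bigram Markov chain fits in a few lines, and its core limitation (each word depends only on the one before it) shows up immediately in the output. A toy sketch with a made-up mini-corpus:

```python
import random
from collections import defaultdict

def build_bigram_model(text: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, start: str, n: int = 15, seed: int = 0) -> str:
    """Random-walk the chain; the next word depends only on the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog and the dog saw the cat")
model = build_bigram_model(corpus)
print(generate(model, "the"))  # locally plausible word pairs, globally aimless
```

GPT-3 conditions on a long window of prior tokens rather than a single word, which is exactly why its output stays superficially coherent for much longer before the seams show.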
                                                            • cageface 5 days ago
                                                              What applications are Markov chains used in now?
                                                              • Mordisquitos 5 days ago
                                                                Sequence analysis in Bioinformatics, for example to align sequences and detect homology and as in HMMER: http://hmmer.org/
                                                                • amarant 5 days ago
                                                                  Kubernetes uses them to come up with random generated cluster names if you don't provide a name of your own..I bet many other applications use them in a similar manner
                                                                  • FractalHQ 5 days ago
                                                                    Cryptocurrencies and decentralized protocols like libp2p (used in IPFS) use Markov chains, IIRC.
                                                                • swiley 5 days ago
                                                                  Deep statements are deep because they communicate feeling. If the machine never felt anything, but tripped over meaning like this (and it's pretty obvious it's just gluing things together), it really dulls the effect.
                                                                  • bikeshaving 5 days ago
                                                                    Who’s to say that our greatest writers didn’t “trip over meaning”? A lot of poetic processes involve finding, curating and re-contextualizing interesting phrases (see found poetry), and there’s potentially a lot of value in applying the same process to computer-generated writing.
                                                                    • mponw 5 days ago
                                                                      I am wondering if we call some writers great because of what they did not, could not or cannot express (anymore). By their limited and graspable existence they serve as targets for our projections, whereas an AI is less attractive for that purpose, at least in my view.

                                                                      I'd rather think about the authors of AIs, their assumptions and their worldview.

                                                                      • tsimionescu 5 days ago
                                                                        That's why no one is truly moved by a single phrase. We often quote memorable bits from larger works ("to be, or not to be"), but that quote is only memorable because the entire work makes us believe that it is not an accident.
                                                                        • exporectomy 5 days ago
                                                                          I was, by a famous six-word poem: "For sale: Baby shoes. Never worn."
                                                                          • watwut 5 days ago
                                                                            Yeah. But the thing is, after you have your own baby, you may find yourself with like 5 pairs of never-worn baby shoes.

                                                                            So while it somewhat suggests death, that is only because of the literary context, which makes you expect something deep. If you saw that in an ad, you would not feel the same.

                                                                            • tsimionescu 5 days ago
                                                                              Sure, there are exceptions. And for those exceptions it may be fair to say that perhaps they were just a lucky stumble.

                                                                              Even here I do wonder how well this poem would be remembered if it weren't associated with Hemingway.

                                                                        • chillacy 4 days ago
                                                                          Today, at least, all the GPT-3 output we read is still hand-selected by another human, so if it communicated feeling to the editor, we get to read it.
                                                                        • notahacker 5 days ago
                                                                          Is it "surprisingly deep" though? Looks like two standard cliches of human writing often used in close proximity and the wider context suggests literal motion (which is also present in the silly line about the weak infants!)

                                                                          I did like "fuzzy belly" tbf, and Kafka himself might have appreciated the purely serendipitous irony of a line about "the bugs his father used to bring him as a little boy"

                                                                          • bckr 5 days ago
                                                                            I was actually taken in by the infants-in-the-dustbin image. I thought it was gruesome, then I reflected on how infant mortality has decreased and what a better time we're living in. This article was uncanny valley for me.
                                                                            • intergalplan 5 days ago
                                                                              Writers use all kinds of tools to "cheat", up to and including not actually writing. Mining computer-generated nonsense or coherent-but-empty prose for the odd accidentally-excellent phrase or sentence might end up being just another. Hell, it may already be.
                                                                          • minimaxir 5 days ago
                                                                            See also my experiments with GPT-3 on sane prompts, which have wildly varying quality even after generating them in bulk: https://github.com/minimaxir/gpt-3-experiments

                                                                            Surprisingly, creative writing hasn't been one of the use cases OpenAI has super-hyped for the OpenAI API, outside of AI Dungeon. For pure random generation, the necessary curation can detract from the time-savings advantages. (As an aside, the API is also extremely expensive for long-form content, to the point that I'm not sure how the economics work for these startups even with charging monthly fees.)

                                                                            I'm more bullish on small bespoke models for a given use case, which is what I spend my time researching. (example: a ~1M parameter GPT-2 Magic: the Gathering card generator: https://colab.research.google.com/drive/1VOt090UzvltoBgMdUZm... )

                                                                            • Generating M:tG cards is interesting to me. Can we talk about this a bit? :)

                                                                              My degree dissertation was an M:tG expert system that included a hand-crafted parser and generator for (a small subset of) ability text [1]. My Masters' thesis was a grammar induction system trained on M:tG cards [2]. And I finally managed to sneak some M:tG grammar induction in my papers as a PhD student [3].

                                                                              I had a quick look at the colab of your project. If I may be critical, the examples you have are ... so-so. I've seen previous attempts to generate M:tG text with neural nets, starting with the old project in M:tG Salvation that became Roborosewater and that used LSTMs if memory serves. Results have always been a mixed bag that looks promising but always falls down holes even with hand-curation. This still seems to be the case with your project.

                                                                              A couple of examples (you probably know already):

                                                                                −2: Put a +1/+1 counter on each other creature you control.
                                                                              
                                                                              "Each other creature" should refer to a card with the (super) type "creature" but this card is a "planeswaker".

                                                                                +1: Each opponent chooses a creature they control. If that creature was a copy of that creature, each other player reveals their hand, chooses one of those cards, then puts it into their hand, then shuffles.
                                                                              
                                                                              Which creature was that creature a copy of?

                                                                                −10: Target opponent sacrifices a creature.
                                                                                −9: Each opponent sacrifices a creature.
                                                                              
                                                                              Representing costs in a meaningful manner is a constant problem in every M:tG generator I've seen.

                                                                              The problems I highlight above are not with grammaticality, which is certainly a big step forward with respect to the past. But many of the abilities still don't make a lot of sense, or don't make sense to be on the same card, or have weird costs etc. If you want your project to be something genuinely new, I have to say I don't think it's there yet.

                                                                              My intuition is that it would take a lot more than language modelling to generate M:tG cards that make enough sense that it's more fun to generate them than create them yourself. I think it would be necessary to have background knowledge of the game, at least its rules, if not some concept of a metagame. It might also help to re-train standard NLP pipelines specifically for M:tG (e.g. some kind of ability-text specific anaphora resolution might help with "that creature" above).

                                                                              Also, I note that the new online version of the game is capable of parsing cards as scripts in a programming language using a hand-crafted grammar rather than a machine-learned model [4] [5]. So it seems to me that the state of the art for M:tG language modelling is still a hand-crafted grammar.

                                                                              __________________

                                                                              [1] https://github.com/stassa/Gleemin - unfortunately, doesn't run anymore after multiple changes to the Prolog interpreters used to create and then port the project over.

                                                                              [2] https://github.com/stassa/THELEMA - should work with older versions of Swi-Prolog, unfortunately not documented in the README.

                                                                              [3] https://link.springer.com/article/10.1007/s10994-020-05945-w - see Section 3.3 "Experiment 3: M:tG fragment". Cringe-inducing to read just a few months after writing, sorry.

                                                                              [4] https://www.reddit.com/r/magicTCG/comments/74hw1z/magic_aren...

                                                                              [5] https://www.reddit.com/r/magicTCG/comments/9kxid9/mtgadisper...

                                                                              • minimaxir 5 days ago
                                                                                The outputs on pageload are just a random uncurated set I ran. I never claimed smaller models like the MtG models were immune to the same curation issues as larger text generation models.

                                                                                As someone who did try to reproduce RoboRosewater with LSTMs years ago, I can say using GPT-2 overall has a higher signal-to-noise ratio in terms of generating interesting cards which is a win.

                                                                                • >> The outputs on pageload are just a random uncurated set I ran. I never claimed smaller models like the MtG models were immune to the same curation issues as larger text generation models.

                                                                                  Are you sure? I reloaded the page once, before commenting above, but the output didn't change. I keep reloading now and the same set of cards is displayed. I reloaded with Shift-F5 to be extra sure.

                                                                                  >> As someone who did try to reproduce RoboRosewater with LSTMs years ago, I can say using GPT-2 overall has a higher signal-to-noise ratio in terms of generating interesting cards which is a win.

                                                                                  I don't know what an "interesting" card is, when we're talking about automatically generated cards. If a card looks too much like an existing card (e.g. if it has abilities that you can find on cards in a real-world set) then it's not very interesting. If it's an ability that hasn't been seen before but is only a variant of an existing ability ("Destroy target donkey") it's still not very interesting. To generate really "interesting" cards a generator must go beyond what's in the M:tG corpus, but not so far out that the abilities don't make sense anymore because that's not "interesting", just "random". The ones generated by your project are not far out enough to be what I'd call "interesting" but I think if you tried to make them more interesting they'd also become less coherent than they are currently (looking at the same few results I keep getting anyway).

                                                                                  (This is not very harsh criticism I hope.)

                                                                                  • minimaxir 5 days ago
                                                                                    You have to run the notebook itself by following the instructions at the top, then you can customize the output (lower temperatures will follow the rules better).
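For context on the temperature knob: sampling divides the logits by the temperature before the softmax, so values below 1 concentrate probability on the model's top choices (hence more rule-following text) and values above 1 flatten the distribution. A toy sketch with made-up logits:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Softmax over logits / temperature, then sample a token index."""
    scaled = [logit / temperature for logit in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for stability
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(range(len(weights)), weights=weights)[0]

logits = [3.0, 1.0, 0.5]  # pretend token 0 is the model's top choice
rng = random.Random(0)
low_t = [sample_with_temperature(logits, 0.2, rng) for _ in range(200)]
high_t = [sample_with_temperature(logits, 2.0, rng) for _ in range(200)]
print(low_t.count(0), high_t.count(0))  # low temperature picks token 0 far more often
```

Real GPT-2/GPT-3 sampling works over tens of thousands of logits per step, but the effect of the temperature parameter is the same.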

                                                                                    Here's some examples of curated cards generated that probably fit your description of "interesting": https://www.reddit.com/r/magicTCG/comments/n396g8/i_trained_...

                                                                                    • Sorry, you said on pageload, I thought you meant when the page is loaded :)

                                                                                      On the cards you posted to reddit:

                                                                                      a) "Keening Vythat", "Tombsis Satyr", "Quarpling Barrier". These are typical nonsense-nonsense names (as opposed to the fantasy-nonsense names found on M:tG cards) that are typical, in my experience, of card names generated by language models trained at the character level. M:tG fantasy-nonsense names, like "Rathi", "Kapashen", "Viashino", etc, are derived from a real-world narrative created specifically to theme the cards. A backstory. Without such a backstory, "Vythat", "Tombsis", "Quarpling" are just random strings that have no reason to be used as names. This is an important limitation of this kind of card generation, especially given the very low chance that you'll see any more "Vythat" cards (and if you did, they'd likely have a completely different function). Also "Keening Vythat" sounds like a creature, rather than a Sorcery.

                                                                                      So, first problem: thematic consistency of names is all over the place.

                                                                                      b) The text on Keening Vythat is ungrammatical:

                                                                                        Basic lands are basic land type in addition to its other types.
                                                                                      
                                                                                      c) Seismic Mind has an interesting ability (the one that modifies its base P/T). The mana cost makes sense, actually (although I bet it'd end up at {4U} at R&D), and the effect is very Blue. Nice.

                                                                                      d) Tombsis Satyr also has an even more interesting ability (feeds the inner Johnny). The cost and colours are right and the Angel type is just the cherry on top. Very nice. If only it wasn't called "Tombsis Satyr"... X)

                                                                                      e) Quarpling Barrier's ability is more of a crippling weakness than an interesting ability. A creature without Defender would not be called a "Barrier" these days, either. Pass, I fear.

                                                                                      f) Trade Damage is more balanced than interesting, but it's very well balanced! It would fit very nicely in a modern tempo deck. Nice also.

                                                                                      So that's three nice, two not so nice and a bit of a problem with themes. Still, those are not that bad at all. Unfortunately, there's just five of them. At this point it's hard to tell whether your project is able to generate better cards than other projects - and why would it, if it's using the same tech as those projects? You can fine tune and turn more knobs, but what, fundamentally, is the innovation here?

                                                                                      I suggest that you add the five cards above (and/or others) to the colab of your project alongside the auto-generated ones, to have examples of the "best case" and "average case" together. That'll give a boost to the appeal of your project.

                                                                                      Anyway, better than I originally thought but still a ways to go.

                                                                                      • minimaxir 4 days ago
                                                                                        Again, this (and my other text generation projects) are optimizing more for humor/chaos than accuracy, and there is a bolded important notice before the text emphasizing that the results may not be legal. Per the other comments on that Reddit thread, that appears to be the correct approach as the crazier-but-barely-legal cards are what resonate the best (the "bad" RoboRosewater cards were the most popular, not the good ones).

                                                                                        Curating only cards which are grammatically good is in itself misleading. Part of the reason I work on text generation in the first place is to illustrate its limitations.

                                                                                        I encourage you to run the notebook itself instead of arguing a No True Scotsman about something I never tried to assert. This notebook is not a thesis paper, just a side project about a ML model that took more time to train than to write the code for.

                                                                                        • I'm not arguing a "No True Scotsman". I offered criticism to help you improve your work, as criticism should do. I'm sorry that you took it the wrong way.
                                                                            • bambax 5 days ago
                                                                              Many good and prolific writers of the past used ghost writers who prepared drafts and idea lists for them, in effect making their work more like that of editors (it is said Alexandre Dumas worked like this -- it's one of the only ways to explain his spectacular productivity).

                                                                              So if AI is capable of doing some prep work, why not?

                                                                              When writing fiction, my struggle is usually not in the writing itself (making sentences) but in figuring out what to write about (the content of a scene). An AI that would generate scene ideas from an input of, say, a list of characters and plot points, would be very helpful.

                                                                              Outputting sentences and paragraphs, not so much (I think).

                                                                              Also, the article seems to conflate Sudowrite and GPT-3 but it would be interesting to understand the difference between the two; is Sudowrite just a wrapper over the GPT-3 API that forwards requests directly? If not, what is its added value vs hitting the API directly?

                                                                              • jamesjyu 5 days ago
                                                                                Founder of Sudowrite here.

                                                                                We are built on GPT-3, but we do a bunch of things on top that optimize Sudowrite for the narrative fiction/non-fiction use case.

                                                                                For example, we have a description tool that, given a highlighted phrase, gives you options for telling phrases in the 5 senses and metaphorical uses. Another tool analyzes a story premise and gives you possible twists and story turns, based on the genre of your story, and also gives you possible characters that could inhabit that world, complete with both external and internal character descriptions.

                                                                                I agree that for writing stories, the hard part is in the forest and not the trees (although this isn't true for everyone). And systems like GPT-3 can help with that, too, simply because their level of understanding of text is approaching that of a human.

                                                                                • bambax 4 days ago
                                                                                  Interesting! Thank you for answering.

                                                                                  How long is the waiting list to join Sudowrite beta?

                                                                                  • jamesjyu 4 days ago
                                                                                    It’s gotten quite large, but we plan on sending more invites out soon.

                                                                                    We are prioritizing folks who are working writers and have an immediate project they’re working on that could benefit from collaborating with an AI. If this sounds like you, sign up on the beta form and also send me a note about how you’ll use Sudowrite to jamesjacobyu@gmail.com

                                                                                    • bambax 3 days ago
                                                                                      > working writers and have an immediate project they’re working on that could benefit from collaborating with an AI. If this sounds like you

                                                                                      Not sure. I'm not a professional writer. I just finished my first novel (in French) and am starting a new one in English. I think the idea is great and quite original but I struggle a bit with scene ideas and general structure.

                                                                                      I think what I'm looking for is a sparring partner... Maybe I just need to paint a face on a volleyball...

                                                                              • jvanderbot 5 days ago
                                                                                This all reminds me of the poetry and intentionally over-used analogies and similes of junior high.

                                                                                People who look for meaning in it will find it, but it's mostly nonsense.

                                                                                It reminds me of early face generation... the eyes, nose, etc. were all in the right place, but it was mimicking the appearance, not the structure.

                                                                                I'm more interested in a convincing paragraph than a convincing sentence.

                                                                                • troelsSteegin 5 days ago
                                                                                  Are people getting better at reading? A deep read is an immersive, flow experience. But to engage in reading you have to feel you have the time and focus to commit to it, and then not be interrupted. I don't want machine-generated narrative; I want more time spent deep reading.

                                                                                  I do way more scanning and skimming now than deep reading - authoritative source, credible, what's new, what's useful - it's all critical thinking and done as efficiently as I can make it. It feels like work. I always wonder if more structuring over the stuff that I read - entity and concept extraction, thesis extraction, separating facts from opinions, a topic-centered feed - would be better. I haven't found or built the right experience yet.

                                                                                  To me, narrative generating models are interesting only as a step toward a better critical reading experience, and the win there would be generating useful abstractions that help one understand faster. Arguably, writing style is one such abstraction.

                                                                                  Maybe there's potential in being able to generate readable narratives from things that people have a hard time describing, like datasets. Then the grail is business dashboards composed as sonnets.

                                                                                  • amilios 5 days ago
                                                                                    It's clear that the system occasionally struggles to make coherent sentences that communicate actual meaning -- others have pointed this out well enough. I think what's occasionally lost in these criticisms is the point the article is making about the transformation of the role of the writer from a producer to a curator. This transformation has already occurred in other fields; the job of a professional translator (for text) at this point is mostly starting with a machine translation and slightly tweaking it in order to produce something more natural. I can easily see this happening with GPT-N: the writer inputs some ideas, then filters what's produced and/or tweaks it to make the final product. It doesn't matter if X% of what it produces is trash: if there is usable content, the role of the writer has been changed. If the machine produces something like the article's examples on the first go, and all the "writer" needs to do is tweak it a bit to modify the bit about throwing babies in the trash, then it is still groundbreaking.
                                                                                    • barry-cotter 5 days ago
                                                                                      > This transformation has already occurred in other fields; the job of a professional translator (for text) at this point is mostly starting with a machine translation and slightly tweaking it in order to produce something more natural.

                                                                                      That certainly isn’t the case for Chinese-English translation. It’s faster to translate from scratch than to edit machine-translated text if you know what you’re doing and have quality standards. One of my friends did German-English translation, and machine translation just isn’t fit for purpose there either if you have any real quality standards. You can’t skip the part where you know what’s going on.

                                                                                      • groby_b 5 days ago
                                                                                        > the job of a professional translator (for text) at this point is mostly starting with a machine translation and slightly tweaking it in order to produce something more natural.

                                                                                        Not really, no. That might be true for instruction booklet translation (they sure read like it), but no, for a decent translation, you'll need to let a human have a full go at it.

                                                                                        Writing is as much about filtering & tweaking GPT-3 output as sculpting is about "removing everything that doesn't look like the final sculpture", as the old adage goes.

                                                                                        It is useful for one purpose: To do an absolutely minimal job and to externalize some more work onto end users. It'll also be popular for spam^W SEO, and for writing filler content for content graveyards. It will not, however, be useful for writing.

                                                                                        I have no doubt that we'll see a lot more tooling supporting writing, and machine-assisted writing. I have equally no doubt that GPT-3 and friends aren't going to be it.

                                                                                        • Clewza313 5 days ago
                                                                                          Sure, "decent" translation as in novels etc will continue to be hand-crafted, but the vast majority of translation is low-end bulk stuff like business reports and technical documentation, paid pennies per word. I know one (former) professional translator, and dumping text into Google Translate was always her starting point.

                                                                                          Interestingly, her job got harder when Google rolled out the new deep learning models, because instead of outputting visibly broken garbage that kind of flagged itself for checking, it now tends to produce beautifully formed sentences with subtle but often critical fuckups (wrong sense of a word, missing negation, parsing a pronoun incorrectly, etc) that can affect meaning.

                                                                                        • falsaberN1 3 days ago
                                                                                          A friend of mine is a Japanese person who translates Japanese<>English stuff and machine translation is absolutely unthinkable there, and would take much longer to "tweak" into something that works. It's no wonder there's high demand for people like him.
                                                                                        • db39 5 days ago
                                                                                          This is something I've been deep into recently. Just a couple of days ago I published a short paperback story (grimtalesbook.com) written and illustrated by AI - GPT-3 for the writing in this case.

                                                                                          I liked the analogy aimor made in this thread between the readiness levels of autonomous driving and autonomous writing. From my experience that feels right, we're not at L5 yet - you can't just let the AI write continuously because you'll just end up with junk wrapped around some nuggets of gold.

                                                                                          In the future if we get to 'L5' writing, your child will have the ability to say 'OK Google, tell me a story about my adventures as a dragon slayer' and Google will respond happily with a perfect bedtime story. And the next day, you'll get a physical copy of that story in the mail. Luckily, that future isn't here yet.

                                                                                          The way Grim Tales was written with GPT-3 was by using some human intervention to keep the AI on track to write a cohesive narrative with a plot and dialogue. A couple of rules like removing anything that doesn't progress the narrative, and keeping the narrator in first person meant that the story was far more readable and didn't meander around like common examples.

                                                                                          Right now - for professional writers - AI is probably good enough as a co-pilot used to create alternate scenes at points in the narrative, or as a tool to help with writer's block.

                                                                                          • rdedev 5 days ago
                                                                                            I wish the GPT-3 API were public. I tried out GPT-2 when I was writing my SOP, but it was really bad whenever there was a pretty technical word in the sentence. I really want to see how GPT-3 handles those cases.
                                                                                            • enkid 5 days ago
                                                                                              Is it just me, or is the "continuation" grammatical but completely nonsensical? It talks about infants being thrown into dust bins and calling them animals - not in an intentional way. It talks about trying to crawl away from the blood on his belly. It talks about bugs being ill. It just doesn't actually make sense.
                                                                                              • malloryerik 5 days ago
                                                                                                Absolutely, because GPT-3 is a super hot rod text prediction algorithm but has strictly zero consciousness or concept of what it's writing about. It's merely using statistics to do more complex versions of stuff like, when words a, b, c, d, and e are used together, words f, g, h, and j are typically used as well. Take this simple kind of idea and push it to the current max et voilà, GPT-3.
                                                                                                • Swizec 5 days ago
                                                                                                  It makes sense the same way a horoscope or a Myers-Briggs test makes sense — the reader adds their own meaning.
                                                                                                  • dash2 5 days ago
                                                                                                    I'm not sure it makes much less sense than Kafka's Metamorphosis, which starts with the hero waking up as a cockroach.
                                                                                                  • lbacaj 5 days ago
                                                                                                    I have been doing a lot of writing lately, short and longer essays. I have also written on-and-off for the last ten years.

                                                                                                    Writing well is not easy.

                                                                                                    I used to think I wrote well until I started reading better writers. The whole idea that we will just slap together a bunch of text using some “AI” and it will resonate with people is crazy to me. As an engineer who has dabbled in some AI and ML, and has even seen what GPT-3 can do, we are as far away from being able to use AI to write a best seller as we are from landing on the Sun. I'm not saying it will never happen, just that nothing we are building right now will be able to do that.

                                                                                                    Good writing factors in human culture, sparks emotions in people, tells stories that resonate. I will keep going to sleep at night safely assuming that no AI will out-write humans anytime soon and you should too.

                                                                                                    • Swizec 5 days ago
                                                                                                      You are right. AI will not replace good writing. AI will replace typical writing – SEO spam, news reports, ads, coding tutorials, etc.

                                                                                                      Good writing has insight. AI can’t do that yet. Most writing is filler lettuce. AI is great.

                                                                                                      • ryandrake 5 days ago
                                                                                                        Most news articles don’t seem to be proofread anymore! At least AI spells perfectly. It will likely soon replace uncreative who/what/where/when news writing.
                                                                                                        • tsimionescu 5 days ago
                                                                                                          Only for papers that don't care about who/what/where/when, because GPT-3 itself doesn't really care about those questions.
                                                                                                          • But if you write such an article, you need to convey specific information - the "who/what/where/when" of the news story. That information was not available to GPT-3 at training time (e.g. it doesn't know that Biden is the POTUS now), so someone will have to sit down and craft a prompt that contains this information and that causes GPT-3 to generate a piece repeating it. All of that sounds like a lot more work than just writing the article by hand.
                                                                                                        • Animats 5 days ago
                                                                                                          > we are as far away from being able to use AI to write a best seller as we are from landing on the Sun.

                                                                                                          Harlequin Romances, though, those little soft-core porn pamphlets for women found at supermarket checkouts, could probably be generated. There are about 4,000 of them, and they're written to a set formula - “Boy meets girl, boy loses girl on page 56, and, by page 180, the book would end with a marriage proposal.” Load up the training set and profit.

                                                                                                          • tartoran 5 days ago
                                                                                                            But that’s not good writing, it’s just junk remixed; what would be the point? And that could be procedurally generated.

                                                                                                            In general I don’t see the utility of having AI write stuff for us to read; it’s not like we are running out of things to read. What I’m more interested in is getting a computer to memorize facts, understand the context and subtleties of a conversation, to the extent that you could ask it questions and it could infer answers from that ingested knowledge we fed it. And I think we’re far from that, though we’re making fast strides in a different direction.

                                                                                                            • lacker 5 days ago
                                                                                                              AI is pretty far away from those, too. The "window" mechanism that GPT-3 and similar methods use basically prevents it from staying on-topic over the course of a few paragraphs.
                                                                                                              • dumpsterdiver 5 days ago
                                                                                                                Well, it's not just women; I have read and enjoyed such writing. But you are correct: it's writing that lacks substance, intended for people who expect nothing more. Sex is easy to write about. It doesn't take much to turn people on.
                                                                                                                • watwut 5 days ago
                                                                                                                  I used to read a lot of junk paperbacks - mostly horrors, but also stories about truck drivers, detectives, and western stories. They were schematic to the extreme. Looking back, it was the same story over and over and over. I liked them back then. I also used to read a lot of repetitive sci-fi: those small Star Trek books.

                                                                                                                  I never read Harlequin, but it being repetitive is not something special. Back when people read a lot of books, we did not read James Joyce and Charles Dickens exclusively. I mean, spiderman comics, ninja turtles comics, they were all the same story repeated again and again too.

                                                                                                                  We read easy for fun books.

                                                                                                                  • anthk 5 days ago
                                                                                                                    >I used to read a lot of junk paperbacks - mostly horrors, but also stories about truck drivers, detectives and western stories. They were schematic to the extreme.

                                                                                                                    Heh, ditto in Spain. In most cases the same authors would rehash a detective story as a sci-fi short novel by just changing some devices and lore and calling it done.

                                                                                                                    Well, in the end cyberpunk is just futuristic noir.

                                                                                                                    • Dan Brown seems to do this. The Da Vinci Code, Angels & Demons, and the one about the NSA all have the same plot structure.
                                                                                                                      • anthk 5 days ago
                                                                                                                        Although my case was about really cheap short novels from the '70s; they were almost pulp fiction.

                                                                                                                        An infamous crime in Chicago with mafia and detectives -> some crime in Orion with spaceships, Martian troops and so on.

                                                                                                                        Although some of them were fairly good, like one about a time travel paradox.

                                                                                                                        • anthk 5 days ago
                                                                                                                          Just copycat Broken Sword so nobody notices anything.
                                                                                                                      • Junk paperbacks about truck drivers sound interesting... Remember any titles?
                                                                                                                    • Yes, when signing up for the beta I almost included 'erotica', but I thought better of it.
                                                                                                                    • coffeefirst 5 days ago
                                                                                                                      Very true.

                                                                                                                      GPT-3 (and everything like it) is the perfect tool for SEO spam and filling out college papers with blather if you're confident no person is actually reading it.

                                                                                                                      It's impressive. No, really, it's an incredible simulacrum. But it's also just a slightly more practical version of the infinite monkeys with typewriters thought experiment.

                                                                                                                      • spoonjim 5 days ago
                                                                                                                        Computer-generated text can put tens of millions of people out of a job, manipulate people into emotions, phish for scams, etc., without needing to produce text like James Joyce.
                                                                                                                        • nautilus12 5 days ago
                                                                                                                          I almost feel like the effects of Joyce, obscure to most readers, may be easier for GPT-3 to generate than other things, and only Joycean scholars, of which there are few, would be able to tell.
                                                                                                                        • JamesBarney 5 days ago
                                                                                                                          > we are as far away from being able to use AI to write a best seller as we are from landing on the Sun

                                                                                                                          We might be as far away from both. But our velocity approaching those goals could not be more different. In the last 10 years we've made tremendous progress in terms of writing. GPT-3 is leaps and bounds better than GPT-2. I'm not convinced a GPT-7 couldn't be mostly responsible for a best seller.

                                                                                                                          • Teever 5 days ago
                                                                                                                            This comment is correct.

                                                                                                                            The most interesting question that it raises is whether the comment was written by GPT-3. How would we ever know for certain?

                                                                                                                            • tsimionescu 5 days ago
                                                                                                                              It's simple: GPT-3 doesn't have anything it wants to communicate. If you've taken some message from the comment, it's either written directly by a human, or some human has looked through god knows how many GPT-3 outputs until they found one that actually sounds like their message.
                                                                                                                          • aimor 5 days ago
                                                                                                                            What level of Fully Self Writing are we at right now? Maybe L1 with L2 on the horizon?

                                                                                                                            I like the idea of text generation as a writing aid, even a flawed network can help a stuck brain. But say writers really do hate writing, I still come back to the big question: Who's going to read it all? Cheap writing is bad enough already (think stock market news, recipe blogs, or any of my HN comments), I can't begin to fathom how much text I'll be scrolling through once GPT-7 is released. I suppose the automated writing will be consumed by automated readers then, and my involvement starts to feel kind of unnecessary.

                                                                                                                            • 8bitsrule 5 days ago
                                                                                                                              Couldn't help but notice that 'Xanadu' uses a lot of rhymes ... GPT apparently didn't notice that constraint.

                                                                                                                              And then there's: "While all the stars that round her burn’d, Bow’d to the ground and based their fires. To the one ever-branching cloud" When 'base' is used as a past-tense verb, we expect a prepositional phrase ... before the next period. The preposition is never 'to'. Doesn't matter; the period stopped the English part.

                                                                                                                              "That blew and drifted—blow and drift;" What?

                                                                                                                              "If you told me that Coleridge wrote it, I would believe you."

                                                                                                                              Next.

                                                                                                                              • gwern 4 days ago
                                                                                                                                > Couldn't help but notice that 'Xanadu' uses a lot of rhymes ... GPT apparently didn't notice that constraint.

                                                                                                                                Due to an obscure low-level technical decision (encoding text into BPEs), phonetics are erased from the GPT training data, and so therefore neither GPT-2 nor GPT-3 can learn to rhyme or make puns (they can only memorize relatively common pairs by bruteforce): https://www.gwern.net/GPT-3#bpes

                                                                                                                                I put a good deal of effort into testing this, including "Kubla Khan": https://www.gwern.net/GPT-3#samuel-taylor-coleridge Just doesn't work, and it never will until OA trains a model using some sort of better encoding which preserves phonetic information. (Character-based, unigram LM, or BPE dropout might all work, we don't know yet.)
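To make the encoding point concrete, here is a toy sketch of the BPE merge procedure. (GPT-2/3 actually use a byte-level variant with a fixed ~50k-merge vocabulary learned from a huge corpus; the corpus and words below are invented purely for illustration.) Because merges follow spelling frequency rather than sound, rhyming words like "fires" and "choirs" can end up with completely unrelated segmentations, so the model never sees that they share a sound:

```python
from collections import Counter

def _merge(symbols, pair):
    """Replace every occurrence of the adjacent pair with one fused symbol."""
    out, i = [], 0
    while i < len(symbols):
        if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

def learn_bpe(corpus, num_merges):
    """Greedy BPE: repeatedly merge the most frequent adjacent symbol pair."""
    vocab = {tuple(w): c for w, c in Counter(corpus).items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, count in vocab.items():
            for i in range(len(word) - 1):
                pairs[(word[i], word[i + 1])] += count
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]
        merges.append(best)
        new_vocab = {}
        for word, count in vocab.items():
            key = tuple(_merge(list(word), best))
            new_vocab[key] = new_vocab.get(key, 0) + count
        vocab = new_vocab
    return merges

def segment(word, merges):
    """Tokenize a word by replaying the learned merges in order."""
    symbols = list(word)
    for pair in merges:
        symbols = _merge(symbols, pair)
    return symbols

corpus = ["fires"] * 20 + ["fire"] * 20 + ["choirs"] * 2
merges = learn_bpe(corpus, num_merges=4)
print(segment("fires", merges))   # ['fires'] -- frequent, so one opaque token
print(segment("choirs", merges))  # ['c', 'h', 'o', 'ir', 's'] -- rare, so split up
```

The frequent word collapses into a single atomic ID while its rhyme partner stays fragmented; no shared unit encodes the "-ires"/"-oirs" sound, which is the sense in which phonetics are erased before the model ever sees the text.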

                                                                                                                              • wsh 5 days ago
                                                                                                                                I had to rewrite a letter for someone the other day and used Google Docs for the first time in a while. I was struck by how often it made autocomplete suggestions and how good they were: not just stock phrases (“Thank you in advance for your prompt attention...”) but substantive text, too.

                                                                                                                                This kind of help with the mechanics of composition and usage, however, is a far cry from the critical thinking needed to be an effective advocate, expositor, or entertainer. There’s more to being a writer than making plausible sentences and paragraphs.

                                                                                                                                • konfuzio 2 days ago
                                                                                                                                  GPT-Neo works well for drafting emails. Find our approach here: https://www.helm-nagel.com/open-sourced-gpt-Neo-writes-indiv...
                                                                                                                                  • ayushchat 5 days ago
                                                                                                                                    I've actually used tools like copy.ai and conversion.ai for a couple of copywriting projects and found them very useful. Of course the output isn't perfect and human intervention is required, but it is way better than doing the writing myself from scratch. I think going forward we are going to see a lot of human + AI writing projects instead of purely human or purely AI ones.
                                                                                                                                  • jamesjyu 5 days ago
                                                                                                                                    Oh wow, just seeing this post made it to the front page. Founder of Sudowrite here, happy to answer questions.
                                                                                                                                    • libertine 5 days ago
                                                                                                                                      I don't mean to put you on the spot, but I have been wondering about this:

                                                                                                                                      Do you think that such tools will have the eyes of the regulators on top of them due to copyright issues, since the data used could have a lot of copyrighted material?

                                                                                                                                      I haven't yet made up my mind about this: is the output original content, or just shuffled proprietary content?

                                                                                                                                      • jamesjyu 5 days ago
                                                                                                                                        (Caveat: IANAL) As far as I know the issue of copyright and AI generated content hasn’t been tested in the courts yet. There are two issues:

                                                                                                                                        1) The author inadvertently uses output from GPT-3 that happens to be close enough to copyrighted material to be a problem.

                                                                                                                                        This is why we recommend that authors do two things: a) enter as much of their own original work as possible to prompt Sudowrite, and b) edit and incorporate the AI output organically into their work. The chance that GPT-3 outputs verbatim copyrighted work in that case is small. As we grow, we also plan to incorporate plagiarism checks in the background.

                                                                                                                                        2) Someone could challenge the author’s copyright on a piece of work that was primarily written by GPT-3.

                                                                                                                                        This has not been tested in the courts, and I suspect it will come to a head in the next few years. For me, the question of who owns copyright depends on how much work the author has done to the AI output (there are some parallels to the Monkey Selfie case [2]). For example, Stephen Marche (the author of the New Yorker post from OP) also wrote a short story [1] using Sudowrite, in which he claims that 17.1% of the final words were from the AI. But of course, he has done non-trivial amounts of work to edit and incorporate those words and made them his own. So in this case, he definitely deserves ownership. All of our Sudowrite authors use the AI similarly in their process.

                                                                                                                                        However, what if an author uses vanilla GPT-3, with no human in the loop, to generate hundreds of biographies of famous people based on Wikipedia entries? I don't have a good answer to this. It may come down to how much the author has done to guide GPT-3 and/or pre- or post-process its output.

                                                                                                                                        Here’s further reading:

                                                                                                                                        http://rutgerslawreview.com/wp-content/uploads/2017/07/Rober...

                                                                                                                                        https://core.ac.uk/download/pdf/77599763.pdf

                                                                                                                                        [1] https://lareviewofbooks.org/article/the-thing-on-the-phone/

                                                                                                                                        [2] https://www.npr.org/sections/thetwo-way/2017/09/12/550417823...

                                                                                                                                        • libertine 5 days ago
                                                                                                                                          Thank you for the insightful reply! Will definitely read more into it!
                                                                                                                                    • vumgl 5 days ago
                                                                                                                                      Computer-played chess or go games are already more interesting than the human-played ones. If you could get computer-written novels indistinguishable from the best human-written novels, or even better ones, would you read them? There would be a million of them generated every day, all better than Harry Potter or, depending on your taste, War and Peace.
                                                                                                                                      • causality0 5 days ago
                                                                                                                                        The better question is, would you read anything else? What happens when we can just dial up the story we want? What happens when these models combine the skillfulness of a human writer with the command of facts, theories, behaviors, and lore of computer records?

                                                                                                                                        Imagine. You could feed it all of Tolkien's writings and tell it to give you an epic saga of novels set in the First Age of Middle Earth. You could tweak a parameter and instantly get a version of your favorite novel that proceeds with a different main character or a different choice and have it be indistinguishable from a version written by the original author and one hundred percent consistent with events and previous plots.

                                                                                                                                        I'm not too ashamed to say that if memory-erasure technology existed I'd be sorely tempted to experience my favorite media for the first time over and over. Creative AI is the looming spectre of that danger: as it becomes more and more skilled at giving us the best, we will grow intolerant of anything less.

                                                                                                                                        • >> Imagine. You could feed it all of Tolkien's writings and tell it to give you an epic saga of novels set in the First Age of Middle Earth. You could tweak a parameter and instantly get a version of your favorite novel that proceeds with a different main character or a different choice and have it be indistinguishable from a version written by the original author and one hundred percent consistent with events and previous plots.

                                                                                                                                          Arguably, you are describing about 80% of the Fantasy genre, only it's all written by humans.

                                                                                                                                          Sorry if that sounded like a "shallow dismissal" as per the HN guidelines. I used to love F/SF and then I found I couldn't read it anymore because it all felt like the same few story elements were reused over and Over and OVER again. I get the same feeling of dread boredom when I browse the Fantasy aisle at bookshops today. <evil power> is rising in the <cardinal direction>. <Hero> must <undertake heroic quest> to defeat <evil power>. It all got so formulaic you could write a Cluedo variant based on it: The Orc Captain with the Great Axe in the Dark Tower.

                                                                                                                                          This, btw, was all before I (very belatedly, because I'm not a native English speaker and so I read only what was translated into Greek) discovered Terry Pratchett, Jeff Noon and Iain M. Banks, who were a breath of fresh air (unfortunately rarely refreshed since then). And yeah, as a kid, I read a lot of F/SF. Explains my school grades, hah.

                                                                                                                                          • Digit-Al 5 days ago
                                                                                                                                            Don't you think there's a danger you'd end up with stories that looked like JRR Tolkien as written by George RR Martin?
                                                                                                                                        • xkeysc0re 5 days ago
                                                                                                                                          Chess and go are both constrained games with no hidden information. Literature is anything but.
                                                                                                                                        • Animats 5 days ago
                                                                                                                                          GPT-3 is already better than most blog comments, many bloggers, and most online text porn. Maybe most ad copy. Many pundits. Some college professors.

                                                                                                                                          It has no clue what it's talking about, of course. It's terrible at fact-based material, because what comes out looks plausible, but contains made-up facts. Some facts will be correct, because they were mentioned a few times in the training database. Others are just plausible words strung together.

                                                                                                                                          What it really demonstrates is that much of literature has less behind it than was previously supposed.

                                                                                                                                          • malloryerik 5 days ago
                                                                                                                                            > What it really demonstrates is that much of literature has less behind it than was previously supposed.

                                                                                                                                            I'm uncomfortable with this statement because GPT-3 is in a sense copying or sourcing a massive corpus of human literature to use as the paint on its brush. If it can make a sentence that sounds Shakespearean after performing a statistical analysis of all sentences ever written by Shakespeare and using fancy averages and weightings of those sentences, how does that diminish the value of Shakespeare? In a wider sense, isn't that what's happening, but instead of using every sentence written by Shakespeare the goal is to use every sentence ever written?

                                                                                                                                            That said, how GPT-3 and its descendants affect the market value of "literature" is... sorry I can't resist... a story yet to be written.

                                                                                                                                            • bpodgursky 5 days ago
                                                                                                                                              I have a friend who uses GPT-3 to generate startup marketing copy and customer-facing emails. He just prompts with the company description and "The email which is most effective at getting customers to sign up for <startup> is:"

                                                                                                                                              It reads as good as or better than what a human would do. Maybe not better than the BEST human marketer would do, but far better than average, and instant + free(ish).
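
                                                                                                                                              A minimal sketch of the prompting pattern described above. The company name, description, and the commented-out API call are illustrative assumptions, not the friend's actual setup:

```python
def build_marketing_prompt(company: str, description: str) -> str:
    """Prepend the company description, then ask the model to continue
    with the email itself, exactly as the comment describes."""
    return (
        f"{description}\n\n"
        f"The email which is most effective at getting customers "
        f"to sign up for {company} is:"
    )

# Hypothetical startup used purely for illustration.
prompt = build_marketing_prompt(
    "Acme Analytics",
    "Acme Analytics turns raw logs into dashboards in minutes.",
)

# With OpenAI API access (circa the GPT-3 beta), the completion call
# would look roughly like:
#
#   import openai
#   response = openai.Completion.create(
#       engine="davinci", prompt=prompt, max_tokens=300
#   )
#   email = response["choices"][0]["text"]

print(prompt)
```

                                                                                                                                              The trick is that the prompt frames the email as something that already exists, so the model's most likely continuation is the email itself.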

                                                                                                                                              • I expect GPT-3 to be amazing for people who want to write stories or articles but are really terrible at one specific aspect of the writing process.
                                                                                                                                                • sbierwagen 5 days ago
                                                                                                                                                  https://slideplayer.com/slide/6081467/18/images/12/The+scale...

                                                                                                                                                  It depends on if you think writing ability is more like the top graph, or the bottom graph. If it's like the top graph, humans have plenty of time to adopt AI tools, and incorporate them into the writing process.

                                                                                                                                                  If it's like the bottom graph, then GPT-4 comes out in 12 months and is human equivalent, and GPT-5 comes out a year after that and is a better writer than any human could hope to be.

                                                                                                                                                  • Animats 4 days ago
                                                                                                                                                    Progress so far is that GPT-1 could produce a coherent sentence but stopped making sense after a few lines. GPT-2 could produce a coherent paragraph, but several paragraphs strung together made no sense. GPT-3 is coherent for about a page or screen, but not a chapter.

                                                                                                                                                    A page is good enough for the clickbait industry.

                                                                                                                                              • jgalt212 5 days ago
                                                                                                                                                > In July of 2019, Microsoft invested a billion dollars, which allowed OpenAI to create a supercomputer with two hundred and eighty-five thousand C.P.U. cores, ten thousand G.P.U.s, and four hundred gigabits per second of network connectivity per server.

                                                                                                                                                > Gupta imagines the product turning into a resource that writers will pay fifteen to twenty dollars per month to use.

                                                                                                                                                That seems like a great deal to sublet some portion of all those resources.

                                                                                                                                                • TheDudeMan 5 days ago
                                                                                                                                                  "teraflops per second"
                                                                                                                                                  • Clewza313 5 days ago
                                                                                                                                                    Teraflops per second is the speed at which computing speeds up. Eyeballing Wikipedia's list of fastest supercomputers, between 2018 and 2020 we clocked around 100 petaflops per year, which comes to around 0.00317 teraflops per second.

                                                                                                                                                    For homework, you can calculate teraflops per second per second, which would measure the acceleration of teraflops.
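
                                                                                                                                                    For the skeptical, the tongue-in-cheek arithmetic above checks out (the 100-petaflops-per-year figure is the parent comment's eyeball estimate, not an official number):

```python
# 100 petaflops of supercomputer gains per year, expressed as
# "teraflops per second" -- i.e. a rate of change of computing speed.
PETAFLOPS_PER_YEAR = 100
TERAFLOPS_PER_PETAFLOP = 1_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 seconds

teraflops_per_second = (
    PETAFLOPS_PER_YEAR * TERAFLOPS_PER_PETAFLOP / SECONDS_PER_YEAR
)
print(round(teraflops_per_second, 5))  # → 0.00317
```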

                                                                                                                                                  • Does anybody know what the author means when he asserts that "nearly all writers he knows don't like writing"?

                                                                                                                                                    Seems contradictory to everything I've heard about the iteration loop between practice, play, and discovery that characterizes mastery in any craft, be it programming, writing, music, sports, and so on.

                                                                                                                                                    • cjohansson 5 days ago
                                                                                                                                                      The AI-generated paragraph made no sense to me and did not sound like the first. AI is like a narcissist: every other day people say it's useless, and then the next it's the savior of the world.
                                                                                                                                                      • notum 5 days ago
                                                                                                                                                        I'll admit, I just scrolled to the bottom to see if the article ends with "This piece has been written by GPT-3".

                                                                                                                                                        It doesn't.

                                                                                                                                                        • bulldog13 5 days ago
                                                                                                                                                          What is the current state of the art that a user can actually access? For example, GPT-3 requires OpenAI beta access.

                                                                                                                                                          Is it still GPT-2 ?

                                                                                                                                                          • ta_ca 5 days ago
                                                                                                                                                            Until a few minutes ago I was a skeptic. Now that I think about it... this GPT business is not only AI, it might be a way to AGI. Why? It mimics us perfectly in that each of us can fill books about the things we have zero understanding of.
                                                                                                                                                            • nickpinkston 5 days ago
                                                                                                                                                              Maybe one day computers will even learn to write like the New Yorker... starting every article with an initial page of irrelevant artful content and then moving to an overly flowery essay lacking actual insights into the topic at hand.
                                                                                                                                                              • dang 5 days ago
                                                                                                                                                                "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

                                                                                                                                                                https://news.ycombinator.com/newsguidelines.html

                                                                                                                                                                • nickpinkston 5 days ago
                                                                                                                                                                  I wouldn't say it's shallow. I think the New Yorker's editorial board / style guide does a disservice in explaining science / engineering concepts. I just delivered my critique via sarcasm.
                                                                                                                                                                  • dang 5 days ago
                                                                                                                                                                    Since it didn't go into anything specific or interesting, I'm afraid it counts as shallow—especially because it's also a cliché response to NYer articles.
                                                                                                                                                                    • nickpinkston 4 days ago
                                                                                                                                                                      Haha - cliché or common knowledge?
                                                                                                                                                                      • dang 4 days ago
                                                                                                                                                                        From an HN moderation point of view I think those probably fall into the same bucket.
                                                                                                                                                              • witherk 5 days ago
                                                                                                                                                                sheeeeeeesh