
Why Vauhini Vara Used ChatGPT to Write a Book About Big Tech and Herself

Vauhini Vara is a writer of everything—short stories, novels, journalism, texts and emails, Amazon reviews, and now, Searches, a book-length work of inventive nonfiction exploring the offerings and exploitations of large technology companies, and the undeniable hold they have on our lives. 

Vara is not new to the world of technology. Having grown up in Seattle in the 1990s and then attended Stanford University, she has always found herself surrounded by the industry. She started her career as a technology reporter, first at the Wall Street Journal and then at the New Yorker. Her debut novel, The Immortal King Rao, published in 2022, was named a finalist for the Pulitzer Prize in Fiction. It tells the story of a man born to a Dalit coconut-farming family who goes on to become the CEO of the world’s largest technology company, one that eventually takes on the job of global governance.

Though the story of King Rao unfolds in a dystopian future, Searches concerns itself just as much with the past and present. In 2021, Vara was given early access to GPT-3, the OpenAI language model that preceded ChatGPT, and used it to help her write a story about the death of her sister Deepa. Published first in The Believer, “Ghosts” became arguably the first viral AI-assisted essay. Now, Vara has taken the experimentalism of that essay and applied it to a whole book. Like “Ghosts,” Searches includes moments of collaboration with AI, but it never sets the critical analysis aside.

I met with Vara over Zoom to discuss AI and its impact on our culture. 


Anu Khosla: The first thing one notices about this work is that it’s not strictly your own words that make up the book. How, if at all, did this project shift your ideas around authorship as you were writing it?

Vauhini Vara: The book started out as basically the even-numbered chapters, just those more experimental chapters that look at the language we use with technology products. I’ve had two editors on this project. The first was Lisa Lucas, and then she left Pantheon, so my new editor is Denise Oswald. Lisa said something just offhandedly: “I wonder what would happen if you shared chapters of your book with ChatGPT.” My immediate reaction was one of horror and disgust. I was like, “Absolutely not.” But then I thought about the whole project of the book, which in a way is pushing the boundaries of my own complicity in the power of these technology platforms. I ultimately started to think that sharing my work with ChatGPT and letting it engage with that work might be the most extreme possible manifestation of that complicity, which is what got me interested in doing it in the first place.

I have been thinking a lot about authorship, to answer your question more directly. You’re probably referring largely to the use of ChatGPT and other AI products to generate text in the book. But there’s also that last chapter, where the text is not made up of my words, but it’s also not ChatGPT; it’s other human beings, which was very intentional on my part. Maybe the most facile way of describing authorship is to say that it represents one individual human perspective. But then, I’ve published oral histories, for example. When I publish those, it’s my name that’s on the oral history as its author, and yet all the words I’m publishing are the words of other people. I wanted to complicate that binary understanding of authorship that comes up often in discussions of AI: either something was written by one human being and sprang out of their brain with no other influence, or it was text produced by AI, this disembodied technology that’s owned by big technology companies and represents their interests. I think in some ways it’s more interesting to acknowledge the ways in which authorship is always communal. The distinction that can be made between human authorship and AI authorship is less about questions of influence on an individual author and the primacy of one individual author than about the difference between humans and corporate-owned machines, essentially.

AK: You know a lot more about these tools than most people do. Knowing what you know, why do you think you’re drawn to them when so many writers are repelled by them?


VV: I would like to be able to claim some interesting moral authority that propels my attraction to these tools. In any nonfiction, in my opinion, the I-character is a construct in some ways. In the context of this book, I’m using a version of myself that is in some ways meant to be a character, even though it does represent a version of myself. It also feels to me that what makes any literature interesting, including nonfiction, is an interest in the main character’s own agency. Often when we talk about big technology companies and their products, we talk about it in this binary way, where we, the users, will often say: “These companies are trying to exploit us. That’s their goal. That’s all they do. That’s what they’re in the business of.” And then these companies, defending themselves, will say: “We make these products and offer them to you to use, and we would not be at all successful if nobody wanted to use them. The reason we are successful is that we’re clearly offering something people want.”

In the context of this book, my goal was to place myself as a character right at the intersection of those two arguments and to explore the tension between them, because the truth is that both of those positions are accurate. There’s something true in both of them. I wanted, in some ways, to use myself to show the ways in which that’s true, because so often writing about technology is either the one or the other. As a reporter covering technology, I often find it hard to find people to talk to who represent that tension and nuance. You find either the hardcore critics or the hardcore boosters of these companies and their products, and I was interested in the middle ground. So I put myself there to be the middle ground.

That’s an intellectualized version of the answer, though. As I write in the book, I was in middle school in the mid ’90s, when the Internet started to proliferate, so it has always been a part of my life. I do count myself as somebody who finds value in these companies’ products. Since 2005, I have had the choice to turn off Google’s tracking of my search history, and I never have, because on some level I find it really interesting to have that record of everything I’ve searched for since 2005. I opt to have Amazon track my order history and search history because that makes it easier to find what I’m looking for the next time I want to order something. The personal answer is that it’s not just that I’m representing myself, for literary purposes, as somebody who sits at that nexus between what big tech gets out of these products and what we get out of them. It’s also that I believe I do get some value out of these products, and that’s why I use them.

AK: As I was reading, I was struck by something I don’t believe I’d ever realized before. Those of us who have been exposed to tech for a long time are very used to hearing the terminology of “my product is democratizing this industry.” One of the ideas you present really powerfully in the book is that the main impact of many of these companies has actually been to consolidate power. Can you talk about the relationship between the rhetoric of democratization and the actual outcome of consolidated power?

VV: On the face of it, access to technologies like ChatGPT or Google or Instagram would suggest a democratizing function, because they’re readily available to us, and they do offer us something that makes our lives marginally better or more connected or easier, by certain definitions of those terms. The problem is that while these products are doing that, they’re built in such a way that much greater amounts of power and wealth accrue to the people who control them.

Oftentimes, when people who run or invest in these companies use terms like “democratization” to describe what the companies are doing, my sense is that they’re doing it in good faith. They really believe these products are democratizing society. It also happens that the term has a rhetorical value that serves their interests, and I think that’s why it has become so prevalent. I don’t think there’s a cabal sitting in a back room saying, “Let’s use the term ‘democratization.’ That’s how we can get everybody hooked on our product so that we can exploit them further.” I think there’s a genuine belief in the power of these technologies to democratize.

There’s also something useful and interesting about the idea of democratization being embedded in technology, because it is possible for there to be technologies that truly democratize access without accruing power and wealth to the people who are already powerful and wealthy. I talk about it in the book, but Wikipedia is an interesting example. Obviously, the people who contribute to Wikipedia are disproportionately anglophone and male and white, but at the same time, Wikipedia isn’t a for-profit company. Anybody can edit it; anybody can access it. There’s no exploitation of users’ personal information when we use it. Those kinds of models are really interesting to think about, because oftentimes when we think about technology, we’re thinking about corporate-owned technologies in which the value they give us is bound up in the value that accrues to their owners. There are all these other counter-models that are interesting to think about.

AK: Reading the ChatGPT sections in this book, it’s really clear to me that the AI is an optimist. You talk a lot in the book about being an optimist yourself. How did it feel to observe in this technology an attribute you recognize in yourself?

VV: It’s true that I consider myself an optimistic person, and I characterized myself that way in the book, and yet ChatGPT’s version of optimism really grated on me. The way I would define my optimism is, I would hope, as a clear-eyed optimism that recognizes the real problems that exist now but is hopeful that there is a different future we can imagine and get to. The optimism reflected by ChatGPT is less about that. It’s more about characterizing the way things are now as perfectly fine: there are problems, but it’s not that big of a deal. Or, there are problems, but look on the bright side! That’s the optimism that doesn’t resonate with me. I’m not a technologist, but my understanding from reporting on AI is that this optimism is not inherent to large language models or AI in general. Rather, it was designed into these tools as they became productized, because the companies behind them wanted these products to be good little chatbots. In the same way, when I had customer service jobs, my bosses told me to keep a positive attitude and spin things in an optimistic way. ChatGPT is essentially getting the same instructions.

AK: I’ve been seeing examples online of people asking AI products questions and receiving truly horrible advice: to, say, put batteries on their sandwich, or whatever. The AI doesn’t seem to know, at least yet, whether we’re asking it to tell us facts or to help us write a story. It makes me think a lot about the concept of genre. For you, especially as someone who writes in both genres, do you have a strong emotional reaction to the distinction between fiction and nonfiction?

VV: I’m thinking of the investor-presentation section of the book, the one where I’m using AI-generated images to make an imagined pitch deck. It’s part of a nonfiction book, and yet I’m obviously not actually making a presentation to investors. So I think what’s important is the social contract between the writer and the reader. That’s what matters to me about the distinction between nonfiction and fiction, which is why, for me, it matters to define nonfiction as almost being a promise the author makes to the reader that this is a representation of reality. The connection for me between that and text made by large language models in particular is that the social contract assumes both parties are human beings. There’s a human author of a book and there’s a specific audience, whether it’s an actual explicit audience (the way I’m talking to you) or an implicit audience (the people who are going to pick up my book, though I don’t know who they are). There’s a relationship between actual human beings there, and that’s how communication and language have functioned since the beginning of communication and language. So what’s disruptive about large language models, and I don’t mean that in a positive sense, is that they neither have an individual perspective nor a model for understanding who they are talking to, right? There’s no understanding of themselves as having a perspective. This gets talked about less, but there’s also no understanding of who is being addressed. I will, to give credit where it’s due, mention the well-known paper “On the Dangers of Stochastic Parrots,” which I quote in the book. The authors of that paper are technologists, not writers, but they almost get into literary theory, because they talk about the essential breakdown that takes place when we use ChatGPT and feel like we’re communicating: we assume our interlocutor has an understanding of human communication like ours, but that’s not true of ChatGPT, and that’s the problem.

AK: Do you think that ChatGPT would make a good literary critic?

VV: My answer to that question, in reference to ChatGPT as a product specifically, is absolutely not. The reason I say that is that what I love about criticism, when it’s good, is the precision of the point of view of the person offering it. And ChatGPT, for all the things it does really well, is functionally not capable of representing an individual perspective. The other reason is that when I love criticism, when most people really like criticism, it’s because of its originality. It’s because we’re reading something we’ve never heard framed quite that way. ChatGPT, again, by virtue of the way it’s designed, is functionally an anti-originality technology, in that it’s always interested in the statistically probable perspective rather than the surprising or original one.

AK: There’s a viral tweet from a couple of years ago that I think about a lot: “can we get some a.i. to pick plastic out of the ocean or do all the robots need to be screenwriters?” I still haven’t seen a real answer to the question, so I’ll ask you. Why is it so important to these companies that AI be able to create art, as opposed to just solving those more technical problems?

VV: With my journalistic hat on, I have to say I don’t know how the companies would answer that question, but I find it a super interesting one. I write in the book about a conversation with somebody who works at OpenAI. I wanted to ask him about how companies like OpenAI are talking to people like university professors and filmmakers and photographers and writers. The fact that they’re doing that outreach so aggressively makes me wonder if it’s less about an interest on their part in having these products used for creative purposes than about an understanding of the cultural capital that creative people have. Creative people have been some of the most vocal critics of these technologies, and a lot of creative people also happen to have a lot of cultural capital. They have followers on social media, they make movies, they write books. My sense is that a creative use like writing a novel with AI just isn’t that big of a market opportunity for these companies, so on commercial grounds I can’t imagine why it would matter to them to make the case for it. I think the reason it’s useful for them to make that case is the role it plays in the cultural conversation about AI in our lives.

This interview was originally published by Electric Literature.
