Collaborate to Ride the Generative AI Wave
- Charles Harris
- Feb 3, 2023
- 9 min read

I have been smiling recently as I’ve read about the explosion of interest in generative artificial intelligence (AI) fueled by the release of OpenAI’s amazing ChatGPT chatbot and its DALL-E 2 image generator. After years of research and experimentation, the genie is out of the bottle. Content creation will never be the same again.
I’m not surprised. I have been writing about AI image generation since 2019. In my Eva Johnson thrillers about politics, technology and social change, the protagonist (Eva Johnson, of course) is a Carnegie Mellon graduate and digital artist who co-founded a software company that identifies and creates deepfakes. She also created her own AI image generator to collaborate with her on her digital art.
Rather than fearing generative AI as a threat (as so many people are doing today), Eva created her AI as a partner who could help her craft exciting, creative digital art that she would never be able to produce by herself.
Because Eva understood that AIs are only as good as the information they digest (more on that later), she focused on training her AI and inventing methods of adjusting its input data to influence its output. Rather than trying to eliminate bias in the learned data, she sought to understand and control its effects to achieve the artistic statement she wanted a piece of art to make.
My favorite comment from readers is that my books are prescient. Much of that comes from writing about contemporary themes and extrapolating the facts just enough to lead the reader into events and science that are not yet true but still seem believable. Technology is a good example.
Much of the technology I write about is real, and if I have done my job well all of it seems like it might be real. I like to leave my readers wondering how much of the technology in my books is real and how much is fiction. The new tools from OpenAI make a few more things in my novels real, but Eva still has more AI tools in her fictional toolbox. If you want to learn more about how Eva works with her AI (and also enjoy a fast-paced contemporary thriller), check out Intentional Consequences, Revenge Matters or Virtual Control at www.charlesharrisbooks.com or at Amazon at https://www.amazon.com/stores/Charles-Harris/author/B07W81957J.
Eva’s been collaborating with her AI for four years now, and she’s learned a lot that she’d be happy to share with readers who are trying to assess the impact of generative AI tools like ChatGPT and DALL-E. If you were to ask for her advice, I think she would offer the following thoughts.
Ride the Wave or Get Crushed by It
The printing press put the scribes out of business but enabled publishers and a world full of readers. Search engines and the internet expanded the speed, breadth and dissemination of information. In one lifetime, access to knowledge moved from card catalogs and libraries to Google search and document downloads. Typewriters and carbon paper gave way to personal computers and productivity software. With every major advance, the new tools destroyed jobs and entire industries. But they also created opportunities for the people willing to embrace them.
For millennia, humans have embraced the tools of the age. It’s time to do it again.
Put less politely, buy in or get out of the way. Ride the generative AI wave or get crushed by it. Be part of this powerful new technology or be left by the wayside.
From a historical perspective, it’s sound advice. Better to be a book publisher than an unemployed scribe. Better to learn how to do a Google search than to rummage through a dusty card catalog or dig facts up in an old encyclopedia. Better to learn how to extend and expand your creativity and communications skills with generative AI than to pretend you can fight the tide.
But the threats posed by generative AI are not the point. Generative AI offers some powerful benefits. Whatever their negatives, these tools can enhance and extend the creative process and they can do it far faster than humans can. They can find and extract data, concepts, designs and ideas from a broader, deeper database. They can move those creative ideas forward more rapidly and they can create and iterate endlessly. They can improve brainstorming, feedback and communication. They can enable non-creatives to use images in their thought processes. They can also enhance and expedite research. The list of benefits is long and growing.
Collaboration is the Key
The question is: How do you ride the wave?
The answer is easy: You collaborate with your AI to make you and your AI even more effective.
Collaboration benefits both sides. You need your AI to find and process the world’s images and words at warp speed and stimulate your creativity and communications skills. Your AI needs you (for now, at least) to teach it how to learn faster and better and apply that learning to human needs.
Rather than fearing this new phase of AI, Eva would say you need to make generative AI your partner. That means training your AI on the information it needs to be responsive. It means discovering how to phrase queries and follow-up questions or instructions. It means being a bridge that adds humanity to a human-machine partnership. It also means learning to trust the output you receive from your AI partner.
Two factors are especially important in creating a successful human-machine collaboration: the quality of the information the AI uses to learn and the credibility of the output the human partner receives. Humans have special roles to play in understanding and optimizing both.
Data Input Colors AI Output
Because generative AI tools only “know” what they have learned, they will inherit the limitations, prejudices and biases of their knowledge sources. Like the well frog in the old Chinese parable that I mentioned in my third novel Virtual Control, AIs live only in the world that they are taught to know.
Even if the AI has all the information in the world, critics point out that the risk of systemic bias exists because the “system” created by humans is inherently biased. If the AI has been trained with less information (a likely situation for some time), the risk of accidental or intentional bias increases.
These realities pose difficult questions. Who will be responsible for assessing and erasing these biases? How will we prevent intentionally biased AI training data from producing outcomes its manipulators desire but the rest of us would find harmful? More broadly, who will decide what AI-generated images, topics and speech are unacceptable and why? As more and more content is generated by AIs, will human-derived speech be tested against linguistic, topical, political and cultural “filters” that the AIs have determined to be acceptable? Can corporate ownership and use of these AI censor machines avoid First Amendment protections in ways the government could not do itself? How will all this affect democracy and freedom?
The moral and objectivity hazards become more complicated as people debate whether the AI algorithms should be created by humans or by the machines themselves (which, of course, learned from digesting data, events and algorithms created by humans). Similar debate ensues over whether AIs should be taught through supervised learning, unsupervised learning or some combination of both. At the risk of oversimplifying, in supervised learning humans tag images fed into the AI—for example, labeling a picture of a cat as a “cat.” In unsupervised learning, the AI devours huge quantities of data and makes its own decisions about how what it sees should be categorized and labeled.
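The distinction between the two training regimes can be sketched in a few lines of toy Python. This is a deliberately miniature illustration, not how production systems are built: the “supervised” function classifies a new example using human-supplied labels, while the “unsupervised” function groups the same raw numbers into two clusters without ever being told what they are.

```python
# Toy illustration of supervised vs. unsupervised learning on 1-D data.

def nearest_label(point, labeled_data):
    """Supervised: classify a new point by its closest human-labeled example."""
    closest = min(labeled_data, key=lambda item: abs(item[0] - point))
    return closest[1]

def two_means(points, iterations=10):
    """Unsupervised: split points into two clusters with no labels at all."""
    a, b = min(points), max(points)            # initial cluster centers
    for _ in range(iterations):
        group_a = [p for p in points if abs(p - a) <= abs(p - b)]
        group_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(group_a) / len(group_a)        # recenter each cluster
        b = sum(group_b) / len(group_b)
    return sorted(group_a), sorted(group_b)

# Supervised: the "cat"/"dog" tags are supplied by a human.
labeled = [(1.0, "cat"), (1.2, "cat"), (8.0, "dog"), (8.5, "dog")]
print(nearest_label(1.1, labeled))             # -> cat

# Unsupervised: the same numbers, but no tags; the algorithm discovers
# the two groups itself -- and never gives them names.
print(two_means([1.0, 1.2, 8.0, 8.5]))         # -> ([1.0, 1.2], [8.0, 8.5])
```

Note that the unsupervised version recovers the same grouping but cannot tell you that one cluster means “cat” and the other “dog”; attaching meaning still requires a human.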
Part of the magic that Eva brings to her AI is the ability to consider different learning databases (designed by Eva) that alter the images the AI uses to create its output. Eva’s successful use of this intentional manipulation of the input data demonstrates both the positive iterative power of generative AI (which allows her to create new art, avatars and other output much more rapidly) and the negative bias that undisclosed human intervention in the data inputs could intentionally introduce.
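Eva’s trick of swapping learning databases can be demonstrated in miniature: train the same simple model on two differently curated corpora and its output shifts accordingly. The tiny bigram generator below (the corpora and names are invented for illustration, and it deterministically picks the most frequent continuation) makes the point that input data colors output:

```python
from collections import Counter, defaultdict

def train(corpus):
    """Build a bigram table: for each word, count the words that follow it."""
    table = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        table[current][nxt] += 1
    return table

def generate(table, start, length=4):
    """Continue from `start`, always choosing the most common next word."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# The same model, two curated inputs, two different "worldviews".
bright = train("the sea is calm the sea is bright the sky is bright")
dark   = train("the sea is grim the sea is dark the night is dark")

print(generate(bright, "the"))   # -> the sea is bright the
print(generate(dark, "the"))     # -> the sea is dark the
```

Nothing about the model changed between the two runs; only the data did. That is exactly the lever Eva pulls—and exactly the lever a bad actor could pull without disclosing it.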
Output Credibility Affects Human Trust
Effective collaboration cannot exist without mutual trust. Collaboration requires developing trust with your AI, just as you would with any other important colleague.
Although the idea of trusting a machine—and having the machine trust you—may seem artificial, we already do it on a regular basis. We trust the calculations our computers make. We trust our airplanes to fly us safely. We trust Google Maps to get us where we want to go. “Oh,” you say, “but humans make those machines accurate and safe. The machines don’t do that themselves.” That’s largely true. But if it’s not too much of a stretch, our machines “trust” us to design and engineer them correctly and keep them running. Even there, industrial AI is taking over more of the monitoring and advising and creating the algorithms to do it.
In human-to-human relationships, vulnerability is an important factor in building trust. Human colleagues build trust by being transparent about their weaknesses and sharing experiences that show they are open to continuous improvement. These same concepts apply to building trust in human-machine relationships.
Our AIs need to be transparent about the bias in the algorithms they develop from the data they digest. Their human partners need to admit their own bias. Machines need to learn to integrate human adjustments to the AI learning process and content generation. Humans need to be candid about their unease in relying on machines to help them create. While we may not yet be able to have these discussions directly with our AI partner, we can be more effective collaborators by designing and using generative AI tools that facilitate transparency about systemic, accidental and intentional bias and the strengths and weaknesses contributed to the partnership by human and machine.
Why Are We So Sensitive about Generative AI?
Generative AI is different from the industrial AI technology that improves things like logistics, manufacturing and airline scheduling. Generative AI strikes at the heart of our humanity. For the first time, we are facing machines that can compose creative content in images and words — a uniquely human talent (or so we thought) that has enabled our species to communicate and collaborate across the eons.
Why is sharing that talent with a machine so threatening and uncomfortable for many of us? It doesn’t just threaten our jobs, it goes to the heart of how we think of ourselves as humans.
Israeli author and historian Yuval Noah Harari says large-scale flexible cooperation has made our species masters of the world, but it has also made us dependent for our very survival on working together. Harari believes our unique trait as humans is “our ability to create and believe fiction.” As he explains, “All other animals use their communication system to describe reality. We use our communication system to create new realities.” By helping us create fiction, generative AI will enhance that communication and collaboration on a scale deeper, broader and faster than ever before.
Where We Go from Here
Fun as they are to try out, DALL-E 2 and ChatGPT are still very much works in progress. They have plenty of limitations and weaknesses, including some that will be challenging to cure. But they are already doing things we have never seen computers do before.
These early tools are the beginning of a tidal wave of change in how we consciously and subconsciously find, assess, sort and apply visual and textual information and create designs, images and stories from it. The New York Times predicts generative AI has the power to reinvent everything from online search engines like Google to photo and graphics editors like Photoshop to digital assistants like Alexa and Siri. It will revolutionize the human-machine interface, letting people talk with computers and other devices as if they were talking with another person. Add in voice recognition (another technology that has experienced dramatic breakthroughs) and we will finally be able to chat with HAL, hopefully with more cooperative results.
If you have any doubts that generative AI has reached an inflection point, check out what Microsoft is doing. After investing billions in OpenAI and becoming its preferred partner for commercializing new AI technologies, Microsoft is adding OpenAI’s tools to its Office suite and making them available through its Azure cloud-computing platform. Its goal is to use these AI capabilities to “completely transform every Microsoft product.” And it recently announced plans to invest even more in OpenAI.
Fully evolved, generative AI will be as disruptive and empowering for our society as social media, the internet and the smartphone have been. Google has reportedly declared a “code red” as it scrambles to decide how generative AI will change the competitive landscape for its dominant search business. Although it will enhance and facilitate the metaverse, generative AI will have far broader impact, particularly as it is integrated with voice recognition.
It’s been almost 16 years since Apple introduced the iPhone in 2007. As remarkable as the iPhone seemed then, it has advanced light years since. Flash forward 16 years to 2039, and you will look back with the same degree of amazement at how basic the 2023 “breakthroughs” in generative AI were.
Follow Eva’s lead. Collaborate and ride the wave.