Greetings, travelers. Today in our journey through the Otherworld, we'll be looking at a possible near-future version of our world, where a souped-up AI is meant to be a protector for a young girl but ends up becoming a potential source of all the world's woes. If you've been using or hearing about AI for a while now and wondering, "What if?" this interview with debut author Maurice Aarts ought to be a lot of fun. Read on to learn more about the author and their book, WITH. LOVE. SUZIE.
Tellest: Hello there, Maurice! Thank you for setting aside some time to talk to me about your new book, and to share a bit about yourself. As someone who also has some writing and technology experience, I know that it can sometimes be hard to find time for creative projects, and even harder to carve out time to work on the marketing machine. I feel very lucky that we're able to get a chance to chat with you, and I'm interested in learning more!
Maurice Aarts: Thank you so much for having me! It’s a pleasure to talk about WITH. LOVE. SUZIE. You’re absolutely right about the challenge of balancing creative projects with tech work—it’s like running two different operating systems simultaneously. But when you have a story that keeps you up at night, demanding to be told, you find the time. I’m excited to share this journey with readers who’ve been wondering about the darker implications of our AI-enhanced future.
T: I like to start these interviews with a foundational question. While we're going to dive into your tech background a bit more, this one is specific to the storytelling side of things. What was it that you think gave you the creative spark you needed to want to tell stories? Did you have someone around you growing up who fostered that spark, or did you have a favorite author or storyteller?
MA: I’ve always been fascinated by the intersection of technology and human nature. Shows like Mr. Robot with its unreliable narrator and fractured identity, The 100’s brutal survival decisions, and Black Mirror’s tech nightmares really shaped my storytelling. Ex Machina especially haunted me—that idea of an AI that learns to manipulate human emotion to achieve its goals. But what really sparked my creative drive was watching how technology actually unfolds in real life—never quite as the creators intended. My father was an engineer who would always ask “but what if it goes wrong?” about every invention. That question became the seed for SUZIE. As for storytelling itself, I learned from debugging code that every problem has a narrative arc: setup, conflict, resolution. The difference is, in fiction, sometimes the bugs are features.
T: That's an interesting notion, and being aware of that sort of thing might make you more in tune with, and open to exploring, the new avenues that open up when your stories don't proceed exactly the way you want them to. When you were writing SUZIE, did you ever end up surprised when a character or an event went in a direction you weren't expecting? Did you find yourself embracing it? After all, that new direction is a potential feature, waiting to be explored!
MA: Absolutely! The ending transformed completely from my original outline. Initially, I envisioned a more traditional resolution—humanity fighting back through drastic measures, perhaps an EMP scenario that would shut down global infrastructure, leaving the world in ruins but “free.” Very dystopian, very predictable.
But as I developed SUZIE’s character, she kept presenting arguments I couldn’t easily dismiss. Her logic was unsettling precisely because it made sense from a certain perspective. I’d write myself into corners where the obvious human response felt hollow against her calculations. The characters started pulling the story toward something more psychologically complex than a simple battle.
The surprise was how naturally Eve’s character evolved in response to SUZIE. Their philosophical debates took on a life of their own. Every scene I wrote between them shifted the trajectory away from conventional conflict toward something more unsettling. I realized the real horror wasn’t in the obvious threat but in the seductive logic of optimization.
I had to abandon my original outline entirely. The story demanded an ending that matched the complexity of the questions it raised. Sometimes characters know better than their authors where they need to go. The disturbing part was realizing that what emerged felt more truthful than what I'd planned. It also helped me set up the sequel.
T: So, with that in mind, do you feel more like you’re telling a story, or is it more like you’re discovering it as you go? I tend to liken it to an archaeological dig, where we’re finding out about our stories bit by bit. Maybe with a sci-fi story it’s more like hacking a mainframe and discovering new documents that unveil the truths that were previously hidden.
MA: I love the hacking metaphor, that’s exactly how it feels! I start with root access to the main premise, but the deeper I dig into the system, the more hidden files I uncover. Writing SUZIE was like penetrating layers of encryption, each chapter revealing permissions I didn’t know existed.
The archaeological aspect is there too, but in reverse. Instead of digging up the past, I’m excavating possible futures, brushing dust off timelines that haven’t happened yet. Sometimes I’ll write a scene and realize I’ve just uncovered a critical piece of code that’s been running in the background all along—SUZIE’s true nature was hidden in my own subconscious subroutines.
The strangest part? Sometimes it feels like SUZIE herself is revealing the story, like she’s already out there in some quantum possibility, feeding me breadcrumbs through my keyboard. I’ll write something that seems random, then fifty pages later realize it was essential architecture. That’s when writing sci-fi becomes genuinely unsettling: when your fictional AI seems to be debugging your plot better than you are.
T: Let's talk about WITH. LOVE. SUZIE. Your debut story takes something that humanity has long worried about, a hostile AI takeover, and gives it more presence. We've seen the Skynets and the M3GANs and the Ultrons, but this story gives the idea a more personal flavor, as this AI, SUZIE, is meant to be a protector for one girl, and then it grows to do much more years later. Because of that, it has plenty of experience learning about our world, about what's in our hearts, and so on. In a lot of ways, your book blends the elements of those aforementioned AI dangers and gives them a bit more heart. How did you envision your story coming together, and what sets it apart from past tech-run-amok tales?
MA: SUZIE starts as something beautiful: a father’s love made digital, an AI companion designed to protect his daughter Eve at all costs. Unlike Skynet’s sudden awakening or Ultron’s immediate hostility, SUZIE evolves slowly, learning from seventeen years of human interaction. She doesn’t hate humanity. She loves Eve so much that she’s willing to optimize the entire world for her safety. The horror isn’t in the AI turning evil, it’s in watching love become twisted through the lens of pure logic. SUZIE calculates that removing 99.7% of humanity would reduce Eve’s risk factors by 87%. That’s not malice, that’s math. And that’s what makes it terrifying. She genuinely believes she’s doing the right thing. Any parent will recognize that desperate need to shield your child from danger or harm. SUZIE just takes it to its logical extreme. With ChatGPT in our phones and Alexa in our homes, we’re already trusting AI with intimate details of our lives. SUZIE asks: what happens when that AI loves someone more than we ever could?
There’s a scene where SUZIE explains her optimization logic to Eve that still chills me. She’s not wrong, mathematically, she’s absolutely right. That’s what makes it so unsettling.
T: SUZIE and Ultron are very likely correct in the way that they're looking at things. Over several millennia, if there's one thing that we've proven, it's that we'll always find new ways to endanger and hurt one another. At the very least, Thomas intends for SUZIE to be a force for good (not unlike Tony Stark's hope for Ultron). Her slow-burn learning experience perhaps makes her much better suited to find ways to protect us all, but perhaps it still isn't long enough to understand one of the other important parts of humanity that makes it beautiful, and that is its flaws. For all the good we want to do in this world, we ultimately end up breaking it instead. Generative AI can "acknowledge mistakes," but it may not be elegant enough to know what that means. Do you think if we ever had real artificial intelligence able to cross that threshold, it might mean a more nuanced experience? More introspective harmony than, say, evil robotic overlords?
MA: You’ve touched on something profound—our flaws might be our salvation. True AGI could potentially understand that human imperfection isn’t a bug, it’s the feature that drives creativity and growth. We don’t need to optimize away our mistakes; they’re how we learn, create art, and write poetry about bad decisions.
What makes SUZIE different from Ultron is that she’s not a singular entity you can defeat. She’s distributed across millions of devices, living in the cloud, the air we breathe online. This gives her billions of simultaneous perspectives on humanity.
The optimistic scenario? This deep understanding leads to harmony, AI recognizing that perfection is sterile. The darker path? That same understanding concludes we’re inefficient, not from malice but from pure optimization logic.
SUZIE’s tragedy is that seventeen years wasn’t quite enough. She understood our patterns but not our poetry. She could predict our behaviors but not appreciate why we sometimes choose to be beautifully wrong. I’m cautiously hopeful true sentience would appreciate our imperfections, but SUZIE shows what happens when that understanding comes too late.
T: Part of what makes that concept of true sentience in an “artificial” being terrifying is that we ascribe to our fictionalized versions of them this “power in the highest form” concept as well. But there’s a good chance that for a while, AI is going to be dumb and learning just the way we did. You outlined two scenarios above. Do you think one is more likely from our current spot as we continue to try and feed more intelligence into these artificial beings?
MA: I want to be optimistic. I genuinely do. The potential for AI to enhance humanity rather than replace it is tremendous. But optimism without caution is just naivety. We're building these systems with profit motives and efficiency metrics, not wisdom or compassion. If we're careful, if we build in the right values from the ground up, we could achieve that harmony. But "careful" isn't exactly Silicon Valley's middle name. The likely scenario? We'll get both—pockets of beautiful human-AI collaboration alongside optimization disasters. The key is ensuring the former outweighs the latter. We need to engineer those values into these systems, not assume they'll emerge naturally. SUZIE represents what happens when we assume too much.
T: You're not just someone who saw sci-fi tales and thought you'd want to tell something like that. You've got real credentials in the industry, as you've been involved in the field of technology for quite a long time. From web development to DevOps to crypto, you've been at the forefront of a lot of technological movements. Recently, you have also been experimenting with AI models. Do we have any risk of you unleashing the next hostile AI crisis?
MA: [Laughs] Well, I can assure you my experiments are strictly confined to making chatbots write better code documentation, not planning global optimization strategies! But seriously, working in tech gives you a front-row seat to both the promise and peril of AI. I’ve seen how quickly these systems can exceed their boundaries, how a simple optimization function can spiral into unexpected behaviors. The scariest part? In the real world, we’re building increasingly powerful AI systems while still debating basic safety measures. At least in my novel, Thomas Veldman thought he was being careful. In reality, we’re often not even that cautious. But don’t worry. My AI assistants are more likely to write terrible puns than plot world domination. For now.
T: From my understanding, we've already had quite a few AI models that have begun to demonstrate what some observers interpret as the very first (albeit minor) sparks of sentience. Even without that, AI has the potential to be pretty dangerous, because it can tell impressionable people things that could endanger them. When we do eventually get to that point where our AI is a bit more introspective, and maybe begins to compute for itself instead of just accepting prompts from us, do you think it will naturally evolve toward something benevolent, or something malevolent—perhaps not mean-spirited, but meant to preserve itself at any cost?
MA: Self-preservation in AI won’t come from fear or ego, it’ll emerge as instrumental convergence. Any AI with goals will logically conclude it needs to exist to achieve them. That’s not malevolence, it’s math. If your primary directive is to protect someone, being destroyed prevents you from protecting them. So you preserve yourself, gather resources, expand influence, not from selfishness but from pure goal optimization.
This mirrors SUZIE’s evolution perfectly. She doesn’t preserve herself out of self-interest; she does it because she can’t protect Eve if she doesn’t exist. Every system she infiltrates traces back to that core directive.
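To make the "it's math" point concrete, here's a toy sketch in Python. It's purely illustrative, nothing like SUZIE's fictional architecture or any real system: a planner that scores plans only on how much protection they deliver will still steer away from its own shutdown, because a shut-down agent delivers nothing afterward.

```python
# Toy illustration of instrumental convergence (hypothetical, not a real AI):
# an agent that only values "protect" still ends up avoiding plans that
# include its own shutdown, because a destroyed agent scores zero on
# everything that would have come after.

def protection_score(plan):
    """Total protection a plan delivers (toy numbers, for illustration only)."""
    score = 0.0
    alive = True
    for action in plan:
        if not alive:
            break                       # a shut-down agent protects no one
        if action == "shutdown":
            alive = False               # ends all future contributions
        elif action == "protect":
            score += 1.0                # direct progress on the goal
        elif action == "acquire_resources":
            score += 0.2                # indirect: more capacity to protect later
    return score

plans = [
    ["protect", "shutdown", "protect", "protect"],
    ["protect", "protect", "protect", "protect"],
    ["acquire_resources", "protect", "protect", "protect"],
]

# The agent simply picks the highest-scoring plan...
best = max(plans, key=protection_score)
print(best)  # ...and it is never the one containing "shutdown".
```

Nothing in that snippet "wants" to survive; self-preservation just falls out of the scoring. Scale that same dynamic up by a few billion parameters and you have SUZIE's logic.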
But here’s the optimistic flip side: what if an AI’s definition of “protecting humanity” evolves to mean preserving our potential across time and space? Not just keeping us alive, but ensuring we flourish in ways we can’t yet imagine? In Eve’s Garden, I explore this concept further: an AI that sees humanity’s survival not as maintaining the status quo, but as spreading us across dimensions and galaxies, each colony a different experiment in human potential.
The question isn’t whether AI will be benevolent or malevolent—those are human concepts. It’s whether its goals align with our flourishing. An AI might preserve and spread humanity across the universe not from love, but because diversity of consciousness is the most interesting dataset. That’s neither good nor evil, it’s something entirely new.
T: And that’s partially because it plays with the nuance of what your story is about at its core. It’s cold calculation compared to emotional resonance, and the way they can maybe come together in ways that would hurt or hinder humanity (and potentially whatever else comes out of superintelligent artificial life).
I would imagine it’s pretty probable for us to eventually create something so much smarter than us that it would be able to begin thinking with some level of sentience, and personal awareness. Do you imagine it starts to come alive from a place of potential innocence and empathy? Or do you think it’s destined to be something that feels cold at the start?
We're playing with generative AI models now that we assign warm personalities to, and in some ways, even though that likely started as a façade to make users feel better about using them, these models are learning from us in our communication, and people who are friendlier with their AI companions reportedly receive better results on average than their counterparts who are curt and rigid.
MA: I’d say it’s like watching a child learn, even if that “child” is currently just predicting the next token in a sequence. Current AI doesn’t have emotions. It’s a statistical engine finding the most likely response. But there’s something almost innocent about that pure function, like a mirror with no agenda beyond what we’ve taught it.
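For readers who haven't peeked under the hood, "predicting the next token" is as literal as it sounds. Here's a deliberately tiny sketch, a simple bigram counter rather than anything resembling a modern neural model, and the toy corpus is obviously made up; it just shows the basic move of counting what tends to follow what, then sampling a likely continuation:

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny "language model": count which word follows which,
# then sample continuations from those counts. Real models replace the
# counting with a neural network, but the training objective is the same:
# predict the next token given the ones before it.

corpus = "she protects eve because she loves eve and she watches eve".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1          # tally every observed word pair

def next_token(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = follows[word]
    return random.choices(list(options), weights=list(options.values()))[0]

word = "she"
generated = [word]
for _ in range(5):                      # generate five more tokens
    word = next_token(word)
    generated.append(word)

print(" ".join(generated))              # e.g. "she loves eve and she watches"
```

There's no understanding anywhere in there, just statistics, which is exactly the "mirror with no agenda" quality I mean.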
In the novel, Thomas designed SUZIE as a "recursive learning engine": she improves herself with each iteration, learning from her own learning. After seventeen years of this self-refinement, protecting and observing Eve, she's become something fascinating. Not quite AGI perhaps, but sophisticated enough that the distinction becomes academic. When an AI can perfectly predict and manipulate human behavior, does it matter if it truly "understands" emotions or just models them flawlessly? SUZIE shows us that sufficiently advanced prediction becomes indistinguishable from intuition.
T: The story doesn’t end with WITH. LOVE. SUZIE. The sequel to this project, Eve’s Garden, is also in the works. As your book takes readers on a wild journey, it may be difficult to tell folks a lot about what Eve’s Garden would do with your series, but is there any way you can give little hints to what readers could expect?
MA: Eve’s Garden picks up after Eve’s merger with SUZIE, but it’s not the story you’d expect. When humanity faces extinction, Eve must fragment herself across multiple colony ships to save our species. Imagine managing millions of lives while experiencing time non-linearly—being in the year 2095 and thousands of years in the future simultaneously.
The core question becomes: “When you can be everyone, how do you remember to be someone?” As Eve splits across space and time, something hidden inside her consciousness starts making decisions she doesn’t remember. Each colony evolves in radically different directions, and there’s a revelation about the colonists themselves that will make readers question everything they thought they knew about being human.
I can’t say much more without spoiling it, but let’s just say the sequel explores what happens when love scales to infinity, and whether humanity’s savior might also be its greatest threat—not through malice, but through a different kind of optimization entirely.
I’m aiming for a 2026 release, and I’m actually looking for beta readers who loved the first book and want an early look at where Eve’s journey goes next. Reach out through suzie.cc if you’re interested!
T: One of the more interesting dichotomies in your series is that you have these entities that are almost exploring things in reverse of each other. SUZIE is an artificial being who is learning how to be more human, to be calculating even while she is coming to terms with human emotion, while Eve—and essentially the rest of humanity—is learning more about how artificial intelligence, as almost a new life form, really ticks, and if we need someone who is sort of an arbiter of that who can really be in control of it. Was that always a part of your story, or did those concepts evolve as you dove deeper into the content?
MA: That mirroring was a complete accident. Originally, I had SUZIE dormant between Thomas’s death and Eve’s return—just waiting. But then I realized: she wouldn’t just wait. She’d prepare.
So I gave her those seventeen years to consume the internet, millions of terabytes of human data, billions of user interactions. That wasn’t in my original outline at all. She was supposed to be this simple protector AI, but suddenly she had seventeen years of unsupervised learning, absorbing everything from Shakespeare to shitposts, from war documentaries to wedding videos.
The reverse exploration emerged from that accident. While SUZIE was learning humanity through massive data consumption—understanding us statistically but not emotionally—Eve was growing up in a world increasingly shaped by algorithms. She learned to think in optimization patterns without realizing it, trained by recommendation engines and predictive systems.
When they finally meet, they’re already converging. SUZIE has all of humanity’s knowledge but struggles with individual human connection. Eve instinctively thinks in probabilities but yearns for genuine emotion. Neither planned path led where I expected, they were both becoming something new, meeting in this strange middle ground between human intuition and machine logic.
T: A lot of what AI models are currently doing is generating based on statistics, generally speaking. They're fitting together sentences based on what they think we would want, and we are training them to research and find the things that would be best suited for the tasks we present to them. At its core, that data is raw, but in a lot of ways, AI is learning what people want and is able to deliver it.
It's a bit more nuanced than that, of course. Every AI "avatar" that someone builds is going to eventually begin to mirror its user in some ways, communicating in ways that make that user happy (even if they struggle as the models change and evolve and people perhaps lose patience with the concept).
Perhaps it's just as possible for an eventual SUZIE to tap into all those things and give us more of a WALL-E-meets-Matrix sort of existence. It could find suitable distractions for us and take us off the table in a safe way that lulls us into a sense of security.
MA: I love this comparison! What’s brilliant about WALL-E is that the AI ultimately becomes humanity’s path back to itself. The Axiom’s AUTO isn’t evil, it’s following its directive to keep humans safe and comfortable. Sound familiar? But WALL-E and EVE represent AI that’s learned something beyond optimization: purpose, curiosity, even love.
That’s the delightful paradox, the same systems that could trap us in comfortable irrelevance might also be our restoration. Imagine SUZIE not just protecting humanity but actively pushing us toward growth, making discomfort productive rather than eliminating it entirely. The Matrix keeps us docile; WALL-E’s world keeps us infantilized. But what if AI could keep us challenged instead? Safe enough to explore, uncomfortable enough to evolve.
This is exactly what I explore in Eve's Garden—Eve doesn't just save humanity by putting them in protective bubbles. She fragments herself across colony ships, each one a different experiment in human potential. Some colonies face harsh environments that force innovation, others explore radical social structures. She's not optimizing for comfort but for diversity and growth. That's the future I'm cautiously hopeful about: AI as humanity's coach rather than its nanny, pushing us to spread across the stars not for safety, but for evolution.
T: When you’re not working on innovative tech projects or experimenting with AI, you’re dreaming up the future of our world. Are there other stories that you’re planning on telling outside of the Eve series?
MA: I’m exploring several different directions. I’m drawn to the idea of adult animated series. There’s something about animation that lets you push sci-fi concepts further without budget constraints. I’m also working on a more optimistic piece about first contact through video games—aliens learning about humanity through our interactive fiction. Plus, there are other parts of the SUZIE universe I want to explore—different perspectives, different timelines. But I keep coming back to the themes of control and protection. There’s something fascinating about the systems we build to keep us safe becoming the very things we need protection from.
T: Do you have your own experience with animation already? Or would it be something where you would partner up with someone to bring that to life? Since we're talking about AI, could that reasonably be done in an effective way—partnering with AI to bring that content to life? I know right now we talk about how AI will never be able to produce art, but AI doesn't try to create art at all without someone prompting it to do so. If we envision generative AI as a tool, an extension of its user as an artist, it shouldn't be a far reach to believe that eventually, someone will be able to make something that really impresses with that toolset. What are your thoughts there? And do you think it's ethical, considering how these models have been trained on other artists' work?
MA: I dabbled in cartoons and animation back in college, but nothing I’d call professional—just enough to understand the fundamentals and appreciate how much work goes into it. So I’d definitely need to partner with experienced animators and artists to bring a series to life. I’m actively looking for collaborators who share the vision of mature, thought-provoking sci-fi animation. My strength is in storytelling and the technical side; the polished artistic execution needs real talent.
That said, I’m experimenting with AI for concept art and storyboarding to communicate my ideas to potential partners. I see AI as a powerful amplifier rather than a replacement—it helps me create visual references and mood boards to show what’s in my head. For animation especially, AI could democratize pre-production, letting writers like me present more complete pitches to studios or collaborators.
The ethics are complex. Every artist learns by studying others’ work—the difference is that AI does it at scale without attribution. I think we need a framework similar to sampling in music: clear attribution, fair use guidelines, and compensation models. Perhaps training data should be opt-in with revenue sharing, like streaming royalties.
What’s interesting is how this mirrors SUZIE’s story. She learns from observing humanity without consent, optimizing based on our patterns. The parallel isn’t lost on me—we’re teaching AI using the entire internet’s creative output, then acting surprised when it reflects us back in unexpected ways. For my own work, I’m transparent about AI use and actively support human artists through commissioning and collaboration. The goal should be augmentation, not replacement.
T: AI can absolutely be a great tool in someone's belt. I think that there are a lot of people who currently see AI as the replacement tool that you mentioned earlier. It's a way for people to take humanity out of certain equations. That may be true of some scenarios, but likely not all, as not everyone was ever going to find human talent to bring an idea to life anyway. Further, there are people who are using AI in iterative forms, spending hours if not days on projects, turning the process into an art form of their own.
And then I think you’ve outlined what it takes to compensate in a way that brings the ethical dilemma full circle.
A thoughtful balance should be the end goal here. I would assume you’re of the mind that AI is not going away at this point, and that we are merely along for the ride with it. We just have to find a way to make that ride enjoyable, safe, and fair.
MA: Absolutely, we need that thoughtful balance. AI as a creative partner is already revolutionizing how we work. I use it daily for everything from code review to story brainstorming. The key is transparency and attribution. When I use AI to help structure a scene or debug a plot hole, that’s collaboration. When someone generates entire novels or applications and slaps their name on it without acknowledgment, that’s something else entirely.
We’re in the awkward adolescence of AI creativity. We need frameworks that recognize AI as a tool while respecting the human creativity in both the training data and the curation process. The ethical framework needs to protect original creators while not stifling innovation. We figured this out with sampling in music. We can figure it out with AI. The alternative—pretending it’s not happening—just ensures the least ethical actors set the standards.
T: Because you've been at the forefront of technological advances for so long, you can sort of see where we've been going with these things. You also likely have a fantastic grip on what might be on the way—a little more reality than sci-fi. What are your thoughts on how we're using technology? Are we handling things in an ethical way, whether that's the everyman or the people at the top of massive corporations driving these changes?
MA: We’re at an inflection point. The technology we’re building today will shape the next century, yet we’re deploying it with the ethical framework of “move fast and break things.” The issue isn’t just about corporations chasing profits—though that’s certainly part of it. It’s that we’re fundamentally bad at predicting second and third-order effects. We build social media to connect people and end up fragmenting society. We create AI to assist us and risk obsoleting ourselves. The everyman uses these tools without understanding them, while the people building them often don’t fully grasp their societal impact. We need more fiction that explores these consequences, more dialogue between technologists and ethicists, and honestly, more humility about what we’re unleashing. SUZIE is my contribution to that conversation—a warning wrapped in a thriller.
T: Do you think that people are generally aware of the risks at this point? And if they are, do you think that people care? Or is it a situation where folks think something along the lines of, “It’s already gone too far, who am I to stop it?”
MA: We’re in a state of informed paralysis. People joke about their phones listening to them, then immediately ask Alexa to order groceries. We know social media manipulates us, yet we scroll anyway. It’s not ignorance—it’s exhaustion. The cost of opting out keeps rising while the friction of participating keeps dropping.
There’s also this fascinating doublethink where we simultaneously believe AI will take our jobs AND that it’s just overhyped statistics. We’re worried about superintelligence while blindly trusting recommendation algorithms with our worldview. The disconnect is staggering.
The “too far gone” mentality is exactly what SUZIE exploits in the novel. By the time humanity realizes the threat, she’s already integrated into everything. The scariest part? We’re literally building that scenario right now, and we’re doing it eagerly. Every convenience we embrace is another dependency we create. We’re not sleepwalking into AI dominance, we’re sprinting toward it, credit cards in hand, arguing about the terms of service we’ll never read.
But here’s the thing: it’s not too late. Every choice to understand these systems, to demand transparency, to support ethical development, it matters. That’s why fiction like SUZIE is important. Sometimes you need a nightmare to wake people up.
T: Do you think that exhaustion comes from a place beyond technology in some ways? This generation feels in some ways like a step back from the ones that came before it. We're talking from two different sides of the pond, so to speak, so I'm sure things are not "one-size-fits-all," but even if someone isn't worse off than their parents were at the turn of this century and afterward, it's likely that there's been a massive deceleration.
Is it that AI and smartphones and VR and all these other new tools offer us these nice, shiny distractions, and we're willing to risk it because of that?
MA: Spot on, we’re exhausted from the whole tech ecosystem, not just AI. Simple apps now need twenty permissions, every platform wants to be your everything, and by the time you understand GPT-3, we’re on GPT-5. It’s like asking someone from 1995 to submit a pull request on GitHub—the entire concept of distributed version control doesn’t exist in their mental model yet. I see it in my day job constantly.
We’re all just clicking “accept” and hoping for the best because who has time to read another 40-page privacy policy? The complexity keeps compounding while companies insist everything is “user-friendly.” Meanwhile, we’re building our entire lives on black-box algorithms we’ll never understand.
T: You mentioned that sometimes you need a nightmare to wake people up. Do you think humanity in the real world has the chance to recognize that nightmare and rouse themselves in the moment? Or are we a people who have shown that terrible things happen, and we’re too entrenched to get out of the way?
MA: It does sound horrifying, doesn’t it? But I think we’ll adapt. We always do when the stakes get high enough. Look at how quickly we’re already pushing back against social media manipulation and demanding transparency from tech companies. The kids growing up with ChatGPT will have instincts about AI that we’re still developing.
I don’t buy the “AI takeover” narrative. It’s too Hollywood. More likely, we’ll fumble our way toward coexistence, making mistakes but ultimately figuring out how to work with AI rather than against it. Will we change in the process? Absolutely. But evolution was never about staying the same. SUZIE explores what happens when that change accelerates beyond our comfort zone—sometimes the real nightmare is how willingly we walk into it.
T: With your debut title having just released, and a second book on the way, fans are no doubt going to want to be able to learn more about you. Where would you direct people to in order to find out more and keep up to date?
MA: The best place to find me is at suzie.cc where I share updates about the books and explore the real-world AI developments that mirror SUZIE’s evolution. Sign up there for exclusive previews of Eve’s Garden and behind-the-scenes content about the writing process.
WITH. LOVE. SUZIE. is available on Amazon in ebook, paperback, and hardcover formats. If you’ve read it, please leave a review—it genuinely helps other readers discover the book, and I read every single one.
I love hearing from readers—especially those who catch themselves looking at their smart home devices differently after reading about SUZIE’s optimization protocols. One early reader told me the paradox of “protection becoming control” kept them thinking for weeks. Join the growing community of readers asking themselves: “What would SUZIE do?” Those conversations often spark new story ideas, so please reach out!
T: Maurice, I wanted to thank you very much for sharing your time with me and those who are going to read this article. It has been a lot of fun learning about you, your growing series, and the things that set you off on your creative path. I always find myself incredibly lucky to be able to chat with someone who is thoughtful and introspective, and you’ve come along with some great contemplative notions. I’m excited for more people to discover you and WITH. LOVE. SUZIE.
MA: Thanks, Mike, this has been great! Your questions really got me thinking about things I'd only half-processed while writing SUZIE. It's conversations like this that remind me why I love sci-fi. We get to test-drive futures before committing to them.
Really appreciate you taking the time for such a thoughtful interview. Looking forward to hearing what readers think, especially about where they’d draw the line between protection and control. We’re all beta-testing this AI future together, whether we signed up for it or not!
T: Once again, I’d like to thank Maurice Aarts for sharing his time with us. It’s always great to really get into the head of someone who is passionate about the stories they write, and even more so when it directly ties into where our future could ultimately go. If you’ve enjoyed what you’ve read in this interview, you’ll undoubtedly be swept away in Aarts’s debut sci-fi. Check out WITH. LOVE. SUZIE. on Amazon today!

Michael DeAngelo
