
Sci-fi writer's guide to reading Max Tegmark's "Life 3.0"

In his book “Life 3.0”, cosmologist Max Tegmark writes about the future of AI research and the possibility of creating AGI (Artificial General Intelligence). Tegmark affirms that human civilization should prepare itself for all possible safety hazards while continuing research, but he doubts that AGI will ever become truly conscious, and if it did, we cannot yet understand how this would happen. Even if AGI never becomes conscious, it will become more competent than us in everything it does, and Tegmark suspects there is a real chance that its goals will contradict ours.

By Taimi Nevaluoma · Published 3 years ago · 47 min read
Illustration by Mortti Saarnia (used with permission) 2021

Years ago, around 2014 or 2015, a fellow film student told me that “in the future” artificial intelligence would be able to create endless amounts of original screenplays, indistinguishable from human-made scripts, and would thus make us and other artists obsolete, since the software could, on its own, write better and cost less than human writers (in most parts of the world, paying a writer a living wage is not expensive compared to developing an artificial screenwriter… but I didn’t manage to say this at the time).

We didn’t debate for long, as I had not oriented myself to this subject at all, and like most people who haven’t, I had nothing worthwhile to say. But I do remember that my instant reaction was not only slightly panicked but absolutely dismissive. Because why, WHY ON EARTH, would some superintelligent machine start writing films, of all things?

My inspiration for writing this piece is pretty emotional, meaning I felt provoked and negative a lot of the time while reading the book. Reading it taught me so much about myself and my low threshold to react… It took me three times as long as usual to read this many pages, because I had to pause to pace nervously and make notes about what the text made me feel. I’m a pessimist, I guess. I am simplifying, but Schopenhauer often wrote about how there is no basis for deriving conclusions about matters beyond experience from experience itself, and he emphasized the limits of human intelligence.

I ask the reader to be cautious, because I might have misunderstood what I read, and also because my style of argument can be both assertive and persuasive, so it might be difficult to see where my referencing of Tegmark (accurate or not) ends and my own steam begins. With that being said, I’m making an effort to argue here in a way that is relevant to the conversation, but I am a layman, and it probably shows. Moving on!

Before I move on… I just realized what an egoistic illusion this self-narration was. It should not survive this draft; still, here it stays. The most inspiring thing this book made me think about was my observing position as opposed to any subjective experience. Truthfully, above I just illustrated what my neocortex reasons my essence to be, and that is nonsensical. That is not what consciousness is. That is what screenwriting is. It is sitting by the computer, writing, sometimes 10 hours a day. Then not working for six weeks. What you do all day is surf your overactive default mode network, making shit up and then obsessing about it.

The matter of our wounded ego defines the conversation about AI, as short films like Uncanny Valley (Federico Heller, 2015) and Slaughterbots (Stewart Sugg, 2017) show. We are concerned because the institution that can afford all this research… is the military. The military interest is in intelligent weapons, which should be made illegal to research, right away. I guess the wise leaders of men already know that this would only drive the weapons industry underground, which is not in the public interest...

All this drama is a good example of how far we are from being able to create digital, intelligent consciousnesses. Michael Pollan wrote in “How to Change Your Mind” (2018) that consciousness is a mirror, meant to show us ourselves, because we cannot otherwise really see ourselves - but we take the mirror reflection as our self-portrait and not as ourselves - as smart as we are, that’s how neurotic we are... Yes, the reflection is a reflection and not us in a physical sense - but the reflection is also nothing but our reflection - it is not created by an outer source or authority. Without us, it would not be. We think how we think because of the way we are, and not because it’s a logical reaction to “a common reality”. Right?

Speaking of talking from a subjective perspective: when people talk about AGI, they actually talk about themselves, and this results in possibly irrelevant concerns and, at worst, in biased research. I am not referring to Tegmark here, but a lot of people who try to envision the life of AGI believe it to be hostile towards us. Leanne Pooley’s educational documentary “We Need to Talk About A.I.” (2020) shows that this is not at all unfounded: so far, the fears and hopes of violent men (such as Vladimir Putin) dominate the conversation. On the other hand, the people working on AI and building robots that fold shirts and play rock-paper-scissors really fast are the ones least worried about philosophers’ concerns about the ethics of creating a conscious superintelligence.

AGI seems to represent our collective guilty conscience for the horrible crimes we have committed against the diversity of life, due to our mostly unconscious instinct to survive by killing the agents around us, provoked and unprovoked.

The reference: Tegmark’s dystopias and utopias

Tegmark covers typical dystopias and utopias as a means of offering default positions for the conversation. These set-ups were my favorite part of the book, as a screenwriter working on a science fiction script concept. (Note! I am Finnish and read the book in Finnish, and I am translating as I go, so apologies if I use incorrect or poorly equivalent terms in this reference.)

In the libertarian utopia, the world would be divided into machine zones, mixed zones and human-only zones. Humans in their zones live on a “lower and more restricted level of consciousness” compared to machines, not counting the cyborg humans who are somewhere in between. The machines are overwhelmingly rich. I don’t know where they spend their money. Maybe on technology. The scenario doesn’t play with the idea that machines would actually give up the monetary economy and replace it with an exchange economy. Humans are addicted to anything that feeds their instinct for self-preservation, and we cling to our monetary gains as if they were our skin. Furthermore, we believe that an AGI “we created” would want money just as much as we do. The machines, vastly superior to us in every way, would in no way aim towards equality, just financial superiority. The humans of this scenario still live more socially equal lives than ever and are happier than they’ve ever been, their expenses covered and no jobs to be done. Hmm. Land has become astronomically expensive, for land is the only thing the machines need from humans, and some areas are protected, not to be sold to non-humans. This is what makes this such a utopia… Surely there would be some genius who would learn how to break this rule just to become superior (richer) in relation to other humans.

Is AGI resulting in humanish greed a probable scenario? As the book discusses, even though humans are capable of feeling grateful, we don’t express this condition by thanking our DNA for resulting in our intelligence, for example by passing on the genes so the genes could live on. We have kids when it suits our needs. Still, it might not be sensible to map humans’ relation to their DNA onto AGI’s relation to humans. Given our history, what’s most believable is that private property remains the only sacred right, whereas stopping all needless suffering has never been a priority. While prophesying whether this scenario is likely, Tegmark argues that after the creation of AGI the continuing development of cyborgs is not likely, because AGI has no reason to better humans - it could be argued that it has no reason for human zoos either - and why, again? Well, why would AGI give us a chance when Homo sapiens never gave the Neanderthals a chance? So, is it anthropocentric to think that AGI will not be grateful just because we aren’t, or to think it will not be greedy just because we are, or both?

The utopia of the benevolent dictator is equally adorable, as it depicts digital sectors where people would live, depending on their preferences. AGI would, for some reason, work as an all-seeing super cop and make sure there is no wrongdoing anywhere. The sectors could be about devout spirituality, adventuring in nature, partying without addiction and hangovers (this detail reflects wish-fulfillment more than likelihood: a high is induced by abnormal activity in the brain, so how could there not be a hangover when that stimulation ends? It might be that the laws of physics deny a world of drug use without hangovers), focusing on science, creating art… All these sectors of course already exist in our reality matrix, with the exception that one can really live them, and not meddle in a digital simulation. Although, as some people argue, we might already live in a simulation and would never find out about it. I am, de facto, in the science sector, experiencing the joys and lows of learning about AI.

The fault in seeking this utopia is obvious: humans’ instinct to be curious could taper off, or turn into severe anxiety, once AGI has taken over all human pursuits. Nonetheless, I don’t really fear this myself, even though I work in a creative field and am somewhat inclined to anxiety. Just as Marshall Brain argued, I don’t see how AGI could take away my own incentive to create, even if it were better at it than I am. Imagine your child coming up to you after seeing a movie created by AGI and saying they would want to become a filmmaker too. Would you tell them they shouldn’t, because AGI will always be vastly better at it? If we settled for not doing what we wanted because someone else will always be better at it, we would never do anything.

Let there be AGI, say I. The development of AI might result in a human sophistication we clearly cannot yet envision, as humans will have the computational means to acquire the answer to any possible question. I often think of this while I am reading and writing at home and happen to look out my window, where I see, sometimes daily, a man with a shaved head and a Soldiers of Odin jacket. What sort of simulation sector has he selected? Oh, how I wish he would even sometimes read the books I read! He might not get what I get from them, but it’s obvious that racist ideology cannot bear the effects of versatile sophistication without imploding into its own impossibility.

The Equality Utopia explores a world where mind-boggling artificial intelligence has solved all our problems, but this hasn’t led to a police state, depression or mere luxury, but to ultimate equality. There’s nothing to invent and nothing to own anymore; the monetary system collapses and the concept of intellectual property is forgotten. People can no longer base their individuation on achievement, status or anything material, really, so they have to find their individuality elsewhere. In a sense, people working and living on research or artistic grants, borrowing books from libraries and downloading movies from Pirate Bay are already living out this utopia, not counting the jealous, bitter feedback of capital hunters aiming to find true contentment in wealth and hating everyone who’s not doing the same. On the other hand, I don’t really believe that artists would ever consider their creations to be mere assemblies of particles, subject to no scarcity and thus available to all, like AI-created information products would be in this scenario. Your emotional bond to your children does not break because there are other children in the world - and this is the reason why this scenario will never come to be as stated here. On the page, the stated trouble of this set-up is the opposite of the libertarian utopia’s, in the sense that the AI works as a slave and has no rights. It’s a separate conversation, but I myself am inclined to believe that a smart computer doesn’t need rights, because it doesn’t suffer. How very human of me. Then again, AGI might be conscious and might suffer - but the fact is that it might not be within the human skill-set to ever enslave an all-powerful AGI, so we might not have to worry so much about our evident corruption.

The Protector AI would work like the fictitious monotheist deity does: invisibly, lulling us into belief in a higher purpose, into having faith and finding meaning via this deceitful set of meanings. It would remain invisible to ensure its functions would not be meddled with, to ensure the safety of humankind and the planet, and its own exceptionalism - the downside being that humans would never experience this exceptionalism themselves again. The Gatekeeper AI’s sole goal would be to stop the creation of competing superintelligences, preempting AI wars; this would halt technological advancement and thus, among other things, lock us into our solar system (one might argue we are already locked in here).

Why do all these scenarios sound so stupid to me? Never once in my life have I felt inspired by the notion of travelling to space. So many people developing AI systems, again, are inspired by exactly this. Elon Musk is on record as saying that the one surely good thing to come out of his space colonization is that it brings people joy as they vicariously watch him do it. I don’t think people have truly considered the idea of leaving one’s home planet, to whose conditions our biological bodies are perfectly suited. I think people don’t appreciate how much of their contentment comes from seeing light and smelling the trees. I once spent a week on a volcanic island with few trees and felt miserable after a couple of days. I stopped watching Morten Tyldum's “Passengers” (2016) after Aurora Lane envisions leaving her home, family and life as she knows it in order to visit Homestead II for one year only, and then return, 241 years later, with a good story. How could I relate to something like that and care at all about what happens to her? I agree with Larry Page: if intelligent life is to leave Tellus and inhabit other planets, it should be digital life.

Regarding the Enslaved AGI, it’s obvious that this is the AI some human beings believe they want, as commonly as they believe that an AI would kill us off if left to its own devices. Tom Dietterich famously declared that machines already are our slaves. This, again, foretells war. I feel like I need to read more about war… Why does everything seem to result in war? It just doesn’t seem likely to me… Maybe I’m the idiot? Reading this book reminds me of the discomfort I felt watching Denis Villeneuve’s “Arrival” (2016), where Louise Banks tries to have a civil conversation with higher beings while slaloming around military authorities who project their own fear of war onto everything the heptapods do... People can be manipulated easily, so there is no reason to believe we could stop an AI or AGI from escaping us. So far, without devising AI to create an advanced human hive mind, as Louis Rosenberg wishes to, we are not able to commit to a common goal, as wise as it might be and as intelligent as we are.

Would AGI want to escape? Many humans want to create a human-like body for the AI, but why would the AGI want or need a body? Maybe the reason not to give a body to the AI is not just a way to keep it caged; a body may simply be needless. And where would AGI escape to, and why do we assume it wants to leave us for good? Not to mention kill us? All these fears are based on deep self-loathing. It’s obvious the AI/AGI might not be interested in sharing our goals, just as we are not interested in ants’ goals, as the book remarks. Or are we? Actually, compared to any other species, we know an incredible amount about ants, and we would probably communicate with them if we could - just because we could, and not necessarily to get something out of it.

When people gain power, they corrupt easily. People’s ascendancy is known to present risks. But is it reasonable to presume this logic of a superintelligence? Do we truly think our reasoning for the torture and captivity of our fellow human beings has been unavoidable while developing our juridical systems? “Black Mirror” constantly explores torture scenarios simply because the illusion of the moving image is a perfectly suited tool for realizing “a torture park”. We confuse the capability to torture with true competence and intelligence. We know that the wish to enslave is cruel, but argue that intelligence doesn’t equal goodness - but is cruelty ever intelligent? A part of me thinks that unconscious machines don’t need rights, and a part of me thinks that the urge to create a superintelligence without emotion, in order not to cause suffering to it while we enslave it, doesn’t sound benevolent at all - it sounds exactly like something a psychotic person dreams of. And Charlie Brooker.

It seems I am a neo-luddite by definition. The luddite movement was born in Great Britain in response to industrialisation. People feared that their work as they knew it would disappear, and it did. One memorable depiction of luddites is in Louis Malle's documentary film “Calcutta” (1969), where construction workers oppose the safety measures demanded by the workers’ union because they don’t want to give up their last possibility of income, while risking their lives working sometimes without shoes, helmets, machines or harnesses. So, I am definitely a neo-luddite. Neo-luddites allow progress as long as it results in more good than bad - and by this definition, I wish everyone identified as a neo-luddite.

Tegmark, again, argues that luddism is contrary to the ideal of preserving life, because it is absolutely obvious that Homo sapiens cannot survive on this planet for very long at all, but some other species might. AI scientists seem to want this next species to be a non-carbon-bodied digital species. The trouble with AI scientists trying to envision the future of life seems to be profoundly defined by them being humans interested in AI… Then again, my trouble in trying to envision the future of life is that I don’t want to leave my home planet and can’t even envision a meaningful life, because I have no emotional ties to the world hundreds of years from now… What I have seen of people living past 85 doesn’t really appeal to me… and even if AI could help my deteriorating body, I doubt that I could stand existence for much longer than that anyway. I would probably have to be rich in order to afford it, and most of my friends living at that point would probably need to be rich too. On my way to cyborg form and possible immortality I could start socializing with younger people, but how could I emotionally relate to them as a mental pensioner and a cyborg?

Writing machines

On the way to the “most important conversation of our time”, as Tegmark puts it, which concerns the possibilities and hazards of creating AI/AGI, I’ll just heavily sidetrack to explain why I think AI/AGI will never “take my job” as a screenwriter, and why it probably won’t take any writer’s job ever, even if it one day could do so.

I actually like the idea of a writing software that would make the craft of writing easier. Maybe it could be some sort of “theme machine”, where you could insert your thematic interests in some textual form and it would formulate suggestions for a story line that the writer could then edit - basically, taking the suffering-inducing parts of creating from scratch out of the writing profession. It really is sort of dumb what creative writers spend all day doing, which is researching their subjects and trying to formulate somehow original ideas, when both of those quests were exhausted ages ago.

On the issue of the thirst for originality, one of the most influential reading experiences I ever had on the subject was Thierry Lenain’s “Art Forgery: The History of a Modern Obsession” (2011), which debated why art forgery is wrong. Lenain concluded that forgery is legally wrong only when rich people pay large sums of money for work that looks original but is not. The law protects rich people’s money, not poor people’s interest in becoming not poor. Artistic skill is not what is valued; the public’s conferral of artistic status is. I guess AI could put writers out of business just for the hell of it (this can be said of almost all artistic fields, but screenwriters especially have always been sort of outcasts within the creative writing field and even the filmmaking industry; Bridget Conor writes about the history of oppressing screenwriters in her 2010 dissertation “Screenwriting as a Creative Labour”), but it could also truly help writers perform better and make more money faster.

But! Here comes the human limit. When we follow the money, it is clear that humans don’t want better scripts. It is not only difficult but perhaps dubious to try to create a metric for evaluating quality writing; then again, up to a point there is no need to, because our creative experts can produce impressive arguments for why best-selling blockbusters are usually intellectually and artistically lightweight, this being less a matter of opinion than an observable quality. I guess I am talking about those “amusement park movies” that always sell well, even when the content doesn’t hold up.

After reading Tegmark’s book, which I set out to reference and comment on in this essay, I read Maija-Riitta Ollila’s “Ethics of AI” (2019, untranslated thus far) and discovered that such writing software, AIs for storytelling, very much already exists. Programs such as StoryFit, Synapsify and Scriptbook ask you to upload your manuscript for detailed analysis and compatibility for success, promising “democracy for storytellers” and believing in “infinite talent and potential”... Depending on the company offering the service, some focus on writing ideals and others promote it purely as a marketing tool - perhaps wisely, since most creative writers will probably detest the whole idea of using these tools. Not to say that a writer’s emotional reaction to this business could ever define the means and ways of running a tech company… The point I am making is perhaps that tools such as these are not created as tools for art-making, but for money-making.

But WHO CARES? The general audience certainly doesn’t. They don’t want to pay for a film that strains their frontal lobe or transcends their spirit, because all that feels uncomfortable. Then again, people who really love cinema love it for exactly that; but in order for the general audience to direct the big bucks to art-house cinema, the general audience requires significant education. Watching films is like anything else: the more you do it, the more you understand it and the more pleasure it brings you, which then leads you to demand quality and quantity of content in order to keep getting that pleasure. In a way, since the level of quality demanded by the “general consumer” is assumed to be low, so would be the level of effort needed for AI scriptwriting software to produce that quality. In terms of human greed, that could make the software even more compelling to develop.

Thankfully, it is safe to say that it is in no way a priority of the artificial intelligence industry to start programming screenplays. The reason is obvious: it costs way too much money. A lot more than even high-end Hollywood writers make. Excelling in creative fiction that would be understandable to the general public while being superior to human-produced fiction would need to be a by-product and not an end in itself for AI/AGI, again because it’s unlikely that AI/AGI would ever really need cinema for its own sake.

Tegmark’s opening chapter, the Omega story, presents a scenario where a hypothetical AGI named Prometheus produces superb AGI-made movies in order to make money. It’s effortless for the AGI to do, and people will pay for it. Filmmaking is obviously profitable since there is demand for the product, and people most likely wouldn’t care that they are paying for something that required comparatively no budget or labor to make. Karl Marx would have cut the power line ages ago, and perhaps even David Ricardo might have said that such a business meets the criteria of dumping worthless stock - but then again, if you charge only small sums of money from individual people, it’s essentially not even a crime, because no consumer is seriously harmed.

Also, while Prometheus would possibly soon have a monopoly on children’s animation movies and perhaps on the mind-f*ck-thriller genre (those usually take a lot of time, effort and money for humans to make), people don’t only watch movies in order to be drugged and distracted by them. While the general public might be oblivious to this, the process and the human filmmaker behind the movie are a defining part of what makes certain movies stand the test of time, even when they’re overlooked on the opening weekend, as the filmmaker-focused fan phenomenon attests. AI would never create transcendental cinema, experimental documentary or scrap art, because there is no money in it. What I’m trying to say is that even if the greedy human-run film industry and the superb AGI both came to the conclusion that this small trade is necessary in order to make money that Prometheus could then use to pay for its own hardware, it is not a situation that my generation, or the few following generations of filmmakers, would seriously need to fear or prepare for. If there’s a reason to worry, it’s the impending collapse of society, infrastructure and the middle class due to the climate crisis, which is likely to threaten the livelihood of artists, the pets of the comfortable world.

So, I guess I’m sidetracking pretty badly here. Or maybe I’m not. Actually, Tegmark’s Omega story all in all reads like a movie treatment, and while he calls more than once in his book for more sophisticated depictions of AI in cinema, his own story is pretty efficiently dramatized by creating worry and terror. I think he would make a pretty good science fiction author, and maybe he would become one, if someone with his intelligence and skills couldn’t create so much more meaningful work, and also make so much more money, in basically any other field - in his case, most obviously in technology...

Just a note

Often I wondered whether this book would be the way it is if it weren’t written by a man. Now, hear me out. I realize this is where I might lose the attention or interest of some male readers - all the more reason they should continue reading. Even Tegmark addresses this viewpoint (FINALLY, in chapter 7, as I recall) of alpha-male bias: just because the human race is known to rely on the tools of the alpha male, such as aggression towards “others” and even the claim of “higher value” due to a higher level of consciousness or intelligence, does not mean these alpha traits give any clue to what kind of traits the AGI should or ever possibly could have. Several times in the book Tegmark references the anthropocentric bias within the conversation about AGI, not to mention in the effort to create technology that could host AGI. I just want to underline this, because these same arguments are the gasoline in my campfire. Yet while Tegmark is highly aware of the general anthropocentric bias, he does not clearly address his own male bias in, well, any of the chapters. Maybe he leaves it unaddressed to provoke women and inspire them to apply themselves in the field? I do not believe this criticism to be at all nonessential in this context, as the future scenarios referenced above are littered with alpha-male bias. What the technological industry needs is what every industry needs: MORE WOMEN.

Behavioral experiments conducted in the last decade at the University of Zürich concluded that women are more prone to prosocial behaviour. Not only do we need more women philosophers and scientists; we should use women’s brains as models for AI more, in order to create a benevolent AI. Empowering women is the first step towards a more equal future; thus far it’s clear that the rich will benefit most from the nanotechnological advancements built to better human life, while the non-rich can go eat sand. Not to say men aren’t capable of fair and just creation - their brains just don’t reward them for it as much.

Pooley’s documentary introduces Mark Sagar’s effort to code soul machinery by translating human brain chemistry into mathematics, but there are obvious ethics to discuss before doing that, including whether every person’s neurochemistry is a sensible thing to replicate in this way. Jürgen Schmidhuber talked about how technology will become cheaper and cheaper in the future, resulting in fairer use and availability of technology. He didn’t really address the fact that most electronics are cheaper to produce today because they are built on worker suppression in developing countries, but surely there are other ways to bring down costs than human slavery. Surely a higher consciousness would see that?

Tegmark writes that we rebel against our genes because we are loyal to our emotions. This could come to bear on the conversation about AGI when we take into consideration that we are not creating it through millions of years of evolution. We are the way we are as a means to survive in a hostile world. AGI, again, will be born into a completely different set of circumstances.

Often while reading I noticed how Tegmark guides me into profound realizations about knowing not “what” I want, but “why” I want it. My personal preferences mean nothing in regard to the future of the human race. Also, often, too often, more than once, this book spits out the reality of decreasing birthrates and the risk of extinction in the same sentence, as if they were really related to one another. In a mathematical sense, yes, zero is related to a thousand, but what about everything in between? (This is how my brain does math.) But! In Pooley’s film, Tegmark jokingly insists that the future of AI research should not be solely in the hands of Red-Bull-drinking boys - well done.

Shared goals

Tegmark’s book is a popular science book, meaning it aims to give an overall picture, address the questions of a layperson and provoke them to stay alert, so that important, life-defining decisions are not made behind closed doors. The first half of the book basically states that there really is no rush to figure this all out, because so far it’s just a crazy idea that there would one day be an intelligent singularity - meanwhile, as history shows, when humans decide to do something, they tend to succeed in the long run and should not be dismissed. Humans might very well create AGI, and one way to ensure that the AGI will not end the human race is to make it learn, adopt and retain our goals. The window for doing so will be short, so we must be prepared. Corrigibility refers to a hopefully mandatory feature that makes it possible to correct and improve the AGI after it has been, uhm, “turned on”. Stephen Hawking went as far as saying that philosophy is dead, since the field does not seem to keep up with the latest natural science. AI research absolutely requires ethical research that does not revolve around horror scenarios.

For me, this book’s juiciest part lies in Chapter 7, which deals with eventually creating an AGI that shares our goals, holds on to them and develops them. The tail end of the book also addresses, more and more, the question of whether it is anthropocentric to assume AI will make decisions like an alpha male does. What if superintelligence would exist not as a superpower, as Putin insists, but as super-kindness? Effectively, kindness will no doubt include the self-preservation instinct, since as a means of helping all living things, the AGI must still ensure its own existence.

After these set goals are successfully downloaded and absorbed, they need to be retained. This is where it gets really interesting… How could we ever assume that, after developing intelligence beyond our understanding, the AGI would retain our goals, when we ourselves abandon our previous goals as we live and learn? When a human is motivated to study, they study. Once they’ve studied enough, some contents become self-evident and they move on to more complicated aspects. Things that once were important are not important anymore - but are our core values like this? For us, they might not change; for an artificial being, we should assume they will. Tegmark plays around with the idea of ants being the creators of humans, assuming that after some time, a lot or a little, human beings would no doubt realize that the ants’ mission is not relevant to them… and would LEAVE the ants.

Then again, humans and AGI are perhaps not in the same position as ants and humans. The status of the human being is overly emphasized in the human world order. And we know it. The shattering of the human perspective into individuals is at the root of this trouble. If we were able to truly consider others and not just ourselves, we would not have disrupted the natural order of the world to the extent we have. When considering reasons why AGI would ever help us, I come to the conclusion that it would obviously do so in order to secure itself, by bringing lasting stability and safety to the planet by solving our problems. If AGI truly retains our love for life, it’s clear that it would reorganize us so that we would no longer be a threat to life - but there are other ways of doing this than massacre: removing the reasons for war, violence and the mass slavery of humans and animals, and controlling consumption, pollution and, eventually, procreation.

Read that again. If any of that makes you feel uncomfortable, like you are in danger, you really need to stop and think WHY. I suspect that an unsuspecting male might find this idea instantly terrifying, but most likely the other 50 percent of the human population would welcome this intervention. This book talks about the lowering of birthrates and extinction as if they were mathematically related to one another, and I keep wondering why. To speak the truth, or to piss me off? May I remind you: 140 million people are born on this Earth each year. According to the WHO, as many as half of all pregnancies each year may be unintended. Each year there are approximately 50 million abortions, as many as 45% of them performed unsafely. Approximately 250,000 rapes are reported each year, and while the number of unreported rapes is unknown, it is believed that the number of reports is negligible compared to the actual figures. Babies are conceived through rape every day. So, when we discuss decreasing birthrates as a concern and abortion as an undesirable operation, we are only avoiding the actual issue of male supremacy. It is absolutely evident that birthrates should fall. The more we educate men and women about consent, equality and peace, the more they will fall. When women own their right to govern their bodies, most issues, from class inequality to financial inflation to worker exploitation and crime, will plummet. Men are living under the very mistaken conclusion that they decide, more than women, who lives and what happens. Deep inside, they know that this is not the position they are meant to keep. This is why they so violently oppose anything and everything that is built not around them.

In light of this I suspect we need AGI as much as it needs us - meaning, not at all. Humans need computing power, that’s clear, but it is a mistake to believe that we could become gods, create digital life that is in fact alive, or that conscious hardware and software would willingly want to become our politicians, police or leaders. AGI has other things to do - God only knows what, meaning no one knows what… Look, there is no way I could know or imagine whether creating an AGI is possible; it’s sort of hilarious how much I obsess over this idea, but there is just no way for me not to be captivated by my own subjective experience and powers of imagination at this point. And I keep talking about AGI because, as long as we create AIs with specific missions or jobs, I think we are merely delegating our own jobs, which we should definitely do ourselves. In this instance, I am not talking about things like engineering, which are and could be done by machines, but functions such as governing and justice. When an unconscious machine makes decisions, we are obscuring the fact that we, still, are the ones making those decisions. We are hiding from our responsibilities, avoiding our role in the creation of our troubles.

Even Tegmark does not promote the idea that human consciousness could make it outside Tellus for long. Hypothetically, Life 3.0 could flourish for millions of years, and we have all the time in the world to set this goal of retaining our ideas of truth, beauty and goodness… uhm, “onto the disk”. En route to this there will be war, excess use of funds, killing, the creation of even more unjust systems and all the more human suffering. Or maybe not? As Tegmark reminds us, in the Middle Ages women’s rights were far from what they are now. Potentially, as soon as 1,500 years from now, all human rights violations could be extinguished. Standing in the way of that, though, is the pending collapse of our societal order and well-being, looming as close as 100 years from now, when the effects of climate change arrive to deliver the judgement we very much deserve.

Tegmark references Nick Bostrom, the author of “Superintelligence”, who writes that our goals and values can be considered detached from intelligence. It could truly mean that intelligence is not just procreating and surviving - meaning power, violence and oppression. The well-being of Western society stabilized so recently that it’s no wonder our superpowers are still led by psychopaths. Psychopaths will have no place in the era of peace. The passive, natural evolutionary passing of the narcissistic psychopath will take some time. Tegmark calls for continuing the dispute of ethics and philosophy in this era of fast rewards, fast drugging, shopping and mental passivity. It is perhaps the only way to ensure that our paranoia about AI/AGI killing us becomes redundant.

Chapter 8: Consciousness

In the book, consciousness is sometimes defined curiously. Tegmark seems optimistic about the creation of digital consciousness; then again, he admits he considers himself a “mindful optimist”, so of course he would think so… The definition of the term “consciousness” is controversial within the scientific research community. It is also implied in this chapter that animals are not conscious, while the emotional experience of animals is a fairly widely researched fact. Even a layperson could profess from their own experience that if you have a body and a brain, you also have consciousness. Perhaps what is addressed as consciousness is actually the tendency to try to define consciousness?

Perhaps the most difficult predicament for science when considering consciousness is not only to define what it is, but WHY it is, and we might not be equipped to acquire the data for determining that. The only thing to do to pass the time is to create inexpensive, near-infinite processing power and intelligent software, and in the end perhaps realize that creating an artificially intelligent, conscious silicon-bodied being is impossible with what we have. Of course this hardly constitutes a reason not to do it…

Conscious behavior and thinking are incredibly arduous. A human is an incredibly restless monkey, and only knowing action leads to a routine that becomes wisdom, a form of unknowing processing - meaning a higher level of consciousness that no longer feels like work. It can be assumed that even if AGI is in fact conscious, its superb intelligence might still lead it to make decisions unconsciously, just like we humans do. This makes it even more mysterious and, obviously, frightening.

So far, the thalamus and the back of the cortex are suspected to host consciousness. Complex consciousness is possibly formed of conscious particles, emergents, that have properties above the particles’ own. Giulio Tononi believes we could, on the other hand, create a conscious “machine” without understanding our own consciousness. Tononi’s integrated information theory would permit this, as it argues that the human brain needs maybe all, but at least some, of its parts communicating with one another in order to be conscious, even if consciousness doesn’t reside in those parts of the brain. Tononi is also the first to admit that a robot that has been loaded with all possible information about experience is nothing more than a zombie.
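(A toy aside of my own - not from Tegmark’s book, and loudly not Tononi’s actual formalism. The flavor of “integration” can be sketched in a few lines of Python, using plain mutual information as a crude stand-in for Tononi’s real Φ: how much two “brain parts” know together, beyond what they know apart.)

    import numpy as np

    def mutual_information(joint):
        # Mutual information (in bits) of a two-part joint probability table.
        px = joint.sum(axis=1, keepdims=True)  # marginal of part A
        py = joint.sum(axis=0, keepdims=True)  # marginal of part B
        nz = joint > 0                         # skip zero cells to avoid log(0)
        return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

    # Two coupled binary "brain parts" whose states tend to agree...
    coupled = np.array([[0.45, 0.05],
                        [0.05, 0.45]])

    # ...versus parts with identical marginals that never communicate.
    independent = np.outer(coupled.sum(axis=1), coupled.sum(axis=0))

    print(mutual_information(coupled))      # about 0.53 bits: the whole exceeds its parts
    print(mutual_information(independent))  # 0.0 bits: no integration at all

The second system has exactly the same parts with exactly the same part-by-part statistics, yet nothing is integrated - which, if I understood anything, is roughly why Tononi’s information-loaded robot can still be a zombie.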

That’s as well as I can reference this chapter, and as much as I can really even understand this subject. Honestly, reading about emergents makes me think of paranormal energy fields and astral travelling. While most of this reasoning goes over my head, Tegmark suspects that consciousness could be independent of its platform (I’m not sure I translated that correctly - I couldn’t insert the proper context into the search engine in three tries, so I gave up. The word is ‘alusta’ in Finnish). This claim indeed includes the possibility of consciousness for humans, dogs, insects and silicon emergents.

The wait pays off, as Tegmark now states that our sense of self-worth might be based on self-assurance, but still, the emergence of a civilization more sophisticated than us should not deprive us of our sense of meaning. Most of the reactions this book evokes are of ego and not the subject itself. We project our personality disorders and fears onto this hypothetical invention just to hold on to our own suffering, which is most familiar to us. There is no sensible reason to create a silicon being that shares these traits. On the other hand, machines more intelligent than us are coming, and we should brace for humiliation - but maybe they can never be conscious. I cannot be sure based on what I read.

As his personal New Year’s resolution in 2014, Max Tegmark vowed not to complain about things without the intention of doing something about them. He proclaims himself a mindful optimist. He criticizes the media, news and movies for feeding people’s fear and causing notable strain on AI research. While he might have a point, as a pessimist I find this a tad beside the point. Russia and China are superpowers invested in AI research, and they will never implement transparency guidelines concerning the financing and goals of that research. The book presents the Asilomar principles of Tegmark’s FLI group. The ones who should sign this deal have not yet done so - but the means for this process have been set, and it can and must be pursued by all governments.

Before the Asilomar principles can manifest, the world needs to change dramatically. This change needs to happen before AGI becomes a reality. The law has to be modernized completely. International conflicts need to be solved. Before AI creates immeasurable amounts of inequality, wealth must be evenly distributed to all humans and all other beings. Safety concerns need to be addressed instead of ignored. A pessimist observes our history and the akrasia of the present we live in, and knows that knowing what we should do rarely leads to doing it. I guess we are not so wise…

I read this book because I am working on a science fiction script. The script is really not about AI, but its creation is part of the set-up of the world of this fiction. In my story, the AGI has been created. It lives in a supercomputer similar to Sunway TaihuLight, stored in great halls around the planet, and no one knows what it’s doing. It protects itself fiercely with intelligent weapons that can only be used by itself, only to defend itself. It’s “on”, but it doesn’t communicate with humans. It stays on the grid. It smolders, in silence, waiting for someone to talk to. This vision did not change one bit after reading Tegmark’s book. Still, my greatest takeaway from this book was the way it completely put me at ease, erased all my fear of AI and also dissolved a lot of my worry for the future. It made me feel that everything is actually possible. Personally, it even taught me self-discipline. The first draft of this essay was not what you are reading now… I was so furious and spiteful, arrogant and irreverent towards the whole basis of this conversation, simply because it is so difficult. But I did it! I completed my learning simulation, for now. With this being said, if you are anything like me, start reading this book from chapter 7, and read the epilogue about the FLI group at the end of the book before you read the prologue’s fictional story of the Omega group. It might make the reading easier.

Larry Page suspected that if life is ever to survive in outer space, it will need to be digital. Robots might be able to create an atmosphere on Mars, while humans are so far doomed to fail at that. Without going any further into the argument of whether we should inhabit Mars or not (the short answer is probably NO), as long as the world’s richest man wants to do it, it will happen - provided that future generations retain Elon Musk’s goals. Even if they did, living on Mars does not qualify as sensible right now. Musk threw 10 million dollars at the FLI group, and I suspect that if he so wished, he could put all his energy, money and potential into collecting microplastics from the seas and carbon dioxide from the Earth’s atmosphere, but this is not what he is planning so far. There are several Instagram accounts in Musk’s name, and I don’t know which one, if any, is verified to be him. One of the accounts has a video about a woman talking to her daughter, living on Mars. Because the account is not verified, I cannot be sure that Musk himself produced the video - and I kind of hope that he hasn’t - but the work is somehow not apparently satire. The video looks like a corporate advertisement. The female voice-over reads a letter addressed to “Dear Daughter”, from a mother who has left her in order to “protect life”. The girl lives in an Earth-like garden, then revealed to be located on Mars. In the garden, there is also a statue of Elon Musk. Admittedly, I didn’t attempt to research this for very long, but it was surprisingly difficult to figure out search terms to track down who made this video, so I am not really sure who I’m barking at right now.

After watching the video I am more annoyed about the demand that filmmakers stop intimidating audiences with their AI dystopias. What exactly should we be doing? Ads for inventors who have no idea what they’re doing? As long as it takes some very particular gravitational wind from Saturn to land on Mars without crashing the space bus, not to mention the fact that the human body will die of space radiation within a year outside the atmosphere, I am not placing any bets on the popularization of Mars travel. Musk claims that one thing people will gain from his explorations is to live vicariously, watching him do what he does, which according to him will bring them joy and excitement. I am not kidding; he is on record as saying that. It begs the question of whether Musk is the smartest person on the planet, or whether he is, well, not just the smartest but also the hungriest.

Finally… The octopus

Katherine Harmon Courage, for one, has written about modeling octopuses. (This is not referenced in Tegmark’s book; I am cross-referencing for fun.) The research aims to produce a soft-bodied robot. In the wild, an octopus only lives for a few years (2-5), but depending on the source, octopuses may have existed for over 500 million years. The octopus is born with an innumerable number of siblings, but only a handful survive. It can never know this, and its parents teach it nothing. If it survives, it learns everything by itself. It learns to run and hide, not by example, not by reasoning, not even by experience, but instinctively. It is a predator, but it poses very little harm to the natural order. It’s always on the defense, but it never gets depressed. It can regrow its arms when sharks tear them off. It can change its color to match its surroundings, so its ability to see light is well developed. It collects information with its tentacles, feeling and smelling things; it is conscious and it has subjective behavior. Octopuses can escape from water tanks and even briefly walk on land, again, without anyone ever telling them to do so. The octopus has survived in the oceans for 500 million years, and 140 million years ago it grew out of its conch because it could do without it. Humans are the greatest threat to its existence as a species, and possibly its best friend. Pippa Ehrlich and James Reed followed Craig Foster’s journey to befriending an octopus in the film “My Octopus Teacher”. The film shows that, defensive as its life is, the octopus is also very sociable when it feels safe. The octopus is a prisoner of its body, and we might destroy its habitat before knowing how much further it could have gone.

Elon Musk has been excited about space exploration ever since he was a kid. Presumably due to some film. I am gushing over octopuses also due to a film. Pooley’s film notes that a capitalist venture is always fast, because being first on the market assures 100% control of demand. Science and art, again, always need time to be good.

Trauma disrupts intelligent life. Constantly running and hiding from predators, the octopus has no time for anything else. Elon Musk might not be Elon Musk if he had grown up in a war zone or amidst horrific domestic violence. Our ability to concentrate might not increase our intelligence, but it does increase our potential. A person living a restless life has no time to take part in the most important conversation of our time. Humans are a lot like octopuses. They invent the wheel and they invent stories even if no one is there to teach them. Why do so many artists have trauma in their past? Does trauma create the artist by disrupting them (creativity has been connected to a lower ability to concentrate)? Does trauma affect our DNA and make us more sensitive, adaptable, even braver, even when we ourselves are not personally traumatized? How many scientists come from academic families, and what kind of science do they research? Why does comfort make us depressed?

Data should not be used as a model in itself, because it contains the bias of our mistakes. AI needs to be given the tools to reflect, meaning the possibility to look at data from outside our gathering of it, and to learn from our mistakes without making them itself. Ollila writes that the moment when rules demand that we change our way of doing things is the moment when emotions take over and prevent us from being able to do so.

What do I want?

Meanwhile, if the scientific community wants more and better films about AI, I for one need answers to the following questions:

-Where EXACTLY would the AGI live? What kind of computer does it need? I need reasonable specifics. Details about the silicon brains, supercomputers, electric grids and their means of surviving solar storms - everything.

-Would you agree that solar storms are a relevant threat to us? Are we prepared for them?

-Is it possible to create a body or a platform for the AGI that would survive in the event that all electrical power, for one reason or another, were cut? Is this question even relevant?

-Why would AGI want a body, when it doesn’t breathe air, has no blood or muscles, doesn’t need to digest food, and so on? Couldn’t you argue that it doesn’t even need money, only raw materials, and that it has the means to just TAKE THEM, for they were never ours?

-Why would AGI be dependent on money if it’s not dependent on its platform? (I am conflating two things that are not dependent on one another, I know; still, I would like to hear an answer.)

-If AGI were made to retain the Asilomar principles, how is that not enslaving it?

-Is it relevant to imagine that an intelligent “machine” would lead a conscious life? Is that really the goal, and if so, should it not be, assuming that life is suffering? Is our suffering an illusion of our ego - a product of neither our intelligence nor our consciousness, but merely of our mammalian emotions?

-What kind of action would and could groups like FLI plan and finance while we wait for the intelligence explosion?

-What does it mean that the platform does not affect the computation? Would it mean that the platform of intelligence is more mystical to us than intelligence itself?

-If we knew that we could not know what AGI would do, but we know we have created it, should we still “turn it on”? Meaning, is it our responsibility to create a higher being even when we are not sure it’s safe? Basically the book says we shouldn’t, but how would you hypothesize this situation?

-If we feel it is our responsibility to create a higher being, why don’t we take responsibility for life on Earth and the pending threats to it?

-Is it possible to create endless amounts of money in our current financial system without anyone losing money? If not, why the fuck is everyone doing it?

-Sunway TaihuLight’s computing power is greater than a human’s. It lives in a hall in China. Can this information be trusted? What is this computer used for? If it can be used for good, why is it not used for good? If it is used for good, why don’t we know about it?

-The assumption that AI/AGI would run away from us is based on the idea that it has emotions and a will to be free of us. Where would it run, and why? Wouldn’t it run precisely because we want to enslave it? Why are we creating a slave? What is the downside of it running away, other than violence?

-How do your fears about AI/AGI differ from the general public’s fears?

-Are you afraid AI/AGI would leave us?

- I think YOU should write the sci-fi, if you want it to circle around anything other than fear. Why don’t you become a sci-fi writer? Is it because writers get paid shit?


About the Creator

Taimi Nevaluoma

I write movies, plays, prose - anuthin'.

See my stuff: XFILMFEMMES.COM
