Deep Tech Dive #12 | Sonia Joseph, ML Researcher & Stealth Founder

Andrew Kirima
The War Against Entropy, A Continuing Battle on a Metaphysical Plane

For this Deep Tech Dive, I interviewed Sonia Joseph, founder of a natural language processing (NLP) company that’s in stealth mode. She’s a machine learning (ML) researcher and engineer interested in computational neuroscience, statistics, mathematics, physics, and complex systems. Before her company, she worked at Janelia Research Campus and Princeton Neuroscience Institute.

This DTD is a bit different from past ones. Instead of revolving the entire conversation around Sonia’s company, we get a deeper dive into her philosophies and her peculiar yet clever thinking on metaphysics: a quest to design a new type of transhumanism.

Key Takeaways:

  1. Entropy is a measure of disorder that affects all aspects of our daily lives, and if left unchecked this disorder grows over time. Thus the war against entropy is necessary to move toward a point of complexity and unification called the Omega Point.
  2. How ready are we for technological advancements like BCIs? With growing problems like those highlighted in The Social Dilemma, where social media companies are doing too good a job of retaining our attention, are we really prepared for technology that is literally linked to our brains?

“I’m inspired by thinkers like Kurzweil, as one who stands on the shoulders of giants. But I want to create something that is kind of novel and hasn’t been seen before.”


This interview was edited and condensed for clarity.

Can you tell me about yourself and your background?

Not too long ago, I was a researcher in theoretical and computational neuroscience at Janelia Research Campus, where I was looking at deep learning (DL) models of the mouse visual cortex. Before that, I was an ML engineer in the Bay Area building information retrieval pipelines at a search engine startup. Currently, I’m about to head to the ML institute Mila to work with their CS professor, Blake Richards, to design new algorithms while building my startup that’s in stealth mode. Finally, I have a background as an anonymous memoirist and science fiction writer, which scaffolds much of my thinking and research direction.

I am interested in finding tangible ways to answer questions concerning metaphysics, questions along the lines of: What is the nature of intelligence? This plays out in my interest in private research companies like DeepMind and OpenAI, which act as artificial general intelligence (AGI) bets, and in inventing new algorithms inspired by the brain.

What are your thoughts on Metaphysics?

Our metaphysical questions are broader than the nature of intelligence. Currently, our best representation of our universe’s existence is Computationalism, or Informationalism — the idea that we are instantiated on a quantum computer, pushed by thinkers such as Edward Fredkin and David Deutsch. Our metaphysical metaphors, in many ways, are only as good as our technology. It’s not even clear what questions we are supposed to be asking, so there’s plenty of ground to cover in conceptual progress as well.

Credit: Morten Tolboll

In the frame that we are living in a matrix or simulation, that our reality is not quite real, I see three levels:

  1. Popping out of this simulation/matrix into the one above it.
  2. Improving the current simulation/matrix.
  3. Popping down into a new matrix/simulation that we create for ourselves.

This is a crude decomposition in some ways, but it’s useful until we think of something better.

In popping out of the simulation, we draw upon mathematics, theoretical physics, and computationalism to understand our universe in a far deeper way and see if we can access and manipulate its laws, or even create new universes. This question needs more conceptual progress until we frame it properly, as I do not think a quantum computer is the best metaphor even though it is one of the closest we have currently. This may be the least tractable of the three paths, but in many ways, it’s the most interesting. We can see depictions of this theory in the clever short story Crystal Nights by Greg Egan.

In improving the existing simulation, we continue with our progress in deep technologies (space colonization, longevity, brain augmentation, AI, nanotechnology, clean energy, X-risk reduction, etc). This is the most tractable path, and extremely useful, in that the technology developed may transfer to answering the first question.

And finally, there is the third path, which is creating a new matrix and popping down into it. This looks like video game design, VR/AR, and mind uploading. While this path is fascinating, in some ways it’s the least interesting to me, as it does not directly answer our metaphysical questions.

How do you plan to utilize all of this knowledge you have on metaphysics?

The goal is a lifelong one, and there are many parallel paths:

  1. Staying sharp in physics and conceptual knowledge. I read and re-read Richard Feynman’s lectures, which are like poetry. Feynman presents physics more creatively than the average textbook. I also read philosophers and physicists such as Pierre Teilhard de Chardin, Frank J. Tipler, Deutsch, and Rudolf Carnap to make deeper conceptual progress so we can frame our questions more appropriately.
  2. Exploring and road-mapping areas of technological progress. I stay up-to-date with the space sector, which I am entering within the next 10 years. Space Mission Analysis and Design is a fantastic and comprehensive resource on this.
  3. Starting a company to mobilize resources toward more metaphysical goals. This is the thesis of my first company, which will be one of many.

Can you tell me more about your theses on semantic representation in the human brain?

In college, I wrote my senior thesis at the Princeton Neuroscience Institute on semantic representations in the human cortex. We were wondering how meaning is represented in the brain — mapping electrocorticography (ECoG) responses to word vectors in what we call a “neural encoding model.” So if I say the word “cat,” we can ask: what does the word cat mean? What circuits are active? We were looking at intracranial (within the skull) recordings of the brain and creating models of these representations, which were quite distributed throughout the cortex.
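For readers unfamiliar with the term, here is a minimal sketch of what a neural encoding model generally looks like in code. This is not Sonia’s thesis pipeline; the library choices, array shapes, and ridge penalty are illustrative assumptions. Word vectors are linearly regressed onto per-electrode responses, and held-out correlation indicates how well semantic features predict the neural signal.

```python
# Minimal neural encoding model sketch (random placeholder data, not real ECoG).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, embed_dim, n_electrodes = 500, 300, 64

X = rng.standard_normal((n_words, embed_dim))     # word vectors for each stimulus word
Y = rng.standard_normal((n_words, n_electrodes))  # per-electrode responses to each word

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Linear map from semantic space to neural space; the ridge penalty limits overfitting.
model = Ridge(alpha=10.0)
model.fit(X_tr, Y_tr)
pred = model.predict(X_te)

# Score each electrode by the correlation between predicted and observed responses.
scores = [np.corrcoef(pred[:, i], Y_te[:, i])[0, 1] for i in range(n_electrodes)]
print(f"mean held-out correlation: {np.mean(scores):.3f}")
```

With real recordings, the held-out correlations indicate which electrodes carry semantic information, which is one way such distributed representations get mapped across the cortex.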

Referring back to your interests in private research, can you explain what differentiates private research from public?

Right, so it ends up being a question of funding. Who are you getting funding from: the government, or revenue that you’re generating? Are you getting funding from investment firms or a couple of wealthy individuals? All of these are various forms of funding structures. But I’m agnostic as to the structure; I want the freedom to experiment with ideas that might be considered moonshots. I had the privilege of working at Janelia Research Campus, a private research campus funded by the Howard Hughes Medical Institute (HHMI).

I assume you’re trying to replicate that with your ML startup, which I see is in stealth mode. What can you tell us about this project?

We are using all the advancements in deep NLP to innovate the way we read books and long-form content.

What led you to start this project?

I love books, so I began experimenting with my digital library using various algorithms and noticed some extremely interesting results. It is fascinating and useful, and it can be scaled. The product I’m making iterates toward the most incredible reading experience, to do books justice and unlock insights from the enormous corpus of human knowledge we have collected over thousands of years.

When do you plan to come out of stealth mode?

Sometime in early 2022. We will be taking the next few months to continue with R&D.

When you speak about being a “soldier in the war against entropy,” what do you mean? I first learned about entropy from a physics standpoint, from my classes in thermodynamics to Christopher Nolan’s Tenet. But now I’ve started to think about it from a philosophical point of view because of tech visionaries such as Ray Kurzweil and Josh Wolfe. What is your take on it?

The idea goes back to old religious impulses. To some degree, we have certain biological circuits that act as “religious circuits” and are very conducive to adopting religious memes. It’s written about in books like Why God Won’t Go Away: Brain Science and the Biology of Belief. I’m inspired by scientists and philosophers, like Teilhard de Chardin and Tipler, who attempt to align religion with modern progress in science and technology. So how do we update these canonical religions or create religions from scratch? This leads to questions such as: What belief system do you have? What is worth doing? Is there a deeper reason to start a company than just making money or status?

One fascinating idea is the Omega Point, which was first proposed by Teilhard de Chardin, and later popularized by Tipler and Deutsch.

What is the Omega Point?

The Omega Point was initially an idea in physics that the universe is spiraling toward a point of complexity and unification. We have the origin of life, then increasingly intelligent life-forms, then cities, then civilizations — environments of increasing degrees of complexity, which, in turn, order us. The idea is that this complexity will teleologically increase until a point called the Omega Point. The aesthetic gives a sense of progression and hope, so I am very interested in this hopeful narrative and would like my projects to be in line with it. The Omega Point bears similarity to the Big Bang: a beginning and an ending point to the universe is aesthetically pleasing.

However, evidence increasingly favors that the universe is continuously expanding and that the expansion is actually accelerating. There are also some issues with the Second Law of Thermodynamics. So, with the Omega Point, you have the aesthetic component, that we’re spiraling toward a point of increasing complexity, a beauty which I would like to preserve, and then you have the actual physics. Ideally, the actual physics and the aestheticization of the physics would continue to sync up, which stopped happening for the major religions. So perhaps the concept of the Omega Point can be innovated on once again, to line up with the Second Law of Thermodynamics and our knowledge of a continuously expanding universe. Ideally, spiritual motivations would match empirical reality, the physics of what’s actually happening, though that is a bit of an open question.
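For reference, the textbook statement being wrestled with here is standard physics rather than anything specific to Teilhard or Tipler: Boltzmann’s entropy and the Second Law for an isolated system.

```latex
% Boltzmann entropy of a macrostate with \Omega accessible microstates,
% and the Second Law: the entropy of an isolated system never decreases.
S = k_B \ln \Omega, \qquad \Delta S_{\text{isolated}} \ge 0
```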

This is what I mean by tapping into religious impulses. When I say “soldier in the war against entropy,” I’m fighting for an increase in complexity and beauty, as opposed to nothingness. I’m fighting against dysteleology, a doctrine of purposelessness in nature. This is a philosophical aesthetic, and not necessarily the only correct one to have, but it’s worked well in my life and led to interesting outcomes when acted upon.

Is the Omega Point similar to the Technological Singularity?

Yes, if you are referring to Kurzweil’s idea of the AI singularity, they are often conflated. Teilhard’s idea is a bit broader than the singularity through AI — we could have a different type of singularity than an intelligence singularity, like a consciousness singularity, or a singularity through a different means.

Jürgen Schmidhuber is a polarizing figure in the AI community who is also eccentric and interesting. He discusses the idea of the singularity as the Omega Point, which we could bootstrap through some sort of meta-learning algorithm that keeps ordering the universe toward a point of increasing complexity. He talks about these ideas in his 1987 thesis: the idea that intelligence may go against intuitive notions of entropy. However, there is still much to be tested and formalized here; this must be reconciled with the Second Law of Thermodynamics.

You seem well versed in Ray Kurzweil’s teachings. What do you think of the predictions he has made?

For every movement, you have the philosophical content and then the cultural associations that the movement accrues. As a teenager, I read a lot of literature from Effective Altruism, LessWrong, AI safety, and futurists like Kurzweil on the internet. I got exposed to ideas like X-risk, cryonics, longevity, being extremely rational, and gaining some state of higher enlightenment through technology. Standing on the shoulders of giants, I am inspired by thinkers like Kurzweil. But I’m interested in: What does transhumanism look like in 2021? I want to create something novel, something that hasn’t been seen before.

Credit: Miquel Casas

The brand of transhumanism I want to go for is one where we no longer use the word, where we have transcended it. Of course, there are many different flavors of transhumanism, but I think the worst flavor carries the baggage of denying your humanity, of not conditioning on the fact that we are ultimately human, with needs like physical and emotional health, exercise, family, relationships, and a stable community. Some variants of transhumanism end up not being well reconciled with our brainstems. I am interested in drawing upon other movements to flesh out the practical and emotional aspects of the philosophy.

Second, I want to go a very practical route, a very business-oriented route like the one Elon Musk took. That’s where I came up with this sort of tongue-in-cheek idea of Metaphysics Tycoon. You have these old video games from the 2000s like Zoo Tycoon, RollerCoaster Tycoon, and Railroad Tycoon. What if you created a business empire that was immensely profitable, but then channeled the profits into companies that began questioning the nature of our reality or pushing on it? That’s the transhumanist route I want to take, and the company I’m working on is the start of that.

Wow, we need a lot more people like you and a lot fewer social media tycoons. The future waits for no one, and it’s coming faster than ever before. That is the Law of Accelerating Returns, as Kurzweil predicted. We need to accept these technological progressions or something cataclysmic could happen. I’m afraid Martec’s Law has proven to be true, at least for the US. However, I understand that it’s hard to adopt innovations faster because of ethical concerns.

When you’re dealing with something as dramatic as metaphysics, integrity and ethics become front and center. I remember reading the AI safety community’s work when it bloomed in the 2010s — Nick Bostrom’s stuff, which is this prophecy of doom of malevolent or poorly designed AI. It ends up being a problem of mechanism design. This kind of thinking can be generalized to every area we’re exploring, for example, brain-computer interfaces (BCIs). What are the safety implications of those? I would love to see more popular and rigorous work on safety implications, so we can build these concerns into the device.

Regarding other deep tech industries, which ones do you believe will become commercially viable this decade?

Potentially BCIs, although in some ways I don’t want them to be ready yet. When I’m on certain websites on the internet, I don’t feel in control. I don’t feel in control of my own behavior; rather, I am a dopamine circuit being optimized by the recommender systems to stay on the website for as long as possible. We haven’t humanely designed entire swathes of the internet.

A few months ago, I cut out all technology made after the 2000s for a month. I used only 90s tech like CD players and landline phones. It was super fun and increased my happiness. Last year, I visited an Amish county in Ohio, looking for a puppy, since they breed dogs. I started talking to them and understanding their relationship to technology. I like the idea of not being as extreme as a Luddite, a person opposed to new technology, but rather being a digital anthropologist, a technological anthropologist, who returns to past ways of being. And by going back, you get a really good sense of how present technology is affecting you. So that was a really insightful experience.

Obviously, my wariness of BCIs ignores their immense medical benefits, and how they are absolutely miraculous. But after the addictiveness of the internet, it is wise to proceed with caution.

I agree. Not too long ago, I interviewed the founder of a BCI company who’s building an EEG headband to enhance gaming. From their early iterations, they found out how bad it can really get if they don’t restrict the data they capture. That’s why I’ve started to put more thought into risk mitigation and ethical implications, especially in deep tech. Most people tend to be nearsighted, so they don’t worry about what could happen in the future until it smacks them in the face and something really bad happens.

You’re right, Americans tend to think short-term but not long-term, like on 1,000-year timescales, which is more common in Russia and Europe. I suspect those countries have a stronger sense of their own history, which goes back thousands of years, and so they have a better sense of their future. After talking to people in European academic circles, I often come away with a better sense of the next 1,000 years.

How can our readers follow you on your fight against Entropy?

I am on Twitter at @soniajoseph_ and will be releasing a set of essays soon. We are launching a discussion board shaping this next flavor of transhumanism. I am always looking for intellectual and technical collaborators and full-stack engineers. Feel free to shoot me a message. Perhaps we can create something together — whether turning theoretical physics into a livable philosophy or co-founding a company in space development.
