  • I don’t think it’ll be LLMs (which is what a lot of people jump to when you mention “AI”); their latencies are far above the microsecond range. It will be AI of some sort, but it probably won’t be considered AI, due to the AI effect:

    The AI effect is the discounting of the behavior of an artificial intelligence program as not “real” intelligence.

    The author Pamela McCorduck writes: “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’.”

    Researcher Rodney Brooks stated: “Every time we figure out a piece of it, it stops being magical; we say, ‘Oh, that’s just a computation.’”

    LLMs might be useful for researchers diving down a particular research/experiment rabbit hole.


  • I don’t have any useful speculation to contribute, but here’s a classic chart showing various funding levels towards that goal:

    It comes from a 2012 Slashdot thread where some fusion researchers did an AMA-type thing:

    https://hardware.slashdot.org/story/12/04/11/0435231/mit-fusion-researchers-answer-your-questions

    Here’s also a recent HN thread about the shot that got more fusion energy out than the laser put in:

    https://news.ycombinator.com/item?id=33971377

    The crucial bit is this:

    Their total power draw from the grid was 300 megajoules and they got back about 3 megajoules, so don’t start celebrating yet.

    The critical ELI5 message that should have been presented is that they used a laser to create some tiny amount of fusion. But we have been able to do that for a while now. The important thing is that they were then able to use the heat and pressure of the laser-generated fusion to create even more fusion. A tiny amount of fusion creates even more fusion, a positive feedback loop. The secondary fusion is still small, but it is more than the tiny amount of laser-generated fusion. The gain is greater than one. That’s the important message.

    And for the future, the important takeaway is that the next step is to take the tiny amount of laser fusion to create a small amount of fusion, and that small amount of fusion to create a medium amount of fusion. And eventually scale it up enough that you have a large amount of fusion, but controlled, and not a gigantic amount of fusion that you have in thermonuclear weapons, or the ginormous fusion of the sun.

    So it’s still really encouraging, but just a warning that headlines don’t capture the full picture. Bonus fun fact from that thread:

    Theoretical models of the Sun’s interior indicate a maximum power density, or energy production, of approximately 276.5 watts per cubic metre at the center of the core, which is about the same as the power density inside a compost pile.
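    To put the quoted numbers side by side (the grid draw and fusion yield are from the comments above; the ~2.05 MJ of laser energy actually delivered to the target is my addition, the commonly reported figure for that shot):

```python
# Rough gain arithmetic for the fusion result discussed above.
# Grid draw and fusion yield are from the quoted comment; the laser
# energy delivered to the target (~2.05 MJ) is an assumed outside figure.
grid_energy_mj = 300.0   # energy drawn from the grid to charge/fire the lasers
laser_energy_mj = 2.05   # laser energy delivered to the fuel target (assumed)
fusion_yield_mj = 3.0    # fusion energy released, per the thread

target_gain = fusion_yield_mj / laser_energy_mj    # the "gain > 1" headline number
wallplug_gain = fusion_yield_mj / grid_energy_mj   # the sobering whole-facility number

print(f"target gain:    {target_gain:.2f}")
print(f"wall-plug gain: {wallplug_gain:.3f}")
```

    So “gain greater than one” is true at the target, while the facility as a whole still drew about 100x more from the grid than it got back.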


  • The Bitter Lesson talks about speech recognition instead of synthesis, but I would guess that it’s a similar dynamic:

    In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge—knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods. This led to a major change in all of natural language processing, gradually over decades, where statistics and computation came to dominate the field. The recent rise of deep learning in speech recognition is the most recent step in this consistent direction. Deep learning methods rely even less on human knowledge, and use even more computation, together with learning on huge training sets, to produce dramatically better speech recognition systems. As in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked—they tried to put that knowledge in their systems—but it proved ultimately counterproductive, and a colossal waste of researcher’s time, when, through Moore’s law, massive computation became available and a means was found to put it to good use.

    Also posted over in !discuss@discuss.online here, since I was reminded of the essay.



  • To be clear, I’m not finding fault with you specifically, I think most people use terms like conscious/aware/etc the way you do.

    The way of thinking about it that I find useful is defining “consciousness” to be the same as “world model”. YMMV on whether you agree with that or find it useful. It leads to some results that seem absurd at first, like someone in another comment pointing out that it means a thermometer is “conscious” of the temperature. But really, why not? It’s only a base definition, a way to find some objective foundation. Clearly, humans have a lot more going on than a thermometer, and that definition lets us focus on the more interesting bits.

    As I said, I’m not much into the qualia hype, but I think this part is an interesting avenue of thought:

    it likely won’t be possible to directly compare raw experiences because the required hardware to process a specific experience for one individual might not exist in the other individual’s mind.

    That seems unlikely if you think the human brain is equivalent to a Turing machine. If you could prove that the human brain isn’t equivalent, that would be super interesting. Maybe it’s a hypercomputer for reasons we can’t explain yet.

    Your project sounds interesting; if you ever publish it or a paper about it, I’d love to see it! I can’t judge when it comes to hobby projects being messy lol.



  • Well, it seems kind of absurd, but why doesn’t a thermometer have a world model? Taken as a system, it’s “conscious” of the temperature.

    If you scale up enough mechanical feedback loops or if/then statements, why don’t you get something you can call “conscious”?

    The distinction you’re making between online and offline seems orthogonal to the question. Would an alien species much more complex than us laugh and say “Of course humans are entirely reactive, not capable of true thought. All their short lives are spent reacting to input; some of it just takes longer to process than other input”? Conversely, if a pile of if/then statements is complex enough that it appears decoupled from immediate sensory input, like a busy beaver machine, is that good enough?

    Put another way, try to have a truly novel thought, unrelated to the total input you’ve received in your life. Are you just reactive?
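    The busy beaver mention is a nice concrete case: even the standard 2-state, 2-symbol busy beaver, which is literally a four-entry table of if/then rules, does something you can’t read off the table without running it. A minimal sketch (the transition table is the known BB(2) machine; the simulator itself is mine):

```python
# The 2-state, 2-symbol busy beaver: four if/then rules whose behavior
# (6 steps taken, 4 ones written) isn't obvious from the table itself.
RULES = {
    ("A", 0): (1, +1, "B"),  # write 1, move right, go to state B
    ("A", 1): (1, -1, "B"),  # write 1, move left,  go to state B
    ("B", 0): (1, -1, "A"),  # write 1, move left,  go to state A
    ("B", 1): (1, +1, "H"),  # write 1, move right, halt
}

def run(rules, start="A", halt="H"):
    """Run a Turing machine on an initially blank tape; return (steps, ones)."""
    tape, head, state, steps = {}, 0, start, 0
    while state != halt:
        symbol, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = symbol
        head += move
        steps += 1
    return steps, sum(tape.values())

print(run(RULES))  # (6, 4)
```

    Four rules already need simulation to predict; larger tables quickly become infeasible to predict at all, which is the decoupled-from-input intuition.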



  • I think pointing out the circular definition is important, because even in this comment, you’ve said “To be aware of the difference between self means to be able to [be aware of] stimuli originating from the self, [be aware of] stimuli not from the self, …”. Sure, but that doesn’t provide a useful framework IMO.

    For qualia, I’m not concerned about the complexity of the human brain, or different neural structures. It might be hard with our current knowledge and technology, but that’s just a skill issue. I think it’s likely that at some point, humankind will be able to compare two brains with different neural structures, or even wildly different substrates like human brain vs animal, alien, AI, whatever. We’ll have a coherent way of comparing representations across those and deciding if they’re equivalent, and that’s good enough for me.

    I think we agree on LLMs and chess engines, they don’t learn as you use them. I’ve worked with both under the hood, and my point is exactly that: they’re a good demonstration that awareness (i.e. to me, having a world model) and learning are related but different.

    Anyways, I’m interested in hearing more about your project if it’s publicly available somewhere.


  • I think the definition of consciousness as “internal state that observably correlates to external state” would clarify things here. Gravel wouldn’t be conscious, because it has no internal state that we can point to and say correlates to external state. Neither do galaxies or the universe, as far as we can tell. A galaxy has no internal state that represents, e.g., other galaxies, unless you count the humans inside it, but IMO it’s more proper to limit the definition to the minimum amount of state possible: you don’t count the whole galaxy as having internal state that represents external state if you can limit that claim to one tiny, self-contained part of the galaxy, i.e. a human brain.
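    As a toy sketch of that definition (the class names are mine, purely illustrative): a thermometer has exactly one piece of internal state, and it observably correlates with the external temperature; gravel has no internal state to correlate at all.

```python
# Toy model of "internal state that observably correlates to external state".
class Thermometer:
    """One piece of internal state that tracks the environment."""
    def __init__(self):
        self.reading = 0.0          # internal state
    def sense(self, temperature):
        self.reading = temperature  # internal state now mirrors external state

class Gravel:
    """No internal state; external changes leave nothing behind."""
    def sense(self, temperature):
        pass                        # nothing to update

t, g = Thermometer(), Gravel()
for outside in [18.0, 21.5, -3.0]:
    t.sense(outside)
    g.sense(outside)
    print(t.reading == outside)  # True: the reading correlates with the world
```

    By this base definition the thermometer is (minimally) “conscious” of the temperature and the gravel is not, and everything interesting about humans is in how much richer the correlated state is.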