Their consciousness is arguable to begin with
Everyone’s is, fellow p-zombie
“Hohohoho, these young people are so dumb.” Repeating exactly what we all collectively hate about Boomers and Gen X
Honestly, responses like yours are more insufferable than the people you’re critiquing.
Would you like to elaborate or just lob an insult and leave
Young people are always ignorant, relatively speaking. They haven’t been around long enough to learn much, after all. However, the quality of education has been empirically declining over many decades, and mobile devices are extremely efficient accelerants of brain rot.
Young people thinking their Ai waifu is real
Boomers thinking America was great and not just racist and imperialist
Gen X being really entitled because they were raised by Boomers
Millennials being the best at everything
I agree 100%.
You can’t generalise a whole group based on a few individuals
Edit: “Young people” includes Gen Z, Alpha, and now Beta too. Just saying
Yeah, there’s a couple of millennial shitheads, but all in all, and especially in comparison, we’re the GOAT. Not trying to put anyone down or such, just stating facts.
We have middle sibling energy at best
In fairness, the word “conscious” has a range of meanings. For some, it is synonymous with certain religious ideas. They would be alarmed by the “heresy”. For others, it is synonymous with claiming that some entity is entitled to the same fundamental rights as a human being. Those would be quite alarmed by the social implications. Few people use the term in a strictly empiricist sense.
The batshit insane part of that is they could just make easy canned answers for thank yous, but nope…IT’S THE USER’S FAULT!
One would think if they’re as fucking smart as they believe they are they could figger a way around it, eh??? 🤣
I wasn’t aware the generation of CEOs and politicians was called “Gen Z”.
The article targets its study on Gen Z but… yeah, the elderly aren’t exactly winners here, either.
We have to make the biggest return on our investments, fr fr
Lots of people lack critical thinking skills
This is an angle I’ve never considered before, with regards to a future dystopia with a corrupt AI running the show. AI might never advance beyond what it is in 2025, but because people believe it’s a supergodbrain, we start putting way too much faith in its flawed output, and it’s our own credulity that dismantles civilisation rather than a runaway LLM with designs of its own. Misinformation unwittingly codified and sanctified by ourselves via ChatGeppetto.
The call is coming from inside the ~~house~~ mechanical Turk!
That’s the intended effect. People with real power think this way: “where it does work, it’ll work and not bother us with too much initiative and change, and where it doesn’t work, we know exactly what to do, so everything is covered”. Checks and balances and feedbacks and overrides and fallbacks be damned.
Humans are apes. When an ape gets to rule an empire, it remains an ape and the power kills its ability to judge.
They call it hallucinations like it’s a cute brain fart, and “Agentic” means they’re using the output of one to be the input of another, which has access to things and can make decisions and actually fuck things up. It’s a complete fucking shit show. But humans are expensive so replacing them makes line go up.
I mean, it’s like none of you people ever consider how often humans are wrong when criticizing AI.
How often have you looked for information from humans and have been fed falsehoods as though they were true? It happens so much we’ve just gotten used to filtering out the vast majority of human responses because most of them are incorrect or unrelated to the subject.
Why are you booing them? They’re right.
to be honest they probably wish it was conscious because it has more of a conscience than conservatives and capitalists
I think an alarming number of Gen Z internet folks find it funny to skew the results of anonymous surveys.
Yeah, what is it with Gen Z? Millennials would never skew the results of anonymous surveys
Right? Just insane to think that Millennials would do that. Now let me read through this list of Time Magazine’s top 100 most influential people of 2009.
Why are we so quick to assume machines cannot achieve consciousness?
Unless you can point me to the existence of a spirit or soul, there’s nothing that makes our consciousness unique from what computers are capable of accomplishing.
This is not claiming machines cannot be conscious ever. This is claiming machines aren’t conscious right now.
LLMs are like databases with a huge list of distances allowing you to find the “shortest” (aka most likely) distance to the next word. It’s literally little more than that.
One day true AI might exist. One day perhaps… But not today.
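The “table of next-word likelihoods” picture above can be sketched in a few lines. This is a toy bigram model over made-up text, not how a real LLM works internally (those use deep neural networks rather than a literal lookup table), but it illustrates the “most likely next word” idea:

```python
# Toy sketch of the "likelihood table" framing: a bigram model that
# picks the statistically most probable next word. The corpus is
# invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # "cat" ("cat" follows "the" twice; "mat"/"fish" once)
```

Real models predict over probability distributions conditioned on the whole context, not just the previous word, but the output step is the same in spirit: rank candidate next tokens by likelihood.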
I don’t doubt the possibility but current AI tech, no.
It’s barely even AI. The amount of faith people have in these glorified search engines and image generators Lmao
It’s literally peaks and valleys of probability based on linguistic rules. That’s it. It is what’s referred to as a “Chinese room” in thought experiments.
I don’t have a leg to stand on calling anything “barely AI” given what us gamedevs call AI. Like a 1d affine transformation playing pong.
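For anyone wondering what that joke refers to, here is a sketch of the kind of thing gamedevs call “AI”: a pong paddle whose entire policy is a 1-D affine transformation of the ball’s vertical position. All names and numbers are invented for illustration:

```python
# The whole "AI": target = scale * ball_y + offset, a 1-D affine map.

def paddle_target(ball_y, scale=1.0, offset=0.0):
    """Where the paddle wants to be, as an affine function of the ball."""
    return scale * ball_y + offset

def step_paddle(paddle_y, ball_y, speed=5.0):
    """Move the paddle toward the affine target, capped by max speed."""
    delta = paddle_target(ball_y) - paddle_y
    return paddle_y + max(-speed, min(speed, delta))

y = 0.0
for ball_y in [30.0, 30.0, 30.0]:
    y = step_paddle(y, ball_y)
print(y)  # 15.0 after three speed-capped steps toward 30
```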
It’s beating your ass, there, isn’t that intelligent enough for you?
A calculator can multiply 2887618 * 99289192 faster than you ever could. Does that make a calculator intelligent?
It’s not an agent with its own goals, so by the gamedev definition, no. By calculator standards, also no. But just as a washing machine with sufficient smarts is called intelligent, it is, in principle, possible to call a calculator intelligent if it’s smart enough. WolframAlpha certainly qualifies. And not just the newfangled LLM-enabled stuff: I used Mathematica back in the early 00s and it blew me the fuck away. That thing is certainly better at finding closed forms than me.
I tried to explain a directory tree to one of them (a supposedly technical resource) for twenty minutes and failed. They’re idiots. They were ruined by baby tech like iPhones, iPads, and now AI.
Anyone can understand a directory tree. Not everyone is smart enough to explain it.
They were designing functionality that contained directory trees and didn’t understand directory trees. How is it my responsibility that this person is not qualified to do their own job?
If they designed a directory tree without knowing the term for it, it sounds like they understand what a directory tree is, they just don’t know the word, and you couldn’t explain the word properly.
They didn’t “design a directory tree” either. They were designing screens for a thing that sits on top of a directory tree, and they didn’t understand the underlying concept.
It was likely because they’re used to the abstraction that iPhones and iPads provide, where the underlying directory structures are largely hidden from users.
I’m assuming part of it is because you’re a bad teacher as well.
Just going off of my life experience, I notice the vast majority of people are bad at teaching and then blame the pupil.
I’m not a teacher. I thought I was in a design meeting not teaching remedial computers to someone who is supposed to be working in the industry.
Yeah, but you were still in a teaching position.
You probably did a bad job because you’re not skilled in teaching. That’s what I meant by saying you’re a bad teacher.
I could’ve said you’re “bad at teaching” and that may have made things clearer for you, my mistake.
Yeah, but you were still in a teaching position.
No, I was in a meeting with a supposedly technical person.
I’ve been in the industry for a while, and I’ve even mentored people. These gaps in basic computer knowledge are new and they’re also not my problem. I was not this person’s mentor or supposed to be teaching them anything.
They could’ve been exceptionally inept, and even if they were, I’m still going to stick with my initial conclusion that you’re bad at teaching.
It’s okay, most people are and you don’t have to be ashamed of it. Everyone won’t be on your side when you say it’s someone else’s fault that they couldn’t learn from you effectively.
If I knew I was teaching remedial computers that day, I would’ve come with a lesson plan.
I’m going to stick with my initial conclusion that you love to blame the “teacher” even when they aren’t in any way a teacher.
At some point in the mid-late 1990s, I recall having a (technically-inclined) friend who dialed up to a BBS and spent a considerable amount of time pinging and then chatting with Lisa, the “sysadmin’s sister”. When I heard about it, I spent quite some time arguing with him that Lisa was a bot. He was pretty convinced that she was human.
Honestly, I welcome this future.
I’d rather discuss with bots at this point than rubes.
Not sure what’s alarming about that. It’s a bit early to worry about an AI Dred Scott, no?
It’s alarming people are so gullible that a glorified autocorrect can fool them into thinking it’s sapient
Is it still passing the Turing test if you don’t think either one is human?
“how dare you insult my robot waifu?!”
That’s a matter of philosophy and what a person even understands “consciousness” to be. You shouldn’t be surprised that others come to different conclusions about the nature of being and what it means to be conscious.
Are we really going to devil’s advocate for the idea that avoiding society and asking a language model for life advice is okay?
It’s not devil’s advocate. They’re correct. It’s purely in the realm of philosophy right now. If we can’t define “consciousness” (spoiler alert: we can’t), then it’s impossible to determine with certainty one way or another. Are you sure that you yourself are not just fancy auto-complete? We’re dealing with shit like the hard problem of consciousness and free will vs determinism. Philosophers have been debating these issues for millennia, and we’re not much closer to a consensus now than when they started.
And honestly, if the CIA’s papers on The Gateway Analysis from Project Stargate about consciousness are even remotely correct, we can’t rule it out. It would mean consciousness precedes matter, and support panpsychism. That would almost certainly include things like artificial intelligence. In fact, then the question becomes whether it’s even “artificial” to begin with if consciousness is indeed a field that pervades the multiverse. We could very well be tapping into something we don’t fully understand.
The only thing one can be 100% certain of is that one is having an experience. If we were a fancy autocomplete then we’d know we had it 😉
What do you mean? I don’t follow how the two are related. What does being fancy auto-complete have anything to do with having an experience?
It’s an answer to whether one can be sure they’re not just a fancy autocomplete.
More directly: we can’t be sure we aren’t some autocomplete program in a fancy computer, but since we’re having an experience, we are at least conscious programs.
When I say “how can you be sure you’re not fancy auto-complete”, I’m not talking about being an LLM or even the simulation hypothesis. I’m saying that the way LLMs’ neural networks are structured is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is: how do you know that the weights in your own nervous system aren’t causing any given stimulus to always produce a specific response along the most heavily weighted pathways? That’s how auto-complete works. It’s just predicting the most statistically probable response based on the input after it’s filtered through the neural network. In our case it’s sensory data instead of a text prompt, but the mechanics remain the same.
And how do we know whether or not the LLM is having an experience? Again, this is the “hard problem of consciousness”. There’s no way to quantify consciousness, and it’s only ever experienced subjectively. We don’t know the mechanics of how consciousness fundamentally works (or at least, if we do, it’s likely still classified). Basically what I’m saying is that this is a new field and it’s still the wild west. Most of these LLMs are still black boxes whose inner workings we are only barely beginning to understand, just as we are only barely beginning to understand our own neurology and consciousness.
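The “weighted pathways” idea can be made concrete with a toy example. This is an invented two-neuron sketch, nowhere near a real nervous system or a transformer, but it shows how fixed weights map a given stimulus to the same response every time:

```python
# Toy "weighted pathways" model: with fixed weights, a given stimulus
# always activates the same response. Weights and labels are made up.

# response "neurons" with weights over [sudden_noise, food_smell] features
weights = {
    "flee": [0.9, -0.2],
    "eat":  [-0.3, 0.8],
}

def respond(stimulus):
    """Pick the response whose weighted sum over the stimulus is largest."""
    scores = {r: sum(w * x for w, x in zip(ws, stimulus))
              for r, ws in weights.items()}
    return max(scores, key=scores.get)

print(respond([1.0, 0.1]))  # same stimulus, same pathway, same answer: flee
```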
If it was actually AI sure.
This is an unthinking machine algorithm chewing through mounds of stolen data.
That is certainly one way to view it. One might say the same about human brains, though.
What is thought?
To be fair, so am i
Consciousness is an emergent property; self-awareness and singularity are generally its key defining features.
There is no secret sauce to llms that would make them any more conscious than Wikipedia.
Consciousness comes from the soul, and souls are given to us by the gods. That’s why AI isn’t conscious.
How do you think god comes into the equation? What do you think about split brain syndrome in which people demonstrate having multiple consciousnesses? If consciousness is based on a metaphysical property why can it be altered with chemicals and drugs? What do you think happens during a lobotomy?
I get that evidence based thinking is generally not compatible with religious postulates, but just throwing up your hands and saying consciousness comes from the gods is an incredibly weak position to hold.
I respect the people who say machines have consciousness, because at least they’re consistent. But you’re just like me, and won’t admit it.
I agree, there are two consistent points of view with regards to consciousness IMO: either it is an emergent property of systems regardless of what they are made of, so there is no reason machines couldn’t be conscious even if none now are; or consciousness is a supernatural quantity that isn’t a property of the matter and energy that science can study.
I disagree with the latter, but it is far more consistent than people who claim to be materialists yet insist there is something magical about the matter in brains that cannot be replicated by other forms of matter.
secret sauce
What would such a secret sauce look like? Like, what is it in humans, for example?
Likely a prefrontal cortex, the administrative center of the brain and generally host to human consciousness, as well as a dedicated memory system with learning plasticity.
Humans have systems that mirror LLMs, but LLMs are missing a few key components to be precise replicas of human brains, mostly because modeling them is computationally expensive and the goal is different.
Some specific things the brain has that LLMs don’t directly account for are different neurochemicals (LLMs favor a single floating-point value per neuron), synaptogenesis, neurogenesis, synapse fire travel duration and myelin, neural pruning, potassium and sodium channels, downstream effects, etc. We use math and gradient descent to somewhat mirror the brain’s Hebbian learning but do not perform precisely the same operations using the same systems.
In my opinion, having a dedicated module for consciousness would bridge the gap, possibly while accounting for some of the missing characteristics. Consciousness is not an indescribable mystery; we have performed tons of experiments and gathered a whole lot of information on the topic.
As it stands, LLMs are largely reasonable approximations of the language center of the brain but little more. It may honestly not take much to get what we consider consciousness humming in a system that includes an LLM as a component.
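For reference, the Hebbian rule mentioned above (“neurons that fire together wire together”) and gradient descent can be sketched side by side with a single weight. The numbers and learning rates are made up; this only shows that the two update rules are related but not identical:

```python
# Hebbian rule: weight grows with the product of pre- and post-synaptic
# activity, delta_w = lr * x * y. Gradient descent instead follows the
# error gradient toward a target. Single linear neuron, invented numbers.

def hebbian_update(w, x, lr=0.1):
    y = w * x                     # post-synaptic activity
    return w + lr * x * y         # no target: activity alone drives growth

def gradient_update(w, x, target, lr=0.1):
    y = w * x                     # prediction
    grad = 2 * (y - target) * x   # d/dw of squared error (y - target)**2
    return w - lr * grad          # step against the gradient

w_hebb = w_grad = 0.5
for _ in range(3):
    w_hebb = hebbian_update(w_hebb, x=1.0)               # grows without bound
    w_grad = gradient_update(w_grad, x=1.0, target=1.0)  # converges toward 1.0
```

The Hebbian weight keeps growing as long as the neuron is active, while the gradient-descent weight moves toward whatever target the error signal defines; biological brains add mechanisms (like the pruning and neurochemistry listed above) that plain gradient descent does not.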
a prefrontal cortex, the administrative center of the brain and generally host to human consciousness.
That’s an interesting take. The prefrontal cortex in humans is proportionately larger than in other mammals. Is it implied that animals are not conscious on account of this difference?
If so, what about people who never develop an identifiable prefrontal cortex? I guess, we could assume that a sufficient cortex is still there, though not identifiable. But what about people who suffer extensive damage to that part of the brain. Can one lose consciousness without, as it were, losing consciousness (ie becoming comatose in some way)?
a dedicated module for consciousness would bridge the gap
What functions would such a module need to perform? What tests would verify that the module works correctly and actually provides consciousness to the system?
They also are the dumbest generation, with a COVID education handicap and the least technological literacy in terms of understanding how things work. They have grown up with technology refined enough that they never needed to learn troubleshooting skills beyond “reboot it”.
How they don’t understand that an LLM can’t be conscious is not surprising. LLMs are a neat trick, but far from anything close to consciousness or intelligence.
How they don’t understand that an LLM can’t be conscious is not surprising
It’s because they’re all atheists! They don’t know that souls are granted to humans by the gods.
Have fun with your back problems!
It will happen to YOU!