What do you think about Artificial Intelligence?

CRAP
Total votes: 55 (87%)
NOT CRAP
Total votes: 8 (13%)
Total votes: 63

Re: Thing: Artificial Intelligence

181
Geiginni wrote: Wed Apr 16, 2025 5:19 pm
andyman wrote: Fri Mar 28, 2025 6:16 am The current AI hype is 95% just marketing bullshit/scam to sell whatever crappy software or service companies are putting out.
Remember in the early 2000s how everything was "HD", "High Definition", "Hi-Def" or whatever? Even if it wasn't related to video display or imaging systems.
This is absolutely aligned with what I've been saying.

"AI" has been a thing since the fucking ENIAC.

It's just that the algorithms the brainiacs have been envisioning for a LONG time are now executable in real time. They don't take hours anymore.

Like all of those other things (HD, etc.), "AI" will become part of the norm, but it won't be the hype forever.

Pretty soon, the hype will be all about something like virtual penises that can electromagnetically induce orgasms.
Last edited by jfv on Wed Apr 16, 2025 5:40 pm, edited 1 time in total.
jason (he/him/his) from volo (illinois)

Re: Thing: Artificial Intelligence

182
andyman wrote: Wed Apr 16, 2025 5:40 pm I say that just to clarify that what they're proposing (and claiming to sell) - artificial general intelligence - is currently nowhere NEAR achievable. It's almost on par with colonising Mars.
Absolutely. That's the other thing. The term "AI" has been bastardized to the point where it's even less meaningful than something like "organic".
jason (he/him/his) from volo (illinois)

Re: Thing: Artificial Intelligence

183
Leeplusplus wrote: Wed May 19, 2021 11:33 pm Jacques Ellul’s 76 Reasonable Questions to Ask About Any Technology

I. SOCIAL
Does it serve community?
Does it empower community members?
How does it affect our perception of our needs?
Is it consistent with the creation of a communal, human economy?
What are its effects on relationships?
Does it undermine conviviality?
Does it undermine traditional forms of community?
How does it affect our way of seeing and experiencing the world?
Does it foster a diversity of forms of knowledge?
Does it build on, or contribute to, the renewal of traditional forms of knowledge?
Does it serve to commodify knowledge or relationships?
To what extent does it redefine reality?
Does it erase a sense of time and history?
What is its potential to become addictive?

II. MORAL
What values does its use foster?
What is gained by its use?
What are its effects beyond its utility to the individual?
What is lost in using it?
What are its effects on the least advantaged in society?

III. ETHICAL
How complicated is it?
What does it allow us to ignore?
To what extent does it distance agent from effect?
Can we assume personal, or communal responsibility for its effects?
Can its effects be directly apprehended?
What ancillary technologies does it require?
What behavior might it make possible in the future?
What other technologies might it make possible?
Does it alter our sense of time and relationships in ways conducive to nihilism?

IV. PRACTICAL
What does it make?
Who does it benefit?
What is its purpose?
Where was it produced?
Where is it used?
Where must it go when it’s broken or obsolete?
How expensive is it?
Can it be repaired?
By an ordinary person?

V. VOCATIONAL
What is its impact on craft?
Does it reduce, deaden, or enhance human creativity?
Is it the least imposing technology available for the task?
Does it replace, or does it aid human hands and human beings?
Can it be responsive to organic circumstance?
Does it depress or enhance the quality of goods?
Does it depress or enhance the meaning of work?

VI. METAPHYSICAL
What aspect of the inner self does it reflect?
Does it express love?
Does it express rage?
What aspect of our past does it reflect?
Does it reflect cyclical or linear thinking?

VII. POLITICAL
Does it concentrate or equalize power?
Does it require, or institute a knowledge elite?
Is it totalitarian?
Does it require a bureaucracy for its perpetuation?
What legal empowerments does it require?
Does it undermine traditional moral authority?
Does it require military defense?
Does it enhance, or serve military purposes?
How does it affect warfare?
Is it massifying?
Is it consistent with the creation of a global economy?
Does it empower transnational corporations?
What kind of capital does it require?

VIII. AESTHETIC
Is it ugly?
Does it cause ugliness?
What noise does it make?
What pace does it set?
How does it affect the quality of life (as distinct from the standard of living)?

IX. ECOLOGICAL
What are its effects on the health of the planet and of the person?
Does it preserve or destroy biodiversity?
Does it preserve or reduce ecosystem integrity?
What are its effects on the land?
What are its effects on wildlife?
How much, and what kind of waste does it generate?
Does it incorporate the principles of ecological design?
Does it break the bond of renewal between humans and nature?
Does it preserve or reduce cultural diversity?
What is the totality of its effects, its “ecology”?
Jimbo wrote: Tue Mar 28, 2023 8:29 pm AI will only be as moral and ethical as the people who program it. Thus it will be a sociopathic nightmare technology that will destroy us all, because the people who program it are employed by sociopathic corporations run by sociopathic tech bros.
Kniferide wrote: Fri Mar 21, 2025 10:23 am I know that AI is saying "hold my beer" to Bitcoin as far as energy consumption goes, though, like crypto, that argument against it should lessen if the world actually moves away from fossil fuels.
This article from The Guardian touches on AI's role in environmental racism, particularly Musk's supercomputer, Colossus, and how it's damaging to the low-income black neighborhoods in Memphis surrounding it. It also figures that Democrats like Memphis's Paul Young would be in Musk's back pocket.

If I could vote crap on AI again I would vote crap on AI again. This isn't making anyone's quality of life that much more enjoyable, and if it has the power to extend life, that life will be in service to technology and the people who employ it. Man's exploitation of AI is the great filter that will keep us from reaching our truest potential.
Justice for Kyle Bassinga, Da'Quain Johnson, Logan Sharpe, Qaadir & Nazir Lewis, Emily Pike, Sam Nordquist, Randall Adjessom, Javion Magee, Destinii Hope, Kelaia Turner, Dexter Wade, Nakari Campbell, Sara Millerey González

Re: Thing: Artificial Intelligence

186
hbiden@onlyfans.com wrote: Thu May 08, 2025 5:01 pm
that was a very moving article. the judge allowed it this time because the sister made clear it was all her own words. not the dead victim's, not the lawyer's.
i'm sure there are good reasons to restrict this sort of thing. not too hard to imagine.
It's worse than hearsay, probably.
Records + CDs for sale

Re: Thing: Artificial Intelligence

187
enframed wrote: Thu May 08, 2025 8:33 pm
hbiden@onlyfans.com wrote: Thu May 08, 2025 5:01 pm
that was a very moving article. the judge allowed it this time because the sister made clear it was all her own words. not the dead victim's, not the lawyer's.
i'm sure there are good reasons to restrict this sort of thing. not too hard to imagine.
It's worse than hearsay, probably.
which would also be allowed in a sentencing hearing like this. just sayin', not arguing.

Re: Thing: Artificial Intelligence

188
Very solid article in this context:

Robert Epstein: The Empty Brain

An excerpt from the beginning:

No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.

To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.

A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.

Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
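The byte-level encoding the excerpt describes (8-bit patterns standing for the letters d, o, g) can be sketched in a few lines of Python; the specific values shown are standard ASCII codes, not anything from the article itself:

```python
# Encode the word "dog" the way the excerpt describes: three bytes,
# each an 8-bit pattern standing for one letter.
word = "dog"
encoded = word.encode("ascii")                # three bytes side by side: b'dog'
values = list(encoded)                        # the numeric byte values
bits = [format(b, "08b") for b in encoded]    # each byte as 8 ones and zeroes

print(values)  # [100, 111, 103]
print(bits)    # ['01100100', '01101111', '01100111']
```

The same mechanism scales up to the megabyte-sized image pattern the author mentions; an image file is just a much longer run of such bytes, prefixed by header bytes that tell the computer to interpret what follows as an image rather than text.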

Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.
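A minimal illustration of what the author means by an "algorithm" operating on stored patterns: the snippet below (an invented example, not from the article) copies a byte pattern and then transforms the original in place, like correcting a typo in a manuscript:

```python
# A stored byte pattern containing a typo ("dgo" instead of "dog").
data = bytearray(b"the dgo barks")

# Copy the pattern to another location, as computers routinely do.
backup = bytes(data)

# Transform the original in place: a rule for fixing the typo.
data[4:7] = b"dog"

print(backup.decode(), "->", data.decode())  # the dgo barks -> the dog barks
```

The "rule" here (replace bytes 4 through 6) is itself stored in the computer as bytes, which is exactly the point the author is making: program and data live in the same physical storage.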

Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?
