Consciousness and the Paranormal — Part 10


@Constance: BTW if you think Scientific Positivism is outmoded, how is it that Theism is something more progressive ( as is implied by the context of your comment within the discussion ) ?
I also want to quote here the MIT Press's page describing Yanofsky's major book, The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us, because it summarizes the main point I wanted to make to Randall -- the by-now-obvious fact that claims of knowledge concerning the nature of 'reality' are misleading unless they recognize the limits of human knowledge and of human knowing ...
That's at least a constructive reply. It doesn't address the issue or the questions that were in focus at the time ( which are still perfectly valid ). But at least it's not attacking anyone on some personal level.
 

Folks - Randall feeds on all this attention he is getting and he is wasting our time and pulling us away from a productive discussion.

@Usual Suspect writes:

"But at least it's not attacking anyone on some personal level."


For example:

1) by calling them "nitwits":

Consciousness and the Paranormal — Part 8

"Anyway, unlike the rest of the nitwits here, I'm also egotistical enough to think I get what you're saying, and IMO it all adds up to the same situation it always does, which is a matter of context or perspective. Within some span along the timeline, caused events are the result of intention, but within a wider span, intention is itself the result of unintentional processes. So the paradox is the result of an awareness of temporal elements within our worldview, state of existence, Dasein, Exisistenz ( take your pick ). Consequently, this naturally leads to the idea that consciousness is an emergent phenomena."

2) or "Maybe the only thing I'm fooling myself about is the capacity of some people to actually switch gears."

3) or: "I already know what the papers say. I was asking what you think that is. Or is cutting and pasting other people's stuff the extent of your thought process?"

Consciousness and the Paranormal — Part 10

So that's three attacks on a personal level by Randall. Clearly he wants one set of rules for himself and one for others.
 
I also want to quote here the MIT Press's page describing Yanofsky's major book, The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us, because it summarizes the main point I wanted to make to Randall -- the by-now-obvious fact that claims of knowledge concerning the nature of 'reality' are misleading unless they recognize the limits of human knowledge and of human knowing:

"Overview:
Many books explain what is known about the universe. This book investigates what cannot be known. Rather than exploring the amazing facts that science, mathematics, and reason have revealed to us, this work studies what science, mathematics, and reason tell us cannot be revealed. In The Outer Limits of Reason, Noson Yanofsky considers what cannot be predicted, described, or known, and what will never be understood. He discusses the limitations of computers, physics, logic, and our own thought processes.

Yanofsky describes simple tasks that would take computers trillions of centuries to complete and other problems that computers can never solve; perfectly formed English sentences that make no sense; different levels of infinity; the bizarre world of the quantum; the relevance of relativity theory; the causes of chaos theory; math problems that cannot be solved by normal means; and statements that are true but cannot be proven. He explains the limitations of our intuitions about the world—our ideas about space, time, and motion, and the complex relationship between the knower and the known.

Moving from the concrete to the abstract, from problems of everyday language to straightforward philosophical questions to the formalities of physics and mathematics, Yanofsky demonstrates a myriad of unsolvable problems and paradoxes. Exploring the various limitations of our knowledge, he shows that many of these limitations have a similar pattern and that by investigating these patterns, we can better understand the structure and limitations of reason itself. Yanofsky even attempts to look beyond the borders of reason to see what, if anything, is out there.



About the Author: Noson S. Yanofsky is Professor in the Department of Computer and Information Science at Brooklyn College and The Graduate Center of the City University of New York. He is a coauthor of Quantum Computing for Computer Scientists.

Reviews: “Yanofsky takes on this mindboggling subject with confidence and impressive clarity. He eases the reader into the subject matter, ending each chapter with further readings. His book is a fascinating resource for anyone who seeks a better understanding of the world through the strangeness of its own limitations and a must-read for anyone studying information science.”—Publishers Weekly

“Yanofsky provides an entertaining and informative whirlwind trip through limits on reason in language, formal logic, mathematics—and in science, the culmination of humankind’s attempts to reason about the world.”—The New Scientist

“In my view, Outer Limits is an extraordinary, and extraordinarily interesting, book. It is a cornucopia of mind-bending ideas.”—Raymond S. Nickerson, PsycCRITIQUES

“The scope of the material covered is so wide, and the writing so clear and intuitive, that all readers will learn something new and stimulating.”—Thomas Colin, Leonardo Reviews

Endorsements: “Yanofsky has brought together insights about quantum mechanics, logic, and mathematics under one rubric. Very few others could pull that off. This book has the potential to be a classic.” —Prakash Panangaden, School of Computer Science, McGill University

Awards: Winner, 2013 American Publishers Award for Professional and Scholarly Excellence (PROSE Award) in Popular Science & Popular Mathematics, presented by the Professional and Scholarly Publishing Division of the Association of American Publishers


The Outer Limits of Reason

I had just read this myself and was about to post it up. All three of the links look fascinating.
 
No. I'm suggesting that critical thinking requires that we examine the limitations and shortcomings of the presuppositions we often unconsciously bring to our thinking.
Certainly no argument there, which is why I was asking for a description of the experiences Strathmann claims to have had, and by pointing out what seems to be a problem with his idea that tiny beings might inhabit subatomic realms in a way that consciously guides the workings and behavior of the larger universe.
 
Certainly no argument there, which is why I was asking for a description of the experiences Strathmann claims to have had, and by pointing out what seems to be a problem with his idea that tiny beings might inhabit subatomic realms in a way that consciously guides the workings and behavior of the larger universe.

@William Strathmann is this an accurate encapsulation of your idea?
 
Randall wrote: "But let's suppose just for a moment that there's something I'm missing."

If you really want to walk through that door you've momentarily cracked open, begin by recognizing and bracketing your own presuppositions. And read, a lot, against the grain of your own confirmation bias.
 
Randall wrote: "But let's suppose just for a moment that there's something I'm missing."

If you really want to walk through that door you've momentarily cracked open, begin by recognizing and bracketing your own presuppositions. And read, a lot, against the grain of your own confirmation bias.
That's the whole point I've been making. I do that already. That's why I said I use methods that reduce confirmation bias ( like critical thinking ), and why I'm able to identify problems with various theories. I'm perfectly willing to adapt and change my views when I'm provided with sufficient reason to do so. However, refusing to describe one's experiences and deal with the issues raised about them doesn't provide any reasons at all, let alone sufficient ones. So although you say you get the principles and anyone in grade school should know them, you're either not bothering to apply them or you don't really understand them. It's also a bit of a misnomer that people are taught how to think. It's more common that they've been told what to think.

 
That's the whole point I've been making. I do that already. That's why I said I use methods that reduce confirmation bias ( like critical thinking ), and why I'm able to identify problems with various theories. I'm perfectly willing to adapt and change my views when I'm provided with sufficient reason to do so. However, refusing to describe one's experiences and deal with the issues raised about them doesn't provide any reasons at all, let alone sufficient ones. So although you say you get the principles and anyone in grade school should know them, you're either not bothering to apply them or you don't really understand them. It's also a bit of a misnomer that people are taught how to think. It's more common that they've been told what to think.


1. Enough with the damn YouTube videos.

2. @Ufology writes: "I do that already." (in response to @Constance: "If you really want to walk through that door you've momentarily cracked open, begin by recognizing and bracketing your own presuppositions. And read, a lot, against the grain of your own confirmation bias.")

  • What do you recognize as your own presuppositions?
  • What was the last thing you read "against the grain of your own confirmation bias"? and what was the result of that reading?
"That's why I said I use methods that reduce confirmation bias ( like critical thinking ) and why I'm able to identify problems with various theories. I'm perfectly willing to adapt and change my views when I'm provided with sufficient reason to do so."
  • Can you provide some (recent) examples of your having adapted and changed your views upon having been provided with sufficient reason for having done so?
 
The Enigma of Reason — Hugo Mercier, Dan Sperber | Harvard University Press

"Reason, they argue with a compelling mix of real-life and experimental evidence, is not geared to solitary use, to arriving at better beliefs and decisions on our own. What reason does, rather, is help us justify our beliefs and actions to others, convince them through argumentation, and evaluate the justifications and arguments that others address to us."

This reminds me very much of DWG Arthur Schopenhauer's book:

The Art of Controversy

http://www.gutenberg.org/cache/epub/10731/pg10731-images.html

Let me thus ... ahem ... erect a "wall of text" around my neighbor to the north (AND I will make him pay for it ... ;-)

thus

"Controversial Dialectic is the art of disputing, and of disputing in such a way as to hold one's own, whether one is in the right or the wrong—per fas et nefas.[1] A man may be objectively in the right, and nevertheless in the eyes of bystanders, and sometimes in his own, he may come off worst. For example, I may advance a proof of some assertion, and my adversary may refute the proof, and thus appear to have refuted the assertion, for which there may, nevertheless, be other proofs. In this case, of course, my adversary and I change places: he comes off best, although, as a matter of fact, he is in the wrong.

[Footnote 1: According to Diogenes Laertius, v., 28, Aristotle put Rhetoric and Dialectic together, as aiming at persuasion, [Greek: to pithanon]; and Analytic and Philosophy as aiming at truth. Aristotle does, indeed, distinguish between (1) Logic, or Analytic, as the theory or method of arriving at true or apodeictic conclusions; and (2) Dialectic as the method of arriving at conclusions that are accepted or pass current as true, [Greek: endoxa] probabilia; conclusions in regard to which it is not taken for granted that they are false, and also not taken for granted that they are true in themselves, since that is not the point. What is this but the art of being in the right, whether one has any reason for being so or not, in other words, the art of attaining the appearance of truth, regardless of its substance? That is, then, as I put it above.

Aristotle divides all conclusions into logical and dialectical, in the manner described, and then into eristical. (3) Eristic is the method by which the form of the conclusion is correct, but the premisses, the materials from which it is drawn, are not true, but only appear to be true. Finally (4) Sophistic is the method in which the form of the conclusion is false, although it seems correct. These three last properly belong to the art of Controversial Dialectic, as they have no objective truth in view, but only the appearance of it, and pay no regard to truth itself; that is to say, they aim at victory. Aristotle's book on Sophistic Conclusions was edited apart from the others, and at a later date. It was the last book of his Dialectic.]

If the reader asks how this is, I reply that it is simply the natural baseness of human nature. If human nature were not base, but thoroughly honourable, we should in every debate have no other aim than the discovery of truth; we should not in the least care whether the truth proved to be in favour of the opinion which we had begun by expressing, or of the opinion of our adversary. That we should regard as a matter of no moment, or, at any rate, of very secondary consequence; but, as things are, it is the main concern. Our innate vanity, which is particularly sensitive in reference to our intellectual powers, will not suffer us to allow that our first position was wrong and our adversary's right. The way out of this difficulty would be simply to take the trouble always to form a correct judgment. For this a man would have to think before he spoke. But, with most men, innate vanity is accompanied by loquacity and innate dishonesty. They speak before they think; and even though they may afterwards perceive that they are wrong, and that what they assert is false, they want it to seem the contrary. The interest in truth, which may be presumed to have been their only motive when they stated the proposition alleged to be true, now gives way to the interests of vanity: and so, for the sake of vanity, what is true must seem false, and what is false must seem true.

However, this very dishonesty, this persistence in a proposition which seems false even to ourselves, has something to be said for it. It often happens that we begin with the firm conviction of the truth of our statement; but our opponent's argument appears to refute it. Should we abandon our position at once, we may discover later on that we were right after all; the proof we offered was false, but nevertheless there was a proof for our statement which was true. The argument which would have been our salvation did not occur to us at the moment. Hence we make it a rule to attack a counter-argument, even though to all appearances it is true and forcible, in the belief that its truth is only superficial, and that in the course of the dispute another argument will occur to us by which we may upset it, or succeed in confirming the truth of our statement. In this way we are almost compelled to become dishonest; or, at any rate, the temptation to do so is very great. Thus it is that the weakness of our intellect and the perversity of our will lend each other mutual support; and that, generally, a disputant fights not for truth, but for his proposition, as though it were a battle pro aris et focis. He sets to work per fas et nefas; nay, as we have seen, he cannot easily do otherwise. As a rule, then, every man will insist on maintaining whatever he has said, even though for the moment he may consider it false or doubtful.[1]

[Footnote 1: Machiavelli recommends his Prince to make use of every moment that his neighbour is weak, in order to attack him; as otherwise his neighbour may do the same. If honour and fidelity prevailed in the world, it would be a different matter; but as these are qualities not to be expected, a man must not practise them himself, because he will meet with a bad return. It is just the same in a dispute: if I allow that my opponent is right as soon as he seems to be so, it is scarcely probable that he will do the same when the position is reversed; and as he acts wrongly, I am compelled to act wrongly too. It is easy to say that we must yield to truth, without any prepossession in favour of our own statements; but we cannot assume that our opponent will do it, and therefore we cannot do it either. Nay, if I were to abandon the position on which I had previously bestowed much thought, as soon as it appeared that he was right, it might easily happen that I might be misled by a momentary impression, and give up the truth in order to accept an error.]

To some extent every man is armed against such a procedure by his own cunning and villainy. He learns by daily experience, and thus comes to have his own natural Dialectic, just as he has his own natural Logic. But his Dialectic is by no means as safe a guide as his Logic. It is not so easy for any one to think or draw an inference contrary to the laws of Logic; false judgments are frequent, false conclusions very rare. A man cannot easily be deficient in natural Logic, but he may very easily be deficient in natural Dialectic, which is a gift apportioned in unequal measure. In so far natural Dialectic resembles the faculty of judgment, which differs in degree with every man; while reason, strictly speaking, is the same. For it often happens that in a matter in which a man is really in the right, he is confounded or refuted by merely superficial arguments; and if he emerges victorious from a contest, he owes it very often not so much to the correctness of his judgment in stating his proposition, as to the cunning and address with which he defended it.

Here, as in all other cases, the best gifts are born with a man; nevertheless, much may be done to make him a master of this art by practice, and also by a consideration of the tactics which may be used to defeat an opponent, or which he uses himself for a similar purpose. Therefore, even though Logic may be of no very real, practical use, Dialectic may certainly be so; and Aristotle, too, seems to me to have drawn up his Logic proper, or Analytic, as a foundation and preparation for his Dialectic, and to have made this his chief business. Logic is concerned with the mere form of propositions; Dialectic, with their contents or matter—in a word, with their substance. It was proper, therefore, to consider the general form of all propositions before proceeding to particulars."

etc. etc. etc.
 
I think the above is a paradigm modern western view of logic and dialectic. This is why I think having some history of philosophy is so important. Schopenhauer evokes Aristotle and Machiavelli and we still take much of this thinking, presuppositionally, in to "our" deliberations. Only to the extent that we are aware of this history and background can we make decisions as to "what" we might think.

I found this very provocative:

"Reason, they argue with a compelling mix of real-life and experimental evidence, is not geared to solitary use, to arriving at better beliefs and decisions on our own. What reason does, rather, is help us justify our beliefs and actions to others, convince them through argumentation, and evaluate the justifications and arguments that others address to us.

In other words, reason helps humans better exploit their uniquely rich social environment. This interactionist interpretation explains why reason may have evolved and how it fits with other cognitive mechanisms. It makes sense of strengths and weaknesses that have long puzzled philosophers and psychologists—why reason is biased in favor of what we already believe, why it may lead to terrible ideas and yet is indispensable to spreading good ones."

Our reasons, then, are strictly speaking not our own. Perhaps we're not blank slates that simply apply "neutral" principles of logic and "critical thinking" to arrive at the truth (positivism), but are fully embedded in our current situation (thrown into it ... some might say ;-). That's the existential situation - the unbearable lightness of reason - and on that basis we could change our minds, here and there ... now and again.
 
Why Facts Don’t Change Our Minds

"Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own."

Ufology writes:

"That's why I said I use methods that reduce confirmation bias ( like critical thinking ) and why I'm able to identify problems with various theories."

But, apparently, we're all "quite adept at spotting the weaknesses" (in someone else's argument!) ... AND the positions we're blind about are our own.

"A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with."

So that's a clue as to how we might go about changing our minds:

1. if we're good at spotting the weaknesses in other people's arguments
AND 2. we're blind about the weaknesses in our own

THEN others are good at spotting the weaknesses in OUR arguments ... and we should listen to them!

"This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Sounds a bit like Schopenhauer ... to me, but then ... I might be wrong.





 
"Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering."

- smcder: this goes to Ufology's comment about not being an expert but still fixing cars, computers, etc., and it underscores the difference between fixing simple problems and being an expert.

"Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”
 
Where is the Line Between an Expert and an Amateur?

I did find Randall's various statements about amateur vs. expert reasoning provocative. There is a lot of research on expert reasoning - the above is just a "first grab" - and there are interesting questions around "garage" or "basement" work in various fields: have amateurs made any significant contributions to knowledge? I suspect yes, but the specifics would be interesting to explore.

Also, it's one thing to fix relatively simple problems, and I'm sure one could save some money on computer or car repairs from time to time - certainly with the basic maintenance tasks that manufacturers expect a user to perform. But I also suspect that amateurs may screw things up as often as they fix them, if not immediately then down the road - there are now very specific fluids that go in certain cars, for example - especially on any more complex system in a car or computer (or a computer in your car!). There is also the cost of time, so evaluating the true savings in a situation may be more complex than Randall has allowed. As the article above noted, the ignorance of the user - the "black box" interaction with technology - is part of the plan. In hobby electronics, microcontrollers like the Arduino are designed specifically for prototyping by non-experts, allowing artists and "makers" to produce things in their own field of expertise without having to be expert programmers or even having to understand how a microcontroller works.
 
1. Enough with the damn YouTube videos.

I second that. I also want to second Steve's questions to Randall in this post:


2. What do you recognize as your own presuppositions?

What was the last thing you read "against the grain of your own confirmation bias"?
What was the result of that reading?

Can you provide some (recent) examples of your having adapted and changed your views upon having been provided with sufficient reason for having done so?

Steve asks these questions in response to the recent claims by Randall, such as:

"That's why I said I use methods that reduce confirmation bias ( like critical thinking ) and why I'm able to identify problems with various theories. I'm perfectly willing to adapt and change my views when I'm provided with sufficient reason to do so."

Randall's most recently posted 5-minute YT video is an attempt to support his claim that he can judge the experiences, ideas, and hypotheses cited and discussed here, such as the interesting posts offered five pages ago by @William Strathmann, without having read the many 'walls of text' posted here over the last three years concerning the nature of consciousness and the nature of 'reality' as disclosed by multiple aspects of experience and mind in our species and others. The video does not address the complexity of the issues we have recognized and explored in this thread, nor has Randall understood that complexity.

Some of that complexity has, as we have recognized, developed out of the evolution of biological species from autopoietic capacities recognized in the first single-celled organisms appearing in our primordial past, enabling by degrees over eons of evolutionary time the development of experiential awareness in countless species of life on earth leading to increasing degrees of protoconsciousness and consciousness. Somewhere along the line over the last five pages of this thread Randall has commented that he believes that members of our species have ‘a single ancestor’ – a sole unique individual cropping up accidentally or randomly and leading to our species’ branch of evolution through further random genetic changes without which our species would not now exist. This oversimplified view ignores the ways in which capacities for consciousness and thus mind have developed over eons of time through purposeful activities of species leading to ours.

The evolution of experiential awareness, consciousness, and mind is a very big topic, and an essential one for consciousness studies, and I hope we can take it up again after we resolve the distractions we've dealt with for the last ten days or so. If Randall had been following this recurring aspect of our discussions he might have rethought his claim/article of faith and caught the significance of @Soupie's lengthy exploration of the question of experiential awareness in the quantum substrate, further developed by @William Strathmann's interesting contributions here beginning about five pages/ten days ago, to which Randall has overreacted on the basis of his personal aversion to William’s contextualizing of the issue in terms of the history of religion and theological thinking.

Randall has also posted the following objection to our willingness to entertain the possibility that much of what we humans and our recent forebears have experienced, contemplated, and expressed over many millennia in terms of ontological questions and speculations signifies the presence of mind and thought prior to our development of language.

@Constance: BTW if you think Scientific Positivism is outmoded, how is it that Theism is something more progressive ( as is implied by the context of your comment within the discussion ) ?

I think that accounts for Randall’s typical approach in this thread -- to attempt to engage individual posters in brief debates over single posts they’ve added to the general development of ideas and approaches accumulated in this thread, and to attempt to defeat these individuals and their immediate contributions without having engaged the complexity of the thread’s multifaceted discussions to date. It’s why he demands that individual posters place summaries of their experiences in a brief post that he thinks he can critique from the basis of his own unrecognized and unacknowledged presuppositions.

I think that if Randall wants to actually participate in and contribute to this thread, he needs to take the time required to follow its development to date, including the walls of text we have written and the walls of text we have quoted from relevant major philosophers, historians of ideas and cultures, and scientific researchers and theorists along the way. I think if he does so he will be surprised at the increasing number of practitioners of the ‘hard sciences’ who have recognized the need to go deeper into our history -- the history of our and other species’ experienced worlds and the history we can recover of our universe itself -- if they are to find resources with which to account for what we are and the role of our experiential consciousness in approaching the question of the being of 'what-is' {always subject to the extent of the objective and subjective aspects of being that we are able to cognize from our situated perspective}.


Here is a paper concerning information theory, a subject we’ve explored and discussed at length in this thread without finding definitions and interpretations of ‘information’ adequate to deal with the experiential sources and ‘nature’ of meaning as we recognize it in the evolution of species and understand it more fully through both affective neuroscience and phenomenological philosophy. I’ve just come across this paper today and have not yet read it, but from skimming the first few pages it appears it might help to orient us in further discussions of experiential awareness, consciousness, mind, and concepts of ‘reality’ as our species has experienced and conceived of it.

Stephen Mann, Consequences of a functional account of information

http://philsci-archive.pitt.edu/13384/1/functionalinfo-upload.pdf
 
I second that. I also want to second Steve's questions to Randall in this post ... "Can you provide some (recent) examples of your having adapted and changed your views upon having been provided with sufficient reason for having done so?"
Focusing on me is a total diversion from the subject matter I was attempting to explore with Strathmann, but I'll provide a single example of my shift in viewpoint anyway, because it's the most significant and most recent one that relates to this forum. When I began exploring the idea of AI and consciousness in a bit more detail than the typical sci-fi movie, it was through Ray Kurzweil's book The Age of Spiritual Machines ( 1999 ); see Wikipedia, and for your convenience, a video with Kurzweil below:

Kurzweil Interview - Age of Spiritual Machines


So that was some 17 years ago. It should also be noted that although sci-fi is pure fiction, it still has value when contemplating the issues surrounding AI and consciousness, and I've been pondering that casually since the movie 2001 came out back in the late 60s ( when I was around 10 years old ). For a long time it seemed to me that for computers to become as intelligent as us, all that was needed was sufficient programming and processing power, and that view remained dominant until sometime in 2010, when we began discussing this topic in a thread I started called Philosophy Science and the Unexplained ( my how time flies! ). Example here: Philosophy, Science, and the Unexplained

So my shift in thinking on consciousness was in progress some 7 or so years ago, and since then I've become entirely sure that consciousness is not assured by simply constructing a silicon processor with the same processing capacity as the human brain and providing it with behavioral programming. In fact, I now lean heavily against that idea - not because I've shifted to a "spiritual" approach, but because of applying critical thinking to the problems of constructing an AI with the same properties ( including consciousness ) as we humans have.

I now see that revelation as something very basic and obvious. But it took a long time for me to realize it, and I had a number of debates with people who could not provide a sufficient reason for me to change my views. However if someone had supplied the same reasoning I came upon for myself, I certainly could not have dismissed it, and ultimately I would have had to change my views. It's a rare thing when someone can do that for me, and I really wish it would happen more often, because I want to advance.

I learn from being shown better reasons than the ones I have. But people in general don't seem to be like me. They get upset when their views are challenged and sometimes even reject perfectly good reasons for changing because it challenges their beliefs or faith. I often think that the prevalence of that behavior is a shame and such a waste of time for humanity.


More Recent Kurzweil Published In 2014


He still doesn't seem to have hit on the idea that microchips and programming do not necessarily equate to consciousness. So maybe that's not so obvious as I think it should be.
 
Focusing on me is a total diversion from the subject matter I was attempting to explore with Strathmann, but I'll provide a single example of my shift in viewpoint anyway, because it's the most significant and most recent one I've had that relates to this forum.

Congratulations on changing your initial point of view re AI.

Re "focusing on you [being] a total diversion from the subject matter" introduced by @William Strathmann five days ago, it is indeed you who have produced the many distractions from unvexed pursuit of that subject matter. No one wants to focus on you and your distractions. I hope we can put an end to them, and I attempted to help that along with my last post. Please disappear and stop calling attention to yourself here until you have something relevant to say.
 