Subject: Re: Personality test vs. Religion
Newsgroups: lugnet.off-topic.debate
Date: Fri, 29 Oct 2004 22:01:58 GMT
In lugnet.off-topic.debate, Dave Schuler wrote:
> This is part of the problem.  You're implicitly assuming that the test is a
> valid instrument, and that therefore the only way to disprove the validity of
> the test is to take the test (which is designed not to yield falsifiable
> results) and make it yield false results.  This is a logical impossibility.

Not really-- because as I've said I've seen what I believe to be evidence of it
yielding *correct* results. And, as I've said, it IS (for my part) falsifiable,
because if I had measured someone (say) as indecisive, and they took the test
and it said they WERE decisive, that's (to me) falsifiable. As you correctly
point out, though, it's subjective. The fact that I think someone is indecisive
is subjective, and the fact that the test SAYS they're decisive or indecisive is
no more verifiably objective than my viewpoint. And yet despite that fact, both
myself and the test are TRYING (I believe) to be objective.

> I have repeatedly pointed out several of the many ways that this test is
> methodologically unsound, and for any of these reasons the test is invalid.
> However, you choose to embrace the test regardless of any argument to the
> contrary, so I assert that you're staking your claim at the wrong point.

Really? I agreed with you on every aspect insofar as saying that the methodology
is error-prone; I just disagreed insofar as I think it's less error-prone than
you do. Your argument, I think, should be with the underlying theory BEHIND the
test, rather than the test itself, which I think we can both agree is subject to
personal bias and/or lying on the part of the tester and testee.

The theory BEHIND the test is that people are measurable on the four
dimensions-- so is "jump to conclusions" vs. "never decide anything" a valid
metric upon which to judge people? Does such a metric coincide with what we (I)
typically qualify as being "decisive" or not? Is there a fallacy in that axis
which allows people to be at two different spots equally, not represented in the
test results?

You can disagree with that quite a bit, and if you do I don't think we can get
anywhere. I can show you hordes of people who are one way or the other, and you
can show me hordes of people who are both; and I'd just argue that the ones who
are both really just happen to be close to that 50/50 line.

> The issue, as far as I'm concerned, is not whether the subjective test
> accurately describes people who in your (and their) subjective description
> match the test's description.
>
> Instead, the issue is that the test does not allow the possibility for a
> definitive true/false assessment of the results.

Well.. yeah! If there were a way to definitively assess the results, why not
just use that instead of the test? The fact is we only have people's judgements
to go by. Is this person decisive or not? If 100 people all say "yes", it's
still not an empirical result, but in lieu of a better method, I personally am
willing to accept those 100 unanimous testimonies as "pretty darn right". And
the fact that it matches the test most of the time also means I'm willing to
accept the test's results as "pretty right". Probably less so than 100 people's
opinions if they disagree with the test, but still, "pretty right".

> You yourself have stated
> this clearly, when you dismissed the final summary.  The final summary is, in
> fact, the essence of the test's results, so you're saying in effect "I know
> that the test is invalid, but I'm still choosing to believe that it's valid."

Uh, I'm dismissing the final results insofar as their presentation, not their
content. By showing a scale from one extreme to the other, you're showing a
clearer picture of what you're MORE likely to do and what you're LESS likely to
do. By showing (say):

Decisive                                           Indecisive
|     .     .     .     .     |     .     .     .     .     |
|-----------------------------|-----------X-----------------|

You're actually demonstrating that a person tends to be more indecisive than
decisive. You're showing both sides of the issue.

The final descriptions don't do that. They instead say something like "you
prefer to keep your options open and avoid making decisions before needed". Now,
the PROBLEM with that is that even though in the above depiction, the person is
70% indecisive, they're also 30% decisive. If the test INCORRECTLY said "you
like to decide things as soon as you can to get a direction quickly", you WOULD
partially agree that it describes you, because you WOULD be 30% decisive, and it
DOES describe you in PART. Saying it that way does NOT stress the full spectrum
on which you're being measured, and conveys only a partial picture. And BECAUSE:

1) the test is written positively (and needs to be, I would argue)
2) people will attempt to associate themselves with the positive
3) people trust authority and will agree with the test on the grounds that it's
a test

People will usually agree with the result, even if it were wrong. If INSTEAD, it
showed:

Decisive                                           Indecisive
|     .     .     .     .     |     .     .     .     .     |
|-----------------X-----------|-----------------------------|

You would probably be more apt to pick up on the inaccuracy, because it's
comparing both sides.

Anyway, THAT's my problem with the final summary.
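To make the idea concrete, here's a quick sketch of how such a two-sided scale
could be rendered (the function name and the 0-to-1 "fraction toward the
right-hand pole" convention are my own inventions, not anything from the actual
test):

```python
def render_scale(left, right, score, width=61):
    """Render a score on a bipolar axis.

    score is a fraction in [0.0, 1.0]: 0.0 means fully `left`,
    1.0 means fully `right`. Returns a two-line string: the pole
    labels, then a bar with an X marking the score's position.
    """
    # Map the score onto a character position along the bar.
    pos = round(score * (width - 1))
    # Pole labels at each end of the axis.
    labels = left + right.rjust(width - len(left))
    # The bar itself, with X at the scored position.
    bar = "".join("X" if i == pos else "-" for i in range(width))
    return labels + "\n" + bar

# A 70%-indecisive person: X lands well into the right half.
print(render_scale("Decisive", "Indecisive", 0.7))
```

The point of rendering it this way is exactly the one above: the reader sees
both poles at once, so a 70/30 result can't be mistaken for a 100/0 one.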

> Yes, but every single question on the test yields a similar objection, even
> the seemingly opposite "open" and "closed" are problematic: does "open" mean
> vulnerable?  Or receptive?  Or uncritical?  Or lascivious?  Or awake?  Or
> generous?  How do you know?  And how do you contrast this multi-dimensional
> and multiply-connotative word with "closed," when "closed" is similarly
> faceted? Rather than the true binary choice that M/B pretends, you're facing
> a field of say 36 choices, no one of which can be called "true" or "false"
> with any certainty.

I think you may be talking yourself into a corner here. You're perfectly correct,
but if that's your objection, one could argue (as I'm doing) that either better
words could be selected, or that *in general*, despite the occasional
misinterpretation, most people understood it correctly.

And you're right! There's no way of telling if the MAJORITY or MINORITY
understood the question as you intended it. There's no empirical evidence to
suggest definitions of English words or peoples' understanding of them, or
peoples' ability to relate them to their own personalities.

That's why (I assume) that when the test was developed, it was tested on people
to see whether or not what they evaluated themselves to be matched with both
their own self-valuations AND the evaluations of others who might classify them
on those lines. Certainly it worked on me, and several others I knew when I took
it-- and if it had come out with other results for me (which are possible
results of the test, some of which were even obtained by people I knew), I
would've asserted that the test was less accurate.

>> As for the dimensions being right, what would you accept as "proof"?

> Actually, this is similar to the problems of biblical "prophecy."  The
> so-called "prophecies" in the bible are so shamelessly open-ended and
> non-specific that any of a zillion results can be claimed to "prove" the
> "prophecy."
>
> So here's what I propose:  The test as it exists is hopelessly untestable and
> should be scrapped.  In its place, I would suggest developing a test that
> makes specific descriptions that can be shown to be false or true.

Wait-- are you suggesting that such a thing is possible? That specific
descriptions can be shown to be true/false? I assume your answer is "no", just
checking.

> Further, as a scientific instrument, the tester should be able to make
> testable predictions about future behaviors as described by the test.  Since
> the test makes bold claims about a range of behaviors, then I suggest that at
> least several examples of each of the many possible results must be tested in
> this way.

Is this not possible with the current test? Could I not make a prediction about
someone who's judged on the test to be decisive, then see whether it held true?

>> Let's say some biologists claim to have found the "decisive" gene in human
>> genetics. They remove it from 1,000 people, and add it to 1,000 people. How
>> would you test whether someone in your 2,000 people was "decisive" or not to
>> see if it worked?

> I'm afraid that this is too hypothetical to be useful.  How did the
> geneticists identify the target gene as the "decisive gene?"  Presumably the
> testing would have identified it before that point, right?  So why not
> examine that same test as it pertains to the post-gene-removal subjects?

?

First off, I don't think it matters how it was discovered. Aren't most genes
discovered more-or-less by accident anyway? Let's say they identify it in the
year 2087 by accident in a computer model of humans living in a virtual
environment. They run a bunch of simulations, and by observing the actions of
the virtual people they notice that this one particular gene affects
decisiveness in the subject: "hey, those people really look decisive/indecisive
now, but I don't notice any other differences" (not that I'm saying this COULD
happen, mind you). They test it on 40 million virtual humans and it appears to
work flawlessly every time-- each one is decisive/indecisive as predicted. They
get approval to (somehow) test it in real people. They (somehow, ok, I admit
this is particularly silly) are able to remove/insert the gene in 2,000 people
who are each willing to undergo the experiment, and who want to be more/less
decisive.

Now what? Did it work? How would you tell? All of Ed's friends and family say
he's extra decisive now, as opposed to indecisive before, as does his team of 17
kazillion psychologists. He even re-takes the M/B test and it says he's
decisive, where it said he wasn't before. Did it work? Can there be any
empirical proof of "decisiveness"? Heck, you and I couldn't even agree on what
the word means-- can there ever hope to be an empirical solution? Or will we
forever be limited to "yeah, I think that's right"?

> Because in your description the psychiatrists are testing with awareness of
> what the test's intent is: namely, to achieve positive correlation between
> the subject's answers and the testers' observations of the subjects.

Oh, sorry if you got that implication-- it wasn't intended. I would assume that
the psychiatrists would be told "Hey, rate these guys, and let us know what you
think". Only afterwards would I assume they're let in on the fact that they're
being compared to other psychiatrists and the patients' M/B results.

> If you propose that the testers should make only general observations of the
> subjects, then you face the problem of correlating these observations with
> the M/B results, which is another subjective, interpretive process.

Is there anything in psychology that isn't subjective?

>> But if you're asking for them to test the M/B characteristics
>> without knowing they're testing them, well... I'm not sure that's possible.

> Ideally, the subjects shouldn't know that they're being tested, and the
> psychiatrists shouldn't know *what* they're testing.  However, any reasonable
> psychiatrist (which, parenthetically, neither Myers nor Briggs was) would
> recognize that the testing parameters of Myers-Briggs are inherently
> non-scientific and non-falsifiable, and the psychiatrist would reject the
> test as folly.

Really? Huh. I guess I'd assume not, but I'm no psychiatrist. Are
psychiatrists/psychologists generally against M/B for being useless? I mean, I'd
expect them to be against the test because:

A) it's a tool that may lose them money
B) its use in the specific case can be incorrect
C) its results aren't very useful in the specific
D) people's interpretations of the results may be inaccurate

But I wouldn't expect them to be against the categories themselves as being
unscientific or "generally inapplicable". At least not any more so than any
other assessment they might make on a client/patient.

DaveE


