REPEATING THE DEAL: As a DECISION-GUIDE, you will be given a FORMULA involving only FRACTIONS to be added and subtracted. If you've WORKED any PROBABILITY problem, you've done the same thing. So the TACTICS haven't changed -- only the STRATEGY, since the PROBABILITY MEASURES refer to EVENTS, while the ASSERBILITY MEASURES refer to ASSERTIONS. (ASSERBILITY relates to ASSERTIONS in the way that PROBABILITY relates to EVENTS. Dig?) And this FORMULA will show that WILDCATTING (TAKING RISKS) CAN PAY OFF BIG! (Something experience can tell you only for limited cases.)
To use it, we'll relabel its ASSERTIONS. No longer "A, B", but, respectively, "H, P" -- "H" for "HYPOTHESIS", "P" for "PREDICTION". So it runs: ((H -> P) & P) -> H.
I relabel the TABLES for this:
- H -> P: Hypothesis H implies prediction P.
- (...)&P: Prediction P is CONFIRMED.
- ((...)&P) -> H: This implies that H is (possibly) TRUE.
Granted, I admitted that this uses something the Logician labels the "FALLACY OF ASSERTING THE CONSEQUENT" (FAC). However, look at the TABLES:
H  P  H -> P  (H -> P) & P  ((H -> P) & P) -> H
0  0    1          0                 1
0  1    1          1                 0
1  0    0          0                 1
1  1    1          1                 1

So what's so great about this FORM, otherwise?
- FAC FAILS BY JUST ONE CASE OUT OF FOUR.
- That single FAILURE QUALIFIES IT TO DEAL WITH REALITY, whereas NO FAILURE (as with MP) TAKES IT OUT OF REALITY AND PUTS IT INTO LANGUAGE!
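Those two claims about the TABLE can be checked by brute force. Here's a minimal sketch in Python (my illustration, not part of the original lesson; the helper name `implies` is my own):

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is FALSE only when a is TRUE and b is FALSE."""
    return (not a) or b

# Count the rows where the FAC form ((H -> P) & P) -> H comes out FALSE.
failures = 0
for H, P in product([False, True], repeat=2):
    premise = implies(H, P) and P
    if not implies(premise, H):
        failures += 1

print(failures)  # -> 1: exactly one failing row (H false, P true) out of four
```

The single failing row is the one where the PREDICTION holds but the HYPOTHESIS doesn't -- the case the Logician warns about.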
Here's a clue to what you'll learn. The LAST COLUMN OF THE TABLE ABOVE DISPLAYS ONLY ONE POSSIBILITY OUT OF FOUR OF THE CONCLUSION BEING UNTRUE.
The "great stuff" is that the "positive part" can get better, while "the negative" remains CONSTANT.
- Suppose that, besides the CONFIRMED PREDICTION P (now labeled P1), Hypothesis H makes another prediction P2 WHICH IS ALSO CONFIRMED.
- The FORM becomes ((H -> (P1 & P2)) & (P1 & P2)) -> H. Here are the TABLES for this:
H  P1  P2  P1 & P2  H -> (P1 & P2)  (H -> (P1 & P2)) & (P1 & P2)  ((H -> (P1 & P2)) & (P1 & P2)) -> H
0  0   0      0            1                      0                               1
0  0   1      0            1                      0                               1
0  1   0      0            1                      0                               1
0  1   1      1            1                      1                               0
1  0   0      0            0                      0                               1
1  0   1      0            0                      0                               1
1  1   0      0            0                      0                               1
1  1   1      1            1                      1                               1
- THE NUMBER OF ROWS (POSSIBILITIES) DOUBLED, from 4 ROWS to 8 ROWS.
- THE NUMBER OF POSSIBLE FAILURES REMAINED THE SAME, 1 -- NOW 1 OUT OF 8!
Continuing, if H made ONE MORE PREDICTION that was also CONFIRMED,
- THE POSSIBILITIES WOULD DOUBLE TO 16,
- with still only 1 POSSIBILITY OUT OF THE 16 OF FAILURE!
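You don't have to take the doubling on faith. A short Python sketch (my own generalization of the TABLES above; the function name `fac_failures` is hypothetical) counts the failing rows for any number n of confirmed predictions:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is FALSE only when a is TRUE and b is FALSE."""
    return (not a) or b

def fac_failures(n):
    """Count rows where ((H -> (P1 & ... & Pn)) & (P1 & ... & Pn)) -> H is FALSE."""
    failures = 0
    for row in product([False, True], repeat=n + 1):
        H, preds = row[0], row[1:]
        conj = all(preds)              # P1 & ... & Pn
        premise = implies(H, conj) and conj
        if not implies(premise, H):
            failures += 1
    return failures

# Each line shows: number of predictions, number of rows, number of failures.
for n in (1, 2, 3):
    print(n, 2 ** (n + 1), fac_failures(n))
```

The ROWS double with each added prediction, yet the failure count stays stuck at 1 -- the single row where every prediction holds but H doesn't.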
"SUCCESS BREEDS SUCCESS!" As long as CONFIRMED PREDICTIONS keep FOLLOWING FROM THE SINGLE HYPOTHESIS, the "SCORE" gets BETTER and BETTER. That's the "great thing" about FAC.
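That "SCORE" -- the fraction of ROWS where the FORM holds -- can be put in numbers with a little arithmetic (my sketch, not the author's ASSERBILITY FORMULA, which comes later):

```python
# With n confirmed predictions there are 2**(n+1) rows and always exactly 1 failure,
# so the "score" is (rows - 1) / rows, creeping toward 1 as predictions pile up.
for n in range(1, 6):
    rows = 2 ** (n + 1)
    score = (rows - 1) / rows
    print(n, f"{score:.4f}")
```

One prediction scores 3/4; five predictions already score 63/64. SUCCESS BREEDS SUCCESS.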
What? You're crying over the work involved in TABLES and READING THEM? Not to worry! I'll give you an ASSERBILITY FORMULA, which will calculate IT ALL IN A MINUTE. And, in another file, I'll give you an ASSERBILITY CALCULATOR.
First, THE ASSERBILITY FORMULA, PROVIDING A MATHMEASURE FOR DECISIONMAKING UNDER UNCERTAINTY.