THE NOVACASTRIAN PHILOSOPHER

Debate Archive

Kant's Critique of Pure Reason

4/6/2019

17 Comments

 
This is an Online Debate to go with the Specialized Group, Kant's Critique of Pure Reason.

INSTRUCTIONS:
• "Leave a Reply" to this entry to start a new discussion thread, with some question or some other focus, and perhaps your thoughts on that topic.
• "Reply" to contribute to a discussion in the relevant thread.
• Don't forget the "Notify me of new comments" checkbox if that is what you want.
• There are various software limitations: (i) Comments are limited to 800 words, so you will need to post multiple replies for longer contributions; (ii) Nesting of replies is limited to two levels, so it may be a good idea in some replies to indicate to whom you are responding, eg "@JoeM 6/6/2019 05:25:26 pm" or suchlike.
• Contact the moderator if you have any problems or questions.
17 Comments
Joe M
6/6/2019 05:25:26 pm

IS KANT's SYNTHETIC A PRIORI THREATENED BY MODERN SCIENTIFIC DEVELOPMENTS?

Joe M
6/6/2019 05:41:01 pm

If I get him right, Kant claims that there are fixed "principles" (let us call them) that are imposed on all our thoughts (in the case of logic alone) but also—this is his Copernican revolution—on all our experiences. These principles are a priori. There are various areas of a priori knowledge, and he is definite about what the relevant principles are:

(1) Logic—Aristotle's Syllogistic, which it seems is what he bases his definition of "analytic" on (B10), and so implies that Logic is analytic, while he thinks the rest are all synthetic a priori;
(2a) Arithmetic (B15)—his example is 7+5=12 (B15)
(2b) Geometry—Euclidean geometry, as description of space (B16)
(3) Physics—at least the fundamental principles thereof, such as Conservation of Matter, and Newton's Third Law (B17)
(4) Metaphysics—prime example here is the judgment that all events have a cause (B5).

Modern developments have complicated this picture, mostly against Kant's claims, but sometimes in support.

(1) Logic—Aristotle's Syllogistic has been replaced by Frege's classical propositional and predicate calculus. In principle, this is not a problem for Kant. However, it turns out that Classical Logic is not a perfect articulation of the laws of thought. First, there are various paradoxes—it implies (eg) that "Q, therefore if P then Q" is a valid argument. Second, it still does not cover all logical principles—eg, it says nothing about the validity of inferences involving modal notions such as possibly and necessarily. This is a problem for Kant, because there are many competing proposals for solving each of these problems. There is now no longer just one candidate for the True Theory of Logic.
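To see why classical logic counts "Q, therefore if P then Q" as valid, here is a minimal truth-table check (a sketch in Python; nothing in the discussion depends on the code). The argument is valid just in case the conclusion P → Q is true in every row where the premise Q is true.

```python
from itertools import product

# Classical material implication: P -> Q is false only when P is true and Q is false.
def implies(p, q):
    return (not p) or q

# The argument "Q, therefore if P then Q" is classically valid iff
# the conclusion is true in every row of the truth table where the premise Q is true.
rows = list(product([True, False], repeat=2))
valid = all(implies(p, q) for p, q in rows if q)  # restrict to rows where the premise holds
print(valid)  # True: whenever Q is true, P -> Q is true, whatever P is
```

The same check shows why it strikes many as a paradox: the premise makes no mention of P at all, yet the conditional about P comes out guaranteed.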

(2a) Arithmetic—this is of course a very small area of modern mathematics, and has been generalized to get modern analysis. This may or may not be a problem for Kant, since there is some philosophical dispute between classical mathematicians (who, for example, are prepared to make existence claims without being able to produce the thing they claim exists) and other mathematicians of a more Kantian bent (eg, Intuitionists and Finitists, who insist that mathematical reasoning be restricted to what we can construct). So there is space here for a philosophical defense of Kant's position on mathematics understood more broadly.

(2b) Geometry—developments after Kant revealed the possibility of non-Euclidean geometry, and Relativity Theory contradicts the claim that space is Euclidean. These developments seem fatal to Kant's claims here.

(3) Physics—Relativity Theory also supersedes Newtonian physics, though I am not quite sure whether this contradicts any of the specific physical principles that Kant thinks are a priori. (Actually it does deny Conservation of Mass with the mass-energy equivalence, implying that mass can be destroyed and converted into energy, though I suppose mass-energy is conserved. Something is conserved!).

(4) Metaphysics—Quantum Theory seems to imply that not every event has a cause, eg the radioactive decay of an alpha particle. Again, this seems fatal to Kant's claim, but I note below that there is some room for dispute.

So, what might we say in response to all this? . . .

Joe M
6/6/2019 05:41:45 pm

... So, what might we say in response to all this? I think there are a number of different options:

(i) Deny the modern developments—Thus, as I think Brian noted at our first meeting, one might claim that the statistical laws of Quantum Theory are epistemic rather than ontological (ie, the so-called hidden variables interpretation), and so do not contradict the claim that every event has a cause.

Not sure about this response—this sounds like a complex matter for physicists rather than philosophers, and this particular response does not address all the other problems.

(ii) Accept the modern developments, but deny that this implies we should reject Kant's a priori argument for the specific principles he endorses. (I think TimS suggested this.) This got me thinking about something I read a long time ago by Russell, who said something to the effect that modern science has shown that most objects are mostly empty space. It seemed to me at the time that this cannot be right, not because there are no huge spaces between nuclei and electrons, but rather because, in the sense of "empty space" that matters, our experience shows us that this is not so. Perhaps the same applies to modern physics on Kant's principles. Even if Relativity Theory and Quantum Theory are true, it might nevertheless be a fact about our experience that we must necessarily impose a Newtonian and deterministic interpretation on our experience of the world.

This would be more convincing, though, if Kant hadn't been so sure that (eg) the basic Newtonian principles are among the a priori ones. In essence, we would need an a priori argument for the Conservation of Mass, and for the Parallel Postulate in Euclid, and it is hard to see how that could be done.

(iii) Reject Kant's a priori argument for the specific principles he endorses, and find some weaker claims to defend. Here are some alternatives, from strongest to weakest:

(K) The specific principles listed above (ie, determinism, Newton, etc) are imposed on each experience of objects—this is Kant's view.
(K') There are (fixed) principles regarding space and time, basic physics, and causation that are imposed on each experience of objects—the same at all times and between all people, though not necessarily the ones that Kant endorses.
(K") Each experience of objects involves imposition of some principles (or other) regarding space and time, basic physics, and causation that are imposed on—these principles may vary over time, or between people, and may or may not be the principles Kant endorses.

I don't know what to think myself. I am hoping that (K') is true, though we would then have to identify what these principles are, and then give some a priori argument for them. (K") may well be true, and I think is even consistent with supposing that the principles we impose are socially determined. Plenty to think about, then.

Brian Ness
15/6/2019 03:58:44 pm

Joe stated the following above:

(4) Metaphysics—Quantum Theory seems to imply that not every event has a cause, eg the radioactive decay of an alpha particle. Again, this seems fatal to Kant's claim, but I note below that there is some room for dispute.

This was said in response to Kant's claim that "all events have a cause" is a synthetic a priori.
My feeling is that Kant's view is essentially unaffected by our discovery of the regularities (laws) of the quantum world. My contention was that although Heisenberg showed that there is an inherent uncertainty in our ability to define the properties of a quantum event - this uncertainty is known, in principle, with certainty. In other words the uncertainty itself is certain. My friends in the discussion looked at me with doubtful expressions on their faces regarding this "meta-certainty" so I undertook to obtain a view from a physicist.
The email exchange I had with A/Prof. Chris McNeill at Monash is pasted in the next entry.

Brian Ness
15/6/2019 04:00:49 pm

Discussion with Chris McNeill in May 2019:
Chris
Thanks for your response via Leonie. I had temporarily forgotten that we have a physicist of your standing in our midst.
Can I ask for some more clarity on your response (there is a very specific - albeit minor - philosophical issue resting on this)?
Accepting that the two 1kg samples of Carbon 11 would decay at the same rate - my question is "Do the laws of quantum physics demand it?" If not, do the laws demand that those two masses (after 20 minutes of decay) will weigh x with a y probability or within a y tolerance (eg within exactly 1% of x)?

In western philosophy there was a widely held view after Newton that the universe is a mechanical Newtonian machine which follows fixed laws (albeit those laws had not been fully discovered) without exception (leaving aside theism and the possibility of miracles). After quantum theory (QT) became the new scientific paradigm the common philosophical view was/is that the physical world is not determined - to the extent that in the quantum world events follow statistical laws. This means that quantum events cannot be predicted absolutely. We can only predict that event A will occur with a probability of 68% (say) (eg electron e will be at position x with a probability of 68% under certain defined circumstances).
Our dispute relates to this 68% probability. Is that probability itself determined exactly (precisely) within QT (leaving aside whether we are capable of measuring it precisely) - or is it only an approximation or a probability (a meta-probability!!!).
We are debating Immanuel Kant's Critique of Pure Reason and in particular his theory of Transcendental Idealism. He produced this work in light of Newton's work - but well before QT. We are wondering if Kant would feel the need to rework parts of his critique (or not) if he were aware of QT. The answer to this might hinge on what you tell us.

Yes, I know, many scientists can't believe we philosophers can spend time on this sort of seemingly irrelevant stuff!! But we enjoy it - and secretly hope that that all-important meaning in life might become clear if we keep beavering away at our Metaphysics (plus perhaps our contemplations on theological beliefs):):):)

Is my question clearer or am I getting very confused? Joe might like to chip in?

No hurry Chris. Metaphysics has made little progress for several centuries.
Cheers Brian

Brian
So if I understand correctly, the key element is whether the probabilities determined by quantum theory are precise or whether there is uncertainty in the uncertainty? Or is it whether quantum theory is only a higher-level theory approximating some underlying reality in the same way that Newtonian physics is only an approximation of what is more properly described by quantum physics?
Regards,
Chris


Chris
Sorry for the delay. Stuff came up.
Good question. My question did not cover the distinction you raise. It's your first option I am considering.
What I am asking is whether there is uncertainty in the uncertainty (call it meta-uncertainty) - in principle. Is that meta-uncertainty uncertain because we simply don't yet have the capability to measure it or is it an uncertainty which is built into reality (as we currently understand it)?
The question of whether there is a deeper reality which quantum theory only approximates is a separate issue (I'm betting on there being a deeper reality. I guess the inconsistencies between relativity and quantum theory point to that?).
Thanks
Brian

Brian
I would say we are pretty certain about the uncertainty. The Planck constant is used to quantify the minimum uncertainty in the Heisenberg uncertainty principle and is known to a precision of 6.626070150(81)×10^-34 J s (i.e. 6.626070150×10^-34 J s ± 0.000000081×10^-34 J s), which means we know it to 9 decimal places, which is a pretty low uncertainty in my book. This accuracy comes down to experimental precision. Going back to the two 1 kg lumps of carbon 11, I would say that our ability to determine the difference in the number of atoms that have decayed after a certain period will be limited by experimental precision as well. But in this example it is also related to how many carbon atoms there are since it is a statistical process. 1 kg of carbon is essentially equivalent to an infinite number of carbon atoms (~5.5 x 10^25 carbon atoms).
Regards,
Chris
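As a rough check on the figure of ~5.5 x 10^25 atoms, and on why a macroscopic lump looks deterministic even though each atom decays probabilistically, here is a minimal back-of-the-envelope sketch in Python (the molar mass and Avogadro's number are standard values assumed for the calculation, not something supplied in the email):

```python
import math

AVOGADRO = 6.02214076e23   # atoms per mole
MOLAR_MASS_C11 = 11.0      # g/mol, approximate molar mass of carbon-11

n_atoms = (1000.0 / MOLAR_MASS_C11) * AVOGADRO           # atoms in 1 kg of carbon-11
print(f"atoms in 1 kg of C-11: {n_atoms:.2e}")           # ~5.5e25, matching the figure above

# Over one half-life each atom decays independently with probability 1/2,
# so the number of decays is binomial: mean N/2, standard deviation sqrt(N)/2.
mean_decays = n_atoms / 2
std_decays = math.sqrt(n_atoms) / 2
print(f"expected decays: {mean_decays:.2e} +/- {std_decays:.2e}")
print(f"relative spread: {std_decays / mean_decays:.1e}")  # ~1e-13: utterly negligible
```

For a sample that size the relative spread in the number of decays over one half-life is of order 1/√N, roughly one part in 10^13, far below any experimental precision.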

Brian Ness
15/6/2019 04:11:06 pm

So it seems (unless I am misunderstanding something) that we know the value of quantum probabilities to an accuracy of 9 decimal places - and that tiny uncertainty is not an inherent uncertainty - it is due to experimental imprecision.

If I am right then I am suggesting that Kant's view is undamaged by quantum theory. He might claim that all quantum events have causes - a particular category of cause which has a built-in, known probability associated with it.
So we know for certain that a particular uranium atom will be caused to decay at a time which we know with a precise probability.

Shoot me down!!!

Joe M
17/6/2019 11:50:49 am

Hi Brian

We agree that Kant claims that it is synthetic a priori that all events have a cause, and our question is what implications Quantum Theory has for this claim.

I said that QT seems to imply that this is false. To fill this out, consider some single uranium atom. At each point in time, there is a precise probability that it will decay. Wait long enough, and it will decay. But when it does decay, that specific event—its decaying at that time—will not have been caused. So Kant's claim needs to be rejected, or modified in some way.

Your response is that, even so, this uncertainty is known with certainty, in principle at least. You say that Kant "might claim that all quantum events have causes - a particular category of cause which has a built-in, known probability associated with it. So we know for certain that a particular uranium atom will be caused to decay at a time which we know with a precise probability". And so Kant's claim is essentially unaffected.

I agree that, in principle, there is a precise probability that a specific uranium atom will decay over a given period of time. However, I do not think that /Kant/ will admit this notion of a probabilistic cause. I assume that Kant thinks that causes are deterministic (though some very close textual analysis might show otherwise). That is, Kant thinks that (i) if A causes B, then the objective (ie, in-principle) probability of B, given A, is 100%, and that (ii) if B is caused, then the objective probability of B, given the complete state of the universe just before B, is 100%. It seems to me that QT denies this, since, as precise as the objective probabilities admittedly are, they are not 100%.

So I think that Kant has a number of options.

(1) Agree that QT says that these probabilities are not 100%, but claim that these are not objective probabilities, but only due to limits in our knowledge—this is the so-called "hidden-variables" interpretation of QT. Does Chris have an opinion on this?

(2) Agree that QT says that the objective probabilities are not 100%, and admit your notion of probabilistic causation (though this notion would need to be explained, to see if it is coherent). He could then still claim that all events have causes (this time including probabilistic causes), and that this is synthetic a priori. He gets to keep the letter of his claim.

(3) Insist that /causes/ are deterministic, but that /laws/ need not be. (Note that it is much easier to see how probabilistic /laws/ are possible—the laws of QT are not deterministic, but they are as precise as any other laws of nature, as you pointed out). He could then claim that all events fall under /laws/, and that this is synthetic a priori. This may well keep the spirit of his claim, and it may be enough for what he wants to do.

What do you think?

Brian Ness
19/6/2019 12:05:08 pm

Nicely summarized Joe. I agree that Kant (in his deterministic paradigm) fails to take account of a probabilistic universe in his argument. But we are putting ourselves into Kant’s shoes post-quantum - hence your three options open to (a 2019-era) Kant if he is to maintain his claim.
You state “It seems to me that QT denies this, since, as precise as the objective probability admittedly are, they are not 100%.”
I think Chris is saying that the lack of precision in that probability is due to experimental limitations only - and can therefore be taken as being known (100%) in principle (in the same way that the gravitational constant is not known precisely but is taken as being fixed/known in principle). I don’t think this is synonymous with the idea of “hidden variables”.
My inclination would be to go for your option (3) to rescue Kant’s claim. But I am not able to offer very clear reasons because I am still grappling with that vexed question of how ‘causation’ and the ‘laws of nature’ are related.

Joe M
21/6/2019 01:28:28 pm

@ Brian 19/6/2019 12:05:08 pm

I've had a couple of further thoughts, Brian, about what you say.

First, yes, I wasn't quite sure how to interpret the relevant bit of Chris' email, which I take it is: "Going back to the 2 x lumps of 1kg of carbon 11 I would say that our ability to determine the difference in the number of atoms that have decayed after a certain period will be limited by experimental precision as well".

Suppose we start at time t1 with N1 atoms of carbon. We run the experiment, and come to time t2, at which point there are a definite number of atoms remaining. Two questions:

(i) Can we now, at t2, determine (very quickly, obviously) how many atoms now (at t2) remain?—I'm happy to agree that this can in principle be determined precisely, now that events have transpired.

(ii) Could we beforehand, at t1, have determined how many atoms /will/ remain at t2?—I'm presuming that QT says not precisely, and that, in principle, all we can say is that there is a given probability distribution for that number. I gather you would say otherwise.

I think the simplest thing would be just to ask Chris about this more precise issue. I would certainly be interested in what he says.

Second, I'm inclined to agree with you about the best option (viz, (3)) for Kant. I do not see how to make sense of the notion of "probabilistic causation" (option (2)), and the interesting thing about physicists, including quantum physicists, is that they are all still looking for /laws/ of nature. They suppose that, in principle, the world operates on the basis of universal laws, so much so that it seems Kant was right about the human inclination to impose order on events. He was just wrong about what form this ordering takes—it isn't a causal order, but a law-like order.

Brian Ness
27/6/2019 10:55:05 am

Joe
I think you've highlighted a clear point of difference between our conceptions of probability - as applied to the uranium decay example.
Your interpretation seems to be that the number of atoms which will decay between t1 and t2 cannot be known precisely (in principle i.e. ignoring experimental limitations). We can only say that at t2 x-number of atoms will have decayed - plus or minus some probability factor (say 2%).
My interpretation is that if we observe any PARTICULAR atom within that 1kg lump from t1 onwards - the probability that THAT atom will decay over the subsequent time period t1+y is z (i.e. 0.000000something). Under this conception of quantum probability it seems to me that at t2 we can calculate exactly how many atoms have decayed (in principle). In other words if we have 1000 atoms of a radioactive element A at t1 (and A has a half-life of exactly 60 minutes) then quantum theory would tell us that there is precisely a 50% probability that any particular atom will decay in a period t1 to t1+60 minutes - in principle.
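As a minimal illustration of the 1000-atom example (a simulation sketch only, assuming the standard reading on which each atom independently has a 50% chance of decaying within one half-life; it is not meant to settle the interpretive question): the number that decay is not fixed in advance but spreads around 500 by roughly ±16.

```python
import random
import statistics

random.seed(0)

def decayed_after_one_half_life(n_atoms):
    """Count how many of n_atoms decay within one half-life (each independently with p = 0.5)."""
    return sum(random.random() < 0.5 for _ in range(n_atoms))

# Repeat the 1000-atom experiment many times and look at the spread of outcomes.
runs = [decayed_after_one_half_life(1000) for _ in range(2000)]
print(statistics.mean(runs))   # close to 500
print(statistics.stdev(runs))  # close to sqrt(1000 * 0.5 * 0.5), i.e. about 15.8
```

Scaled up to ~10^25 atoms, the relative spread shrinks to something like one part in 10^12, which may be why the count can look exactly calculable "in principle".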
I am not at all certain of my view so I agree that we should get Chris's input. Can I suggest we give him access to this site and invite him to comment? We may need a 'resident' physicist on this forum from time to time. We might also find he has an interest in obscure philosophical questions :):)
How do I go about giving him access if you are willing?

Joe M
15/8/2019 07:19:27 pm

The following is a reflection on Kant's thoughts on judgment, motivated in part by discussion between myself and Brian. Kant is concerned with the application of rules, ie whether something "does or does not stand under a given rule" (B171), for example whether the rules we use to identify chairs (= the schema for the concept of chair) apply in a particular case.

Our Question is: IS THE APPLICATION OF RULES ITSELF RULE-GOVERNED?

Kant thinks not. He gives a familiar regress argument: "If it is sought to give general instructions how we are to subsume under these rules, ... that could only be by means of another rule. This in turn, for the very reason that it is a rule, again demands guidance from judgment. And thus it appears that, though understanding is capable of being instructed, and of being equipped by rules [being instructed = being equipped by rules, it seems], judgment [which by definition applies the rules to cases] is a peculiar talent which can be practised only, and cannot be taught" (B172), by which Kant must mean that judgment cannot be equipped by rules.

Brian seems to think otherwise. If I understand him correctly, he thinks that the application of rules must itself fall under rules, either—I am filling in here—because there is already no question about how the rule is to be applied in a particular case, or because when there is some question the matter is to be determined by the application of a further rule which decides the matter.

I think there is a way of reconciling the two views.

Let us grant, with Kant, that applying a rule to a case requires something more than just being equipped with the rule and grasping the case. We have to do the applying, which we call "making a judgment". Further, we can even grant that the judgment does not involve any other rules—for suppose rule R is applied to case C by a judgment J, but that J involved the application of a further rule R2 which (say) gives the conditions under which R is to be applied to cases; then we can just say instead that rules R+R2 were applied to case C, by some different judgment J*, which now does not involve any further rules. At the end of the day, we just have to apply the rules.

But there is still something in what Brian says. True, the application of rules must involve something that is not codified (the judgment), but it does not follow that this cannot be codified more than it currently is (even if not completely), and, indeed, when the judgment is controversial, we say it should involve more codification and less judgment.

For example, here are some rules (R) about chairs: they have legs, they have a place to rest your bum, they have some sort of backing. Here is an arm-chair. "That's a chair", you judge without any ado, applying these rules without saying anything further. Your application of the rules is uncodified. But still, it could be, and sometimes it should be. Here is a deck-chair. You judge without any ado that it is a chair, but this time others disagree. Your application of the rules is again uncodified, but, in this controversial case, you should say more. You should explain why the rules R apply in this case, by saying (for example) that deck-chairs also have features XYZ in common with other chairs. In effect, you propose some further rule R2 which you now apply in conjunction with the original rules R. Your concept of a chair is now more codified than it was (you added the bit about their being XYZ), though, of course, still not completely codified.

In sum, it may be impossible to codify a concept completely, but even so it may be that we (or at least computers) can always codify a concept more and more, whenever we need to (eg, to deal with a controversial case). This is not a problem. It is impossible to draw a line completely (its being infinite), but even so we can always extend a line segment more and more, whenever we need to do so. That's all we need when it comes to lines, and maybe that's all we need when it comes to concepts.
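To make the R / R2 picture a little more concrete, here is a minimal sketch in Python (the feature names and the "XYZ" clause are hypothetical placeholders, not anything specified above): the base rule R settles the easy cases, and when a controversial case like the deck-chair comes up a further clause R2 is added, so the concept becomes more codified without ever being completely codified.

```python
# Hypothetical base rule R: the uncontroversial marks of a chair.
def rule_R(obj):
    return obj.get("has_legs") and obj.get("has_seat") and obj.get("has_back")

# Hypothetical refinement R2, added only when a controversial case forces more codification.
def rule_R2(obj):
    return obj.get("designed_for_one_sitter", False)   # the "XYZ" clause, a stand-in only

def is_chair(obj, extra_rules=()):
    """Apply the base rule R together with whatever further rules have been codified so far."""
    return bool(rule_R(obj)) or any(rule(obj) for rule in extra_rules)

armchair  = {"has_legs": True, "has_seat": True, "has_back": True}
deckchair = {"has_legs": True, "has_seat": True, "has_back": False,
             "designed_for_one_sitter": True}

print(is_chair(armchair))                           # True: R alone settles it
print(is_chair(deckchair))                          # False: R alone leaves the case unsettled
print(is_chair(deckchair, extra_rules=(rule_R2,)))  # True: adding R2 codifies the judgment
```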

Brian Ness
19/8/2019 04:39:38 pm

Nice one Joe!
I'm going to mull this over for a while - perhaps even consign it to my sub-conscious for a week or two (where I suspect/hope there may reside some meta - and perhaps meta-meta - rules which are not available to my conscious mind)(Ref Kahneman,D. 2011. "Thinking, Fast and Slow").
The above may seem strange to those who have not been party to our discussions - but fear not, all will hopefully be revealed in time. This issue may be central to my doctoral project so it's not going away in a hurry.

Brian Ness
20/8/2019 12:02:51 pm

Joe can I ask you (or anyone else) a question on Aristotelian virtue ethics in light of this discussion on Kantian codifiability vs judgement?
Consider a very complex moral situation that requires a moral response. A fully virtuous person A ( i.e. A possesses phronesis maximally - he is 'practically wise') considers this situation and decides that X is the right moral response.
If we were to introduce another fully virtuous person (or any number of them) to exactly this same moral situation I would expect Aristotle to claim that they would all (without exception) select X as the right response. If this is true then I get the sense that there are rules "all the way down" - there seems to be no room for any judgement per se. If there IS room for this judgement then I would expect to see one or more fully virtuous people selecting a different response.
What am I missing?

Joe M
30/8/2019 12:04:37 pm

The nub of your question, Brian, seems to be whether reliable decision-making depends on the application of rules. To address this, I think we need to distinguish two claims:

(1) Moral Realism: In each situation, there is an objectively right thing to do—ie, there is at least one right thing to do, and this is so independently of what anyone thinks it is.

(2) Moral Codifiability: Over all situations, there is a pattern in the right things to do—ie, there is a rule that describes all the right things to do, of the form "anything is right if and only if it is D" (to simplify greatly), eg the utilitarian principle that an action is right if and only if it maximizes happiness.
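Purely to illustrate what claim (2) would amount to (a hedged sketch; the happiness numbers and the function name are invented for the example): a codified morality is a single predicate of the form "right iff D", such as the utilitarian rule that an action is right iff no alternative in that situation yields more total happiness.

```python
# Hypothetical codified rule of the utilitarian form "right iff it maximizes happiness".
def is_right(action, situation):
    """Claim (2) says some single predicate like this covers every situation."""
    best = max(situation["options"], key=lambda a: a["total_happiness"])
    return action["total_happiness"] >= best["total_happiness"]

situation = {"options": [{"name": "help",   "total_happiness": 10},
                         {"name": "ignore", "total_happiness": 3}]}
print(is_right(situation["options"][0], situation))  # True: no option does better
print(is_right(situation["options"][1], situation))  # False: an alternative does better
```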

Given (1), we can define a "fully virtuous person" as one who, in each situation, chooses an objectively right thing to do. Now, this does not quite imply that all fully virtuous persons choose the same thing in any given situation, because there may be more than one right thing to do in that situation, and they may just choose differently. (Aristotle does allow that it is a bit fuzzy what the right thing to do is.) But we can ignore this complication, and suppose that in each situation there is only one right thing to do.

Our question, then, is whether (2) follows from the existence of fully virtuous persons, the idea being that, to always get the right answer (or to always agree with the other virtuous persons), you /must/ be following some rule, and so there must /be/ some rule that tells them what the right thing to do is. I suspect it does not follow (Aristotle and Dancy will agree).

Here is a little model to show this. Suppose there are 100 different situations, and in each there are 10 options from which to choose. Now, suppose we are God, so we can make morality. Suppose, in each of the 100 different situations, we RANDOMLY choose one of the ten options and make it the right thing to do. Just like that. By definition, the fully virtuous person will be the one who, when presented with the 100 different situations, chooses the right one each time. (How will they end up like that, if it is completely random? Who knows? By luck presumably, but all we need to suppose is that it is possible that such a person exists, as unlikely as they may be.)

In this little model, (1) is true but (2) is false. Claim (1) is true because, in each of the 100 situations, there is an objectively right thing to do—it was made so by God. But (2) is false, because there is no rule that describes all these 100 right things to do—recall, they were randomly chosen by God to be the right things, and so exhibit no pattern. (Yes, it is true that a randomly chosen sequence could by happenstance exhibit a pattern, but there will be plenty that do not, and we are talking about one of them.) Thus, (1) does not imply (2).
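Here is a minimal sketch of the 100-situation model (in Python; the two agents and the "simple rule" are illustrative inventions, not part of the argument itself): rightness is fixed at random, an agent that has memorized the whole table tracks it perfectly, and any fixed simple rule does no better than chance. Realism without codifiability.

```python
import random

random.seed(42)

N_SITUATIONS, N_OPTIONS = 100, 10

# (1) Moral realism by fiat: God randomly fixes the uniquely right option in each situation.
right_option = {s: random.randrange(N_OPTIONS) for s in range(N_SITUATIONS)}

# A "fully virtuous" agent, modelled here as rote memorization of the whole table.
def virtuous_agent(situation):
    return right_option[situation]

# A candidate codifying rule (hypothetical): "always choose option 0".
def simple_rule_agent(situation):
    return 0

def score(agent):
    return sum(agent(s) == right_option[s] for s in range(N_SITUATIONS))

print(score(virtuous_agent))     # 100: tracks rightness in every situation
print(score(simple_rule_agent))  # around 10: a fixed pattern does no better than chance
```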

The virtuous person in the situation makes the right decisions in each case, but they are not following any rules (they just KNOW), and so we have to say that they have good judgment, though we will be mystified about how we might acquire that judgment. (But, remember, Kant thought that good judgment cannot be taught, and this may be what he meant.)

Brian Ness
5/9/2019 02:38:54 pm

Thanks for your thoughtful responses Joe.

You make a good point with your Euthyphroian example (i.e. God defines right and wrong). God could randomly (i.e. without reference to any rules) determine the morally right response to any particular circumstance (given moral realism). On this assumption ‘right action' is not codifiable - we must depend entirely on that mysterious Kantian 'good judgement’ (or Aristotelian phronesis) possessed by the fully virtuous person to reliably identify 'right actions’ in particular circumstances.

If we accept this possibility, what are the consequences for the possibility of ethical action by artificial agents (AI)? [Perhaps I am moving off-topic here from ‘concepts' to ‘morality' but my doctoral project beckons]. Firstly we must surely assume that the capacities of ‘judgement' and ‘phronesis’ which we are discussing cannot (in principle) be possessed by machines governed by AI. It is tempting prima facie to conclude from this that AI might never (by definition) be able to reliably identify the 'right thing to do’. But on reflection this cannot be true. Imagine two autonomous AI-governed robots (B and C) with exactly the same AI coding. We then decide to add one extra instruction to B’s coding - “to act justly”. Thereafter, if a fully virtuous person (D) was to monitor the behaviour of B and C (while reacting to multiple different ethical situations) she would notice that B tended to act more justly than C. D would say that B is more ethical than C. In other words it seems that moral action can be codified to some extent. I think this has been called the ‘weak codifiability thesis’.

This brings me back to your earlier observation that while morality might not be ‘rules all the way down’, there may be significantly deeper levels of moral codifiability in any set of moral circumstances.
It is commonly thought that codifiability implies teachability. But it seems to me that there are two reasons why some sorts of codifiable truth might not be universally teachable:
1. Some people have more mental capability to identify and manipulate rules in a coherent way (e.g. some of us are better logicians than others);
2. There is a limit to the number of pertinent facts a person can consciously hold in mind while trying to reach a codifiable conclusion (e.g. remembering all the cards that have been tabled in a game of Blackjack).
AI is superior to the human brain in both 1. and 2. (think AlphaGo and Deep Blue). So my speculation is that in the moral sphere it is possible that suitably structured AI will be capable of detecting and manipulating deeper levels of moral code (if they exist) than most of us. It may even be that Kant’s ‘judgement’ and Aristotle’s 'phronesis’ encompass a degree of code manipulation which is beyond the conscious/teachable (as opposed to sub-conscious) capacities of even the fully virtuous person.
Hmmmmm!

Joe M
16/9/2019 01:02:29 pm

Hi Brian. Your most recent response prompts various thoughts. In particular, I'm beginning to wonder about my own example . . .

I'm not sure the definition was quite right. I said a "fully virtuous person" is one who, in each situation, chooses the objectively right thing to do. But consider my 100-situation example again, in which God has randomly decided which action (of the ten) in each situation is the right one. I don't know what the right thing to do is, so when I approach each situation, I too decide at random what to do. Now, it is very unlikely, but I might happen to choose the right one in each situation! That would NOT make me a virtuous person, but only a very lucky one. So I should have said something like: a "fully virtuous person" is one who /would/ choose the objectively right thing to do in any situation, whatever that might be. That is, the virtuous person is one who /tracks/ objective rightness. This means the lesson of my example is more complex. There are three cases.

(1) Real life is as bounded as my example—before we start decision-making, there are only 100 types of situation we could be in, and in each situation there is a uniquely right thing to do.

In this case, even if objective rightness is random, it would still be possible for a person (and even more for a computer) to learn the right thing to do /by rote/, and so come to track the right thing to do in each real situation. If real life is as simple as this, then fully virtuous persons are possible, and fully virtuous computers even more so.

(2) Real life is bounded, though the bounds exceed human but not computer capacities—that is, there is still a finite number of types of situation, and these are within the capacity of computers to rote learn, but they are way beyond the capacities of humans to rote learn.

In this case, fully virtuous persons would NOT be possible, but fully virtuous computers would be possible. I take it this is similar to the case you imagine: "my speculation is that in the moral sphere it is possible that suitably structured AI will be capable of detecting and manipulating deeper levels of moral code (if they exist) than most of us". So morality is codifiable (beforehand?), but the code is too much for humans.

But in that case, how can we have /any/ virtuous humans? You surmise: "it may even be that Kant’s ‘judgement’ and Aristotle’s 'phronesis’ encompass a degree of code manipulation which is beyond the conscious/teachable (as opposed to sub-conscious) capacities of even the fully virtuous person". I gather your answer is that certain humans—the virtuous ones—have /sub-conscious/ capacities for right decision-making, even though they are beyond articulation. Hmmmm, indeed.

(3) But the first 100-situation example is pretty mickey-mouse, and real life is probably unbounded—before we start decision-making, we probably do NOT know what type of situations we could be in, though (let us suppose) it remains true that in each situation there is a uniquely right thing to do.

In this case, if objective rightness is nevertheless random, then it will be impossible for anyone (or any computer, no matter how powerful) to track it, and so it will be impossible for anyone (or any computer) to be fully virtuous. You can rote learn what to do only if you have the complete list of what to do before you start acting. Contraposing, if objective rightness exists in real life, fully virtuous people (or computers) will be possible only if objective rightness is NOT random, and so—does this follow?—must fall under rules. Are you assuming, Brian, that fully virtuous entities have to be possible?

Brian Ness
17/9/2019 02:21:03 pm

Hi Joe
I look forward to discussing this face-to-face on Thursday if we have time.
Perhaps one way to narrow the focus of the discussion will be to consider the likelihood that objective rightness is completely (or even largely) random. This seems intuitively unlikely. There are too many obvious patterns (i.e. evidence of foundational rules) in areas of morality which are not contentious - quite apart from the rules-based consequentialist and Kantian worldviews which have persisted for centuries.
Consider torture. Whether we take a folk-morality poll or an academically rigorous poll we find that torture is (and has for centuries been) considered wrong by the vast majority of the population. This is surely not what we would find in a world where objective rightness is random? The consequentialist and Kantian postulates are, of course, founded entirely on this intuition.
Now if we accept the existence of at least some rules in the moral domain then surely we can say that the more rules an agent knows (and can apply) - the more virtuous she will be? Perfect virtue may well be unattainable in principle (eg objective rightness might be partially random), but my question from an AI point of view is whether there are good reasons that a modern artificial moral agent might be more capable of acting in a consistently ethical way than the most virtuous human?
I am pushing us further and further away from Kant - perhaps we should transfer this discussion to a new topic titled “Artificial Moral Agents” (or similar)?



