Meta-Ethics

Preface 1
There is much to preface before diving into the meta-ethical account itself, and I can only hope to cover everything my reader would need in order to understand my position.

First and foremost, any meta-ethical account is likely going to be built on other assumptions. Those assumptions may not be accepted by my reader, thus negating the entirety of the account. I could attempt to argue for those assumptions too; however, there is not enough time or space in a single article to cover all those bases. I’ll try to state in this preface the assumptions that may be controversial to philosophical types, so you know early where you may agree or disagree, but if you do disagree, do not expect an argument for everything within a single article.

Assumption 1) There are other minds, there is an external world.

Assumption 2) What we are, as subjects, is accountable either to mechanistic explanation or to random chance. There is no power like “the will” that is itself uncaused and causes things to happen.

Assumption 3) You are in some way accountable to your brain-states.

As you can see from the above, my views very much sit with the scientific community, even if I think the scientific community upholds these views in a generally ignorant fashion. (As in, I agree with many of their conclusions, even if I think individual scientists couldn’t defend the views). I’m not arguing for reductive physicalism here, nor would I need to in order to make my points.

Preface 2
This preface is to set a couple of stages for discussion. First, I want to state what I think meta-ethics, and I suppose much of philosophy, is. By my view, it is an analysis of moral language. This contrasts with an analysis of morality or morals. A subtle but important difference. The former means that I acknowledge that people use moral language; people say things like “It’s wrong to…”, “You are obligated to…”, “It’s bad to…” or “It is impermissible to…”. However, I am not assuming, before starting my analysis, that there is anything real behind these words or that they make sense. The latter, an analysis of morality or morals, would make the mistake of assuming before analysis that there is something real to analyze. It simply leaves out the possibility of it all being one giant confused language game. This honestly won’t come up in my arguments very much and you can somewhat ignore this paragraph, but I do think it’s helpful to understand where my analysis begins.

Secondly, I just want to make some disclaimers. There is no way I can seriously give a comprehensive defense of my meta-ethical stance in an article. I’d honestly want to spend a good year making something substantive, so this article will serve as a springboard for that future project, and at least as a detailed enough answer for people who ask about my meta-ethics. Additionally, sometimes I’m going to use sentences that… I don’t really mean, mostly because they would require an entire article themselves just to explain. You know how intro-level classes in college/university often tell you that they are giving you a simplified/wrong explanation because the truth of the matter would take too long? Sometimes it is like that.

Thirdly, I want to clarify that nowhere in my meta-ethical theory do I address Moral Semantics. This is the field that tries to understand what moral sentences are trying to say. I know a lot of people find moral semantics to be an important part of meta-ethics. However, I find it a problematic discussion. I don't think everyone means the same thing by moral sentences. Some people are trying to talk about objective features of the world, some are expressing attitudes, some may be emoting, some may be making cognitive statements about themselves. What I am mostly concerned with is how morality exists, how we could come to know it, and what could possibly make a moral sentence true. In later sections, when I talk about the meaning of moral sentences, I am not talking about what people actually do or what their intentions are; rather, I am giving an account in which moral sentences could plausibly obtain truth.

Moral and Ethical Distinction
I’m going to start by making some broad distinctions between Morals and Ethics. We will explore each in more detail individually.

Morals: The things that are morally valuable.

Ethics: The relationships between moral values and the world/others.

Morality – Definition
You may have noticed that my definition of morals above has a circularity problem: I’ve used the word “moral” in my definition of morals. That’s true; I just wanted a quick summation that can be used to show a distinction.

Let’s dive into the concept of morality more deeply.

Morals are a type of value: an object of positivity or negativity. I say “type” because there are different types. We talk about aesthetic values, things that are pleasurable to us by their form/presentation. We can talk about taste pleasures as a type of value. Or friendship/love.

What makes a moral value distinct from another type? A moral value is a subjective value whose presence or absence one does not accept when compared to its counterfactual. These values come with a certain inconsolability. Let’s explain that in more detail, with illustrative examples.

Imagine John goes to the store to pick up some bread. On arrival, the store tells him they are sold out. John has wasted time getting dressed and coming out here for nothing and is quite annoyed. However, does John accept it? No matter how pissed off John is, he understands that this can happen, and despite not liking the situation, he can, in fact, accept it. He would never go on a crusade looking to change the world so that a situation like this never occurs again. He doesn’t think stores sometimes not having bread must be eradicated from this world. John is consolable.

John, on the way back home, passes an alleyway where he sees a woman being raped, crying out for help. John again is quite pissed off. Just like the store not having bread, this event invokes a negative reaction within John. However, this time, John cannot accept it. He knows that rapes sometimes happen, but that doesn’t matter, because on his view, they never should. Events like this should be eradicated from the world. John is not consolable.

Let’s relate this back to my definition of a moral value. We often have values whose objects we find positive or negative, but whose presence or absence we nonetheless accept. However, when such a value crosses over into unacceptability, that is when we cross into a moral value. There is more to say here, such as “If moral values can change, doesn’t that mean they can be accepted?” The short response to this is: not without changing who and what you are. It isn’t accepted by the same sort of person; the entire character of the accepter has changed. We can consider acceptance without character change to mark a non-moral value, and acceptance requiring character change to mark a moral value.

The last part of the definition which was not addressed was “when compared to its counterfactual.” That means that, in order to hold a moral value, one must be able to imagine a scenario different from the one valued/devalued. If I see a rape, I must be able to imagine a scenario in which the perpetrator does not rape. In cases like the trolley problem, if I were to take a moral stance on it, I must be able to imagine the available options. This puts moral values in a hypothetical space; they represent how we want things to be in relation to how they are. We may think that some being, perhaps an animal, cannot accept a particular state of affairs, that it is inconsolable, and yet, because it does not imagine scenarios otherwise, it does not enter the moral domain.

There can be overlap between a moral value and, say, an aesthetic value. Nothing about my definitions makes these types of value exclusive of one another. One may find that certain sensory experiences, certain forms, bring a sense of aesthetic pleasure. They may also consider any counterfactual to these sensory experiences simply unacceptable, something to be stricken from the world. Some people may find that any setback, slight or negative experience is unacceptable, and so all negative sensations to them are of a moral sort. Some are so egotistical. It is less common, among humans, for there to be so much overlap, yet such a thing could and does occur.

Morality 2 – Subjective Types
As I’ve stated above, moral values are subjective. What is meant by that is that values exist within subjects, subjects who can evaluate. When the sentence “murder is bad” is uttered, we can take its truth to be about the subject uttering it. Much like “sushi is tasty” can grammatically sound as if it says the sushi contains the property “tastiness,” we know that tastiness arises from the interaction between the sushi’s physical components and us as subjects. Analogously, a moral statement like “murder is bad” is a combination of a descriptive account of what murder is and a value judgment coming from the subject, derived from the subject’s assessment of it. The murder itself does not contain the badness. (I am skipping over much linguistic analysis of terms like “murder” containing valence inside them for the sake of brevity; the point can still follow.)

This account is a descriptive account of moral value. Moral sentences can describe subjects and are true/false in relation to how accurately they do so. This contrasts with a prescriptive account, which says that moral sentences inform us of what should be important, and that they can be assessed for truth as to whether that really is what should be important. I must admit, I have no idea how any epistemological case could be given for the prescriptive account. One may say that moral sentences are prescriptive, but that ultimately their truth is a description of whether the subject really wants to prescribe the state of affairs they have uttered. This view I am sympathetic to as well, as its epistemic grounding is a description of the subject.

We must then talk about the ways in which we, as subjects, obtain the truth of these types of sentences. I lay out three descriptive distinctions: Moral Experiences, Moral Dispositions, and Moral Beliefs.

Moral Experience: This can be characterized as something akin to an intuition or emotion that follows witnessing or imagining an event. It is a type of sensation that something about one’s current situation is wrong or right. In our example of witnessing a rape, we can imagine a type of disgust (something different from mere grossness) hitting us. It’s that feeling of being bothered or disturbed, the feeling of desire that demands the absence or presence of something.

Before I go into Moral Dispositions, it may be good to explain what dispositions are. A simple way to understand it is that certain objects, when met with certain conditions, will produce certain results. This relationship between object and condition is a disposition. We can say that “Sugar is disposed to dissolve in water” or “Glass is disposed to shatter when dropped.” I can say I am disposed to enjoy chocolate. I’m not currently enjoying chocolate, but if chocolate were to be put on my tongue, I’m disposed to a pleasurable sensation. It is also likely the case that I’m disposed to pleasurable sensations from foods I’ve not tried or never even heard of. I do not need to have had the pleasurable experience to be disposed to have one. In physical terms we can say that I have a brain that can receive certain sensory inputs and is disposed to use that information to create certain mental outputs.

Moral Disposition: A description of oneself as a subject, such that they are disposed to have a moral experience in relationship to their understanding of certain events/states of affairs.

In this, we may translate “murder is bad” as “I am the kind of person who, when witnessing/imagining a person being murdered, is disposed to having a moral experience.” Moral dispositions become the basis of the truth condition for moral sentences. “X is wrong” or “X is good” is true if and only if you have the corresponding moral disposition toward X.

Moral Belief: A moral belief is a propositional statement, which the subject believes is accurate, about their moral dispositions.

Thus, when I say “murder is wrong” I am expressing a belief about the sorts of dispositions I hold. That is not to say it is true that I hold such a disposition. This creates a certain disconnect between the moral sentences we utter and the truth of the matter, and distinguishes my type of subjective morality from a subjective morality that would say “If I express ‘x is wrong’ then it is true that I, as a subject, find x wrong.”

My meta-ethics allows for the possibility of a subject expressing “x is wrong” and that belief being wrong, due to its failing to accurately describe a disposition.

Such disconnects are not odd between our belief states and our dispositions. Let’s look at some examples to motivate such a disconnect:

As a child, John enjoyed sour gummies. When John ate a sour gummy, he would experience a pleasurable sensation. John also developed the belief that he enjoyed sour gummies because of those experiences. John, now 20, has not had a sour gummy in quite a long time. In that time, unbeknownst to John, his tongue and brain have undergone alterations, and such alterations changed the nature of John’s dispositions. John’s tongue is now in such a state that it would not generate a pleasurable experience. However, John’s beliefs have no reason to change; he is not aware of any of these changes. John sees gummies in a store and thinks, ‘Boy, I sure liked these as a child, I should get them as a treat.’ To John’s surprise, they are overly sour and sweet, not enjoyable at all. John’s beliefs meet John’s new dispositions.

One reason for a disconnect is that our bodies and brains change, and it’s not as if we receive patch notes. We hold beliefs that were once accurate but, considering the changes, no longer are. A second example goes as follows:

John meets someone from Ecuador for the first time. He tries to get to know him, but the man is rude and abrasive. The man tells John he is American scum. John forms a belief: “People from Ecuador are nationalistic, hate Americans and are rude.” And John forms a belief about his dispositions: “I would not enjoy being around people from Ecuador.” However, we know that John’s belief is a hasty generalization; John formed it on a sample size of one. When John is introduced to another person from Ecuador, he is suspicious and anticipates a bad time. However, when the man turns out to be rather jovial and friendly, John has a great time. John’s beliefs meet John’s real dispositions. In this case, John misattributed what it was that bothered him: it wasn’t people from Ecuador, it was just people who acted rude and nationalistic. We don’t always form accurate beliefs about what is really motivating us; sometimes we attribute our negative sensation to some other property by a mistake of reasoning. Such can be the case with moral beliefs.

Sometimes a person may claim they are a hardcore utilitarian. They believe that utilitarian calculus is what is good. However, when counterexamples such as the Utility Monster or the Experience Machine are explained to them, they engage their moral imagination (imagining the scenario and its counterfactual) and have a negative moral experience toward those examples. They may then give up their beliefs.

However, I caution: do not underestimate the power of belief. Beliefs can be just as motivating as moral dispositions/experiences themselves, even if they are false. Someone may continue to hold a moral belief because it is easier than self-reflecting, and they would rather believe they have a correct account of morality than discover their true moral dispositions. Someone may continue to hold a moral belief because it is required as part of social cohesion, perhaps as part of an ingroup over society in general.

This, I think, is how we get people who say, “I don’t feel X is bad, but I believe X is bad.” Some say they came to these beliefs “through logic” (though no logical system I have ever heard of could tell you what is bad), or that they just “know” it is bad. However, I argue that these beliefs are not reflective of any sense of morality, but arise through some alternative process. Perhaps they trusted someone else who influenced them into these beliefs, perhaps they were beaten into believing it, perhaps they are just caught up in a language game about morals, one they find they can continue participating in despite it reflecting no internal reality.

It is my contention that moral beliefs that do not correspond to moral experiences or moral dispositions are malformed beliefs, motivated by some non-moral process, wrapped up in some meaningless language game.
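The belief/disposition structure described in this section can be put in concrete terms with a small toy model. To be clear, this is only an illustrative sketch of the correspondence idea, not part of the theory itself; the `Subject` class and the particular acts listed are hypothetical assumptions of mine, chosen to show how a belief can fail to match a disposition.

```python
# Toy model: moral beliefs are propositions about dispositions,
# and a belief is true iff it corresponds to an actual disposition.
# All names and data here are illustrative, not canonical.
from dataclasses import dataclass, field

@dataclass
class Subject:
    # Acts the subject is actually disposed to have a moral experience toward.
    dispositions: set = field(default_factory=set)
    # Acts the subject *believes* they are disposed against.
    beliefs: set = field(default_factory=set)

    def belief_is_accurate(self, act: str) -> bool:
        """A moral belief about `act` is true iff it matches a disposition."""
        return (act in self.beliefs) == (act in self.dispositions)

john = Subject(
    dispositions={"murder", "rape"},
    beliefs={"murder", "eating meat"},
)

print(john.belief_is_accurate("murder"))       # belief matches disposition: True
print(john.belief_is_accurate("eating meat"))  # belief with no disposition: False
print(john.belief_is_accurate("rape"))         # disposition with no belief: False
```

On this sketch, the last two cases are the "malformed" situations discussed above: a professed belief with no underlying disposition, and an unrecognized disposition the subject holds no belief about.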

Morality 3 – Origins and Evolution
An important question may be: From where do we get our moral dispositions? Why do they change?

My argument for this is an argument for the formation and reformation of values in general. First, certain values are innate to us. Common forms of brain and body structures give us certain value dispositions. These values tend to be survival-oriented and, either way, evolutionarily derived. Simple things, like a desire to suckle, eat and be warm. Perhaps some initial way of interpreting faces, such that a smiling face is pleasurable. I cannot say for certain which innate values exist at birth and which do not; however, it should be enough to understand that we come into existence with certain things being positive and certain things being negative, such that we can survive.

Most human beings, in this regard, are similar in their starting values. We may allow for possible genetic outliers: perhaps a child lacking empathy systems, an inability to recognize faces, or a disgust toward all food. Some of these outliers have survival potential, and some do not.

Values, from this initial starting point, are associative. Thus, whether a new value is formed (that X is bad or good) depends on its association with something already bad or good. A child may take simple values and, through positive interaction, associate value to its mother, and thus to other humans.

Values can go from being dependent to being independent, and more specific. For example, a child may be taught soccer by his father. The child may enjoy the exercise and the activity with his father, thinking nothing of soccer in itself. However, over time, he may develop a value for soccer without the presence of exercise or parental activity. Association is often dependent at the start: “I like X because of the presence of Y” becomes “I like X, regardless of the presence of Y.” Perhaps a neurological explanation could be given, stating that a value is independent when it forms its own independent neuronal chains within its system.

It is also entirely possible that brain damage can modify these value associations. Even the act of thinking can modify them: connecting two ideas may alter a value disposition. These processes are associative, but entirely internal.

Thus, when it comes to moral value, we can expect moral attitudes to change when new associations are introduced, when new connections between pre-existing ones are made, or through brain damage/alteration. It is my contention that, because value experiences are either associative or innate, there is no ultimate abstract truth about value, and its existence is ultimately happenstance. As Schopenhauer states, “Man can do what he wills but he cannot will what he wills.” We may have values, and we may have values that wish to change those values, but we cannot, as subjects, get underneath it all and evaluate abstractly and independently of those systems. We are those systems.

Ethics
With the understanding of morality laid out, we can shift our attention to a brief discussion of ethics. Brief because, while I think there is much to say about ethics, I merely want to lay out a description of how it could go, rather than lay out a list of best practices. Morality, as I argue, is true in virtue of subjective dispositions. However, just knowing the sorts of beings we are individually does very little in telling us how to achieve situations that promote our values.

Let ethics be the conversation that tries to understand how best to go about exerting our power in the world such that we can shape it in ways that promote our values. Much of the time, ethics is about other people. Other people have power equal to ours and can enforce their will on us, and we ours on them. We may want to shape the world in a particular way and others may wish to impede us. The first recognition of ethics is to realize that we are not all-powerful and cannot always get our way.

Thus, much of ethics is compromise with other people. Another aspect is learning in what ways others are like you in terms of values. When a common end is agreed on, ethics can continue by trying to develop systems, whether by law or government, by implicit agreement, or however our social interactions go, such that we act in a way that optimizes the promotion of those goals.

Ethical sentences follow the hypothetical imperative: “If you value X, then you should do Y.” If you value being thin, then you should diet. If you value people not being killed, then you should take preventative measures against killings and collectively punish those who do kill. In this sense, ethics is objective. There are objectively better or worse ways of accomplishing goals, and the truth of those sentences is determined by whether or not the way you suggest works.

Moral/Ethical Conversation and Debate
I often find myself distressed at certain pointless avenues that moral discussions take, especially when I consider moral conversation to have deep possibilities available to it. Much of moral debate revolves around stating a value as an objective truth, rather than our subjective relationship to it. Conversations like this tend to proceed by simple assertion, a proclamation that only sentient things have moral value, or that only the virtues are of value, and yet no epistemic process seems adequate for both participants to jointly endeavor to settle the matter. They must keep asserting, or calling into question each other’s nature, or using other such fallacious methods. Much of this line of reasoning seems, by my view, to come about from an inability to accept the consequences of there being no objective truth about values (much like those who cannot accept the consequences of there being no objective purpose to life). They state things like “Well, if morality is subjective, then Nazis were just as correct as we are about our moral statements” as some sort of gotcha. If that is how reality really works, then being upset about it does nothing. Sometimes we must accept harsh truths. And nothing about my account says you must tolerate the Nazis; in fact, you likely cannot. That is the nature of your moral reality.

However, as said before, moral conversations can still run deep. I shall give ways in which a moral conversation can still be fruitful despite an ultimately subjective morality:

1) We may question whether someone’s moral beliefs match their moral dispositions. Perhaps because we know the person personally, or because the vast majority of people are a certain way, we may make inferences as to what their dispositions are, and why we think it unlikely that their belief statements match those dispositions. We can often provide counterexamples to belief statements in order to motivate them to change their description.

2) We may change someone’s understanding of the world. I may value X because I think it contains property Y. If, however, I am wrong that it contains property Y, and I am convinced that this is the case, I may stop valuing X. (It may also be the case that I never really valued X because of Y, I just valued X independently, but had a mistaken belief.) As an example, one may not care about stabbing fish because they believe fish don’t feel pain. If, however, they become convinced that fish do feel pain, they may come to value the fish. Conversations about the world can influence where our values are allocated.

3) Ethical conversations about the most effective way to promote shared values may provide fruitful conversations. Activists may think of how to convince others, lawmakers may revise and update laws and political philosophers may consider forms of institutions that work best for them.

Moral Progress
I stated before that humans have innate values, and that values grow by association. I think from this observation we can suppose a type of moral progress to exist. If a being’s moral (or other value-type) development is based on its interactions with valent things, then when rival possibilities are presented, the one closest to the innate values of humans will be developed into, due to its easier associations.

A simple example: it is easier to get someone to accept a form of government that promotes the freedom of their values than one that tortures them or requests that they kill themselves, all else being equal. (And very rarely is all else equal.) We develop ethical systems due to our developed sense of morals, and those ethical systems shape the next generation of development, and whatever has the greater association with innate human values is more likely to survive. Systems that provide people with food and shelter/warmth will be associated into more easily than those that do not (unless there is some even higher pay-off).

While this is a bit simple and watered down relative to all the interactions of human life, we can see a mechanism that allows certain ethical systems to persevere and others to die out, based on the ease of their interaction with basic, innate human values.