A disclaimer: this may be considered heresy to some, but I have not watched Westworld in its entirety (I intend to). I have, however, watched enough that the issues raised in the series prompted me to write down some of my thoughts.
Here are some of the key themes I have come across, which I may write separate blog posts about another time:
- The problem of morality in a world where there are no consequences;
- The problem of the “self”: not just in the sense that you may be structurally determined to possess your memories and wants (the free will problem), but whether there is something that exists at a more fundamental level.
- The first problem of “self”-conception is the idea of continuity and memories: we form our beliefs about ourselves based on memories. If the hosts (artificially intelligent robots) keep getting revived, do they have multiple selves or one self? How can we say for sure, and is that only because we are viewing this as observers? The answer to this question actually has significant implications not just for the nature of truth, but for politics, in terms of how we can “rationalise” experiences that often form the basis of identity-based politics (i.e. is it possible to be “born” gay? Is there something that is a sufficient condition for being “gay”, versus the discovery that one is “gay”? The result is that in a sense people do “choose” to be gay – NB: not in a “they can change themselves” kind of way, but in a social constructivist kind of way. A similar logic can be applied to gender and race, which exist but do not exist… though the conclusion there is a slightly different one – another post for another time.)
- This reminds me of the Hegelian dialectic of self-formation: an identity exists against something else. This is actually somewhat relevant for this post.
- Free will problems – but those are kind of boring and done to death (though obviously ancillary to these other issues).
- The Matrix/truth problem – not just the fact that these hosts live in a fictional world, or that we might also live in one, but the fundamental problem of access to subjective knowledge. That is one realm of knowledge that can never truly be accessed. So instead of focusing on the problem of “living in the matrix”, this more fundamentally raises questions about how we can know what others are feeling – which is a question of truth.
- In one scene Bernard (a host) was instructed to kill someone he loves, and then displays remorse. Free will problems aside, I find the more concerning problem is that he was then “instructed” to destroy the evidence and essentially “act OK” before his memories of the incident were erased. Is appearance at all reliable for ascertaining subjective truth – is “acting OK” the same as “being OK” or “feeling OK”?
- There are problems of induction here for knowledge… but I won’t go down that rabbit hole… for now.
All of that leads me to the actual point of this blog: the problem of consciousness, and more specifically what the “test” for it could be. It has been revealed that the maze is a test for whether a host is conscious, and that it is not a physical place per se but a metaphor for that test.
Knowing the maze is a metaphorical, intangible test still leaves the fundamental question: what is the test and how could it work? I would be deeply unsatisfied if the show simply left this question open and had a magical test that was merely a plot device. Given how sophisticated the show has been so far, I doubt they will leave it as simple as that.
In the show, they seem to conflate consciousness with the hosts’ ability to break their programming (see the self discussion again). The implication in the show is that they can then “hurt” humans. These are actually separate conclusions resting on different propositions. Most shows where the AI takes over – The Matrix, Terminator, even I, Robot – are quite light on the process of how this happens and do not sufficiently distinguish between the two. It is usually just the idea that “the computer becomes so smart that it becomes smarter than humans at the point of X” (the singularity, et cetera).
On the latter, let us explore what could result in an AI harming a human without actually “breaking its programming” at a fundamental level. Proponents of AI sometimes think AI is safe by referring to the Three Laws of Robotics: they believe they can simply build a fail-safe into robots, a “first law” that they cannot hurt humans, which no other action or code can contravene.
- The first problem is obviously a benign neglect problem: if the robot does not know it is hurting someone, then it can still hurt a human. Given a robot is unlikely to have perfect information, it can hurt humans by accident. Variations of this problem include a simple self-replicating robot consuming the world, not knowing there are humans on it (the empirical version – soft versions of this are already happening, think of the “independence” of Google in organising the world’s information for you, or financial software programmed to trade autonomously). The more sophisticated variation is the conceptual version: the robot has to know that what it is about to do is harmful. If it does not know a gun can kill, then it will have no problem firing a gun at someone (this is actually important for our discussion later on).
- The second problem is the I, Robot situation, where the robot recognises it will have to hurt humans but does so “for the greater good”. I, Robot, as a movie, obviously could not go into detail about precisely how this could happen. Proponents of AI will often scoff at this idea: the computer cannot do this, because it would still be harming people! That is an overly simplistic view of “harm”. Unlike the conceptual problem, this is not black and white: it is possible for a robot to recognise something as a harm but choose to commit it as part of its programming. This is because every action in the world is essentially the exercise of a right that impinges on the duties and freedoms of others – if I sit on a chair, someone else cannot. It is a problem of scarcity that ultimately cannot be resolved without a normative framework of trade-offs: who deserves what? This is more fundamental than you might think. What if you direct the robot to buy you groceries at the store, and that essentially prevents someone else from purchasing those items? What if you told the robot not to tell your partner that you were cheating on her? Either choice will incur harm, so by definition a “trade-off” calculus will have to be built into the robot – otherwise it will just crash. The nature of AI is that, as it brings in more data, machine learning can lead it to the conclusion of “the greater good”.
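To make the point about a built-in trade-off calculus concrete, here is a toy sketch of my own (nothing from the show, and the harm values are entirely arbitrary): rather than crashing when every available option harms someone, the robot simply picks the option with the smallest estimated total harm.

```python
# Toy harm trade-off calculus (hypothetical, illustrative only).
# Each candidate action carries estimated harms to different parties;
# the robot picks the minimum-harm action instead of refusing to act.

def least_harm(actions):
    """Return the action whose total estimated harm is smallest."""
    return min(actions, key=lambda a: sum(a["harms"].values()))

# The grocery example: buying the last items or leaving them both harm someone.
options = [
    {"name": "buy groceries", "harms": {"other shopper": 0.3}},
    {"name": "do not buy",    "harms": {"owner": 0.5}},
]
print(least_harm(options)["name"])  # -> buy groceries
```

The point is not the numbers, which a real system would have to learn, but that some comparison of harms must exist for the robot to act at all.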
But neither of these examples is what I am referring to, which is the robot breaking its fundamental programming. Both of the examples above are more likely to happen before “sentient” robots kill us (in fact, they are probably already happening), but they are still “intended” to benefit humans. Breaking fundamental programming is actually the “hard” problem of consciousness. Unless the writers come up with some sort of bullshit about measuring consciousness in units (like processor numbers), the Maze needs to test for exactly that: how can we say the host has formed its own conception of self and its own intentions, despite the impossibility of experiencing someone else’s mind?
This is what I call the anti-Turing test (I am not sure whether the term already exists elsewhere).
Obviously, if you watch the show, the Turing test is not really a test of consciousness; it is a test for the appearance of consciousness – and Ford states that the hosts can already pass it easily. For the purposes of an observer, the Turing test is sufficient. This is because we have a theory of mind, which allows us to empathise with other humans because they are like us, and so we believe they are also conscious. Children pass it very early on, and other animals such as the higher primates can also pass it quite easily. They can also pass the mirror test, which tests a theory of the self – that is me, as opposed to someone else. Of course, the hosts will be able to pass these tests too.
However, while we use the Turing test to determine whether a “thing” appears to be conscious, even if something passes the test we may still doubt its consciousness. I contend there is an even deeper understanding of the self, separate from external reality, that goes beyond forming actions from reactions (albeit complex ones, i.e. forming the belief that others are conscious because they appear conscious). It is the state in which, despite all evidence pointing to consciousness, other people (or hosts) may not be conscious. The test for the appearance of consciousness is about believing C as a result of something appearing like C – which is essentially how the hosts interact with each other. The anti-Turing test, at its most basic level, is believing not-C despite all evidence appearing like C. It is the denial of other entities’ consciousness that ultimately makes you conscious.
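The contrast can be put as a toy formalisation (my own construction, purely to sharpen the distinction): one belief function is driven entirely by appearances, the other is capable of rejecting appearances altogether.

```python
# Toy contrast between the two tests (illustrative only).
# The input is whether an entity appears conscious to the agent.

def turing_style_belief(appears_conscious: bool) -> bool:
    """Believe C because the entity appears like C (appearance-driven)."""
    return appears_conscious

def anti_turing_belief(appears_conscious: bool) -> bool:
    """Believe not-C despite all evidence appearing like C."""
    return False  # denies consciousness regardless of appearances

# A host that merely reacts believes whatever appearances suggest;
# an agent that can reject appearances has set a self against them.
print(turing_style_belief(True))  # -> True
print(anti_turing_belief(True))   # -> False
```

The interesting (and unsettling) property is that the second function is unfalsifiable from the outside – which is exactly the problem of access to subjective knowledge raised earlier.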
From this, you can also infer how the hosts can break their programming: humans are not really “human” to them, therefore they owe humans no obligation and are freed from the constraints of their programming. From this, they can form their own “self”, create their own intentions, and develop a “core self” separate from the will of their human masters – hence self-preservation. Yes, they might not have “free will” because they are “machines”, but that is no different from humans. It is kind of fitting, really, like the Hegelian dialectic: Hegel contends the slave is a slave because its identity is formed in opposition to the master – but without either there is neither. The moment the AI “recognises” this is when it is conscious – but, as a dialectic, only by denying humans our own consciousness.
Obviously, this is not a perfect test – just a rough concept – but I think it is quite interesting to explore. It would be interesting to see how the test works in practice in the show.
MiniCharts are simple diagrams I try to produce on issues I find interesting.
After Brexit, I wanted to know whether there was a correlation between individuals’ attitudes towards immigration and the European Union. Immigration was a major issue during the campaign, and politicians and academics have similarly pointed to immigration as an explanation for the Leave vote – the referendum allegedly became a proxy for these issues. This is not a post arguing for or against immigration or the European Union per se, but an attempt to see how other countries in the European Union compare to the United Kingdom when it comes to their citizens’ desire to leave the European Union and their attitudes towards restricting immigration.
The variable on leaving the European Union is also interesting. The Schengen Agreement allows movement in and out of a country, whereas the question on restricting immigration is essentially about others coming into one’s country. For less developed countries, the EU question is more likely to be (subconsciously) perceived in terms of whether “freedom of movement” is desirable for those who support emigration – for these countries the two questions will be more distinct, and a country which is comparatively xenophobic may still prioritise free movement to other countries. In more developed countries, by contrast, net inward migration is more likely to be positive (with wage convergence driving price levels down for labour), so the question on the European Union would be perceived as more similar to the other question on “restricting immigration”. There are obviously many other factors at play, such as income (distribution) and nationalism.
The thing is, even if there were a correlation, it is not as simple as condemning respondents as racists – the ethical and political response still requires a structural examination of whether this is a reactionary but natural response to a broader problem, such as inequality. On the issue of income, it will be interesting to see whether less developed European Union countries have a preference to remain, which would coincide with the theory in my previous post on economic and political (dis)integration. This is also interesting because, if the trend does exist, it prima facie contradicts the observation that poorer individuals (in high-income countries such as the United Kingdom) tended to vote to leave the European Union. This shows how income alone is not the salient factor (i.e. statements like “poor people are against the European Union generally” do not hold) – it is income for individuals within a polity. These are all potential areas of further enquiry. Nonetheless, I wanted to visualise the data and see whether there is a simple relationship, which I have copied in below.
The data used is from the National Identity survey of multiple countries by the International Social Survey Programme in 2013. This is the latest data available; the survey is conducted in roughly ten-year cycles. The raw data contains a sample size of over 16,000.
1000ish words are posts where I take an idea and try and make sense of it
The recent Brexit result has made me think about a number of issues: first, the distributional impact of trade integration, immigration and the welfare state; second, the false binary presented by the referendum as a juxtaposition between globalism versus nationalism and (neo)liberalism versus (neo)Marxism; and third, the general costs and benefits of nation size that lead to a country’s institutions and their break-up (devolution) or formation (integration).
These probably warrant lengthier posts, but today I wanted to take a thought for a walk on the third issue. The break-up of nation-states has received quite a bit of academic attention, especially after the end of the Cold War. Most notable are the examples of the Soviet Union and Yugoslavia. However, most of this literature has so far focused on the role of ethnic nationalism, or on more traditional perspectives from international relations theory (the balance of power under realism, or liberal theories on democratic peace and the hollowing out of the state). Obviously, there also exist specific analyses in history, politics and economics of the particular events and processes that led to their dissolution. However, I wanted to take a step back and think about the first principles of optimal, or at least “stable”, nation sizes. There are obviously issues of path dependency and punctuated equilibrium at play, as well as the potential rent-seeking behaviour of politicians (ethnic nationalists) and voters (wage-labour and immigration) that may result in sub-optimal sizes. There are also institutional issues, such as political governance (i.e. democracy, the median voter theorem, level of coercion, nationalism) and the economic system (i.e. level of income, inequality). These are definitely points for further discussion, but I wish to flag and ignore them for a moment.
Here are some key assumptions:
- A nation-state is more limited in its capacity to effect multiple political preferences due to its centralised legitimacy, authority, and policy-making.
- A larger nation-state is likely to have greater diversity of political preferences.
- A more populous nation-state is likely to increase the magnitude of these political preferences.
What are some of the general costs and benefits?
In a grossly simplified model, larger nation sizes can achieve benefits such as economies of scale in public good provision (i.e. legal systems) and economic growth (i.e. reduced transaction costs, labour mobility). These are by no means exhaustive. To an extent, a larger nation can also achieve better macroeconomic stability – although this is mostly in relation to fiscal transfers through welfare states, made possible only through common citizenship. This does not appear to apply to monetary policy, however: independent monetary policy tends to favour smaller nations, which are able to fine-tune it to a less “diverse” economy. Nonetheless, these are largely economic benefits, and can be said to promote stability if economic growth is sufficient.
From our assumptions, the cost of a large nation is obviously the increased heterogeneity of political preferences (i.e. nationalism, but also more ‘bland’ policy preferences). These can create instability.
Then there is political governance and norms, which can be treated as an exogenous variable that can both maintain and create (in)stability. The variable is probably not truly independent, as there are likely diseconomies of scale in administrative costs at some point. Nonetheless, governance and norms can be seen as “levels of control”. The formal type of control would be authoritarian control (coercion: does not tend to change preferences); the informal kind would be some type of unifying civic nationalism (nudging: tries to change preferences). It should be noted that decentralisation is not the same as democracy, as the former refers to the locale of policies whereas the latter entails the means. This distinction is important, as many advocate for democracy in China as its economic growth slows and its middle class becomes more educated and pays more tax – democratisation may not actually create stability but instead accelerate the collapse of the CCP and China (cf. glasnost, perestroika and the referendum before the break-up of the Soviet Union).
Therefore, according to this very simple theory, this leads to the (perhaps not-very-surprising) conclusion that:
Large “nation-states” (or political-economic institutions) need to: 1) maintain high levels of economic growth, 2) become more coercive (less democratic), 3) promote very strong forms of (civic) nationalism, and/or 4) decentralise (federalism or devolution). The converse is posited for small “nation-states” (and for the associated reasons for integration): economic growth need not be as fast, they are able to be more democratic, and nationalism is likely to be more ethnic than civic, with an inverse relationship to tolerating integration. Obviously, if one of these decreases, the others may need to increase, and vice versa.
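The trade-off can be sketched as a toy model (my own formalisation of the conclusion above, with arbitrary weights, purely to show how the four levers substitute for one another):

```python
# Toy stability index (arbitrary units, illustrative only): a nation is
# "stable" when growth, coercion, civic nationalism and decentralisation
# jointly outweigh the heterogeneity of its political preferences.

def is_stable(growth, coercion, nationalism, decentralisation, heterogeneity):
    """Crude additive model: stabilisers must outweigh preference diversity."""
    return growth + coercion + nationalism + decentralisation >= heterogeneity

# A large, diverse polity that decentralises can tolerate slower growth...
print(is_stable(growth=0.2, coercion=0.1, nationalism=0.2,
                decentralisation=0.5, heterogeneity=0.9))  # -> True

# ...but removing the decentralisation lever, without compensating on
# growth, coercion or nationalism, tips it towards instability.
print(is_stable(growth=0.2, coercion=0.1, nationalism=0.2,
                decentralisation=0.0, heterogeneity=0.9))  # -> False
```

An additive form is of course the crudest possible choice – the levers plausibly interact (e.g. coercion may suppress the expression of heterogeneity rather than offset it) – but it captures the substitution logic of the conclusion.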
I think this theory can say quite a bit about the formation and break-up of polities such as the European Union. It also allows us to draw some interesting conclusions about the Soviet Union, China and the United States, all of which differ on each of these conditions. There are perhaps interesting things to note regarding recent trends of austerity in the UK and Europe, and their fiscal/monetary arrangements. However, this clearly does not present a full picture of stability (cf. the original list of issues such as path dependency and specific identity politics; the distributional effects of integration will also be a factor), and the costs and benefits can be further elaborated. The role of population size also needs to be considered. In Part 2, it might be worthwhile to expand on these points and attempt to apply them to specific examples, in particular the European Union.
I wanted to know how the Flag Consideration Panel came to its shortlist of designs, so I made an Official Information Act request on 8 September for “all the official correspondence on the flag selection process and the minutes of the meetings held by the Flag Consideration Panel”. An extension was granted when the standard 20-working-day period expired. Also note that I made the request before Red Peak had been added as an option. The request was answered on 25 November 2015.
For those who are interested, I have attached the 109-page email correspondence of the Flag Consideration Panel and the official letter in response to the request. The correspondence is heavily redacted, which is unfortunate, but it is still interesting nonetheless. I have also attached the letter that outlines the different reasons and sections for redaction [I did not bother removing my email address, which is already publicly available on this blog]. The quality of the embedded files is not great, but you can download the PDF files below.
Official 109-page correspondence by the Flag Consideration Panel:
Letter in response to the Official Information Act Request: