Endeavoring to model a social system that serves the purposes of all of its components in a synergistic, and thus symbiotic, manner should be the focus of anyone who wishes to produce a successful, thriving society. Though an initial model isn’t likely to capture all of the subtleties that emerge through practical application, its basic principles are likely to have a great deal of influence on the finished product. The model is a framework that enforces the approach, which in turn guides what solutions are sought along the way. Though it’s the individual solutions, which aren’t necessarily intuitive, that enable practical application, the top-down influence of the initial model will shape their synergistic function, since the model is consulted when choosing discrete solutions. The focus of this article is to present an example of an initial model for such a synergistic social system.


Polyocracy is a principle predicated on the ubiquitous sharing of responsibility for the function of the social system. This requires the understanding that each establishment is essential to its success, which suggests a common interest in the kind of synergy that is likely to produce sustainable success. Disenfranchisement of any essential component would thus likely have an unfavorable overall result, one that would eventually result, and has historically resulted, in the general failure of the system. Polyocracy is, in essence, mutual respect for the vitality of each and every societal structure.

Polyocracy on a global scale serves to promote diplomacy with other forms of governance. Historically, nationalization has produced a host of unfavorable results, expressed in short-term financial patchwork that eventually ended in financial crisis and social unrest. Since the eventuality is similar with either approach, diplomacy is the logical solution.



Education may be the most important aspect of social contribution. Whether the subject is hard scientific application or spiritual guidance, knowledge of the practice, and of how it can be either a general service or at least generally non-destructive, is essential to polyocratic policy. A balance between personal liberty and social responsibility can only be achieved if the understanding that everyone is necessary is promoted through curriculum. Historically, a number of social movements have divided societies along a wide variety of beliefs, to the detriment of their overall function. These include political parties, religious backgrounds, class systems, exploitation, gender roles, etc. Education is likely the most valuable tool for sorting the issues that a developing society has to face. This might suggest that education be fully promoted and open-access at all levels, maximizing the potential for an educated populace.

The Model:


A secure society entails a large number of contexts that would require a large number of solutions to produce. Much of this cannot be predicted in foresight; however, our current hindsight can be a useful tool in modeling an initial foundation.

Global security is an extremely complex subject that may best be approached first through a combination of judicial and diplomatic practices. An attempt at diplomacy is probably the most reasonable course of initial action. Adjudication should also be a part of this process for the purpose of promoting fairness, so that systems are in place in the event that diplomacy fails. Defense would of course be part of the security structure.

Domestic security is also very complex, though a little more manageable with respect to the legalities, because the laws enforced by the Judiciary are always either applicable or candidates for change. These outcomes are made possible by diplomatic and/or judicial interaction. The interface for the Judiciary would of course be the courts and police forces; the interface for the Diplomats would be the public representatives.


Legal representation might best be served with a combination of the practical application of adjudication and the epistocratic application of Instrumentalism. The ability to enforce laws that serve society depends a great deal on the coherence of the laws themselves. An epistocratic approach to lawmaking might minimize instances in which important, or even critical, systemic issues go unrepresented. The interface for Instrumentalism would of course be academia.


Public policy would best be determined through interaction between the public and public representatives, and it might be served by diplomacy and proceduralism. It would stand to reason that the most favorable outcome for the general populace is probably the outcome that functions most favorably in practice. The details, of course, would be hashed out through diplomacy.


Economic influence has been a point of contention, both globally and domestically, for all civilizations throughout known history. The most effective way to approach global economics may be to produce a domestic economy that has unprecedented success. A combination of Instrumentalism and Proceduralism may produce much more favorable results in practice, as the combination of epistocratic modeling and real-world, proceduralistic testing has historically resulted in general advancement.


Maximally efficient and effective function of all the components of a social system is likely to produce a thriving society. Historically, thriving societies have enjoyed more evenly distributed wealth and have tended to promote civil rights. Where there is less concern over stability, there tends to be much more cooperation, and the reverse seems likely as well: added cooperation probably results in stability. Previous models don’t account for this. The Old World view is based upon an expectation of competitiveness that evaporates when society is growing rapidly and returns at growth maximum. Cooperation is expected when the system is functioning relatively well; therefore a model that promotes function, rather than one that promotes competition, is in order for the general good.

Towards a Deeper Understanding of Disorder



Disorder has many definitions; even the scientific disciplines vary in how they define it. For instance, the clinical disciplines characterize it by one’s diminished ability to function. This seems a little truncated from a systems perspective, because it doesn’t coordinate with sociological questioning of the social order, much less with the dogma that must exist in even our most unified models.

The issue appears to be a lack of congruency between the hierarchical models that we construct for understanding natural systems. Implementing a more unified understanding of systems seems to require central dogmas that scale well within a general systems model. Even so, Entropy, which produces far more disorder than it ever will order, is by far the natural norm. Disorder is in essence the rule, and order is the exception.

The impulse toward order appears to be rooted in self-preservation. This makes the view a candidate for clinical justification, as the interest of the patient is prioritized. It doesn’t, however, produce the most viable model for understanding.

My efforts toward Naturalized Socioeconomics are encountering these kinds of semantic obstacles across almost all of the scientific disciplines. Even within the physical sciences, there are instances where Entropy in a system stands to reason but can’t be demonstrated empirically due to complexity. Does this situation warrant dismissal of logic that appears to be self-evident? Are we to assume that natural systems are not generally compatible because we have no accepted model for unification? Is our description of Entropy truncated in its thermodynamic representations? Is a new general systems axiom required to solve this issue? It would appear that a Bohmian perspective would require congruence between the disciplines in order to provide a model with the rigor favored in general systems analysis.

Hierarchical Reasoning (and the lack thereof):

* Classical Mechanics:

The human brain can only parse about a dozen pieces of information at a time. This cognitive constraint of human neurology is probably what produced the reductive aspects of music. Songs tend to be broken down into pieces like intro, verse, chorus, bridge, and outro. A diatonic octave is of course composed of eight notes, though many wavelengths of sound lie between each note. These wavelengths could be accessed and utilized; however, many instruments are designed to play only the traditional notes and chords. This isn’t a bad metaphor for the way that humans think: we do the best that we can with the instrument that we have. On that same “note,” however, our neurological resources can only reconcile a small degree of complexity. Hierarchies appear to be our cognitive solution to this issue. We search for more generalized, ordered patterns in order to analyze more complex systems.
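
The musical point can be made concrete. In equal temperament each octave is divided into twelve semitones, yet pitch itself is continuous; a minimal sketch, assuming the common (but not universal) A4 = 440 Hz reference:

```python
# Equal temperament: twelve semitones per octave, but pitch is continuous.
# Assumes the common A4 = 440 Hz reference (an assumption, not universal).

A4 = 440.0

def semitone_freq(n):
    """Frequency n semitones above A4 (negative n goes below)."""
    return A4 * 2 ** (n / 12)

# The thirteen named pitches from A4 up one octave to A5:
named = [round(semitone_freq(n), 2) for n in range(13)]

# A quarter tone halfway between A4 and A#4 -- perfectly audible,
# but unplayable on most fixed-pitch instruments:
quarter_tone = A4 * 2 ** (0.5 / 12)

print(named[0], named[12])     # 440.0 880.0 (an octave doubles frequency)
print(round(quarter_tone, 2))  # 452.89
```

The diatonic scale names eight of those thirteen pitches, which is where the “eight notes” of the octave come from; everything in between is real but outside the instrument’s design.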

* Quantum Mechanics:

We humans do this with quantum systems as well. Measurement of particles tends to change the state of the particle, because we have no way of observing elementary particles without directly interacting with them. Since interaction influences the particle’s state, there is no technology for passive observation to date. This means we must find (and have found) methods for approximating the state of a particle without measuring it directly. The method that we employ influences our perspective and thus our models. We now see Quantum Mechanics as probabilistic because we use probability to infer within it. This doesn’t mean, however, that particles behave probabilistically. Once again, we’re doing the best that we can with the instruments that we have.
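
A minimal sketch of this kind of inference, with a classical random number generator standing in for a real measurement apparatus (the hidden value of 0.3 is an illustrative assumption): the observer never reads the state directly, only the statistics of repeated measurements.

```python
# A toy model of quantum-style inference: the observer cannot read the
# state directly, only collect outcomes and infer from their frequencies.
# TRUE_P = 0.3 is an illustrative assumption -- hidden from the observer
# in any real experiment.
import random

random.seed(42)  # fixed seed so the run is repeatable

TRUE_P = 0.3

def measure():
    """One projective measurement: the result is always just 0 or 1."""
    return 1 if random.random() < TRUE_P else 0

# Repetition over identically prepared copies is the only handle we have:
shots = [measure() for _ in range(100_000)]
estimate = sum(shots) / len(shots)

print(round(estimate, 2))  # close to 0.3 -- inferred, never read directly
```

The probabilistic character here lives in the method of inference, which is exactly the essay’s point: the model reflects the instrument as much as the system.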


Unification may be the most difficult problem we have to solve. Considering that we are naturally tooled to be hunter-gatherers in the wilds of this planet, dealing with issues that we can see, feel, hear, smell, and taste, complex and quantum systems present a wide variety of challenges to our observational skill sets. Our tendency toward dogmatic thinking often has us taking our perspectives, or our cognitive models, as an approximation of reality. This may be one of the more difficult trials in the quest for unification, as we will probably require a new model designed for that specific purpose. The organizing aspects of nature are self-evident and probably a sufficient route toward a unifying model; however, the pervasiveness of Entropy and Extinction suggests that we may also have a biased account of the base rates of what matters in the formation of complex, self-organizing systems. This could also hinder the understanding of our own creative processes. We humans also have a tendency to invest in false dichotomies, even when they appear to be compatible through general observation.



Entropy is often thought of as the natural variation in systems. It’s also thought of as primarily disordered, and it’s most commonly attributed to thermodynamic change. The dispersal of heat in the universe has measurable influences on the state of the universe and many of its systems, and this evidence is favorable to the logical aspects of the human mind. In many instances it may be considered bad form to label the disorder in a system as entropic without direct evidence, or at least some methodology for measurement. This is of course a fair argument; however, our constant and consistent uncertainty needs to be considered as well. Where systems become so complex that human cognition, even with the aid of computational systems, cannot draw direct physical connections between processes empirically demonstrated to produce entropy and the disorder in their complex aspects, we appear to hit a wall with respect to unifying theory.

At this point, questioning our current state may produce favorable results. Is Thermodynamics the most likely promoter of Entropy? Is it the only promoter? Is it the simplest explanation of Entropy across all of the scientific disciplines? Can complex systemic functions be characterized as entropic when correlated with other self-organizational, theoretical principles?
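
One way to see that these questions are not empty is information theory, where entropy is defined over a probability distribution with no thermodynamics in it at all. A minimal sketch of Shannon entropy, H = Σ p · log2(1/p):

```python
# Shannon entropy: a measure of disorder defined purely over a
# probability distribution -- no heat, no thermodynamics involved.
from math import log2

def shannon_entropy(probs):
    """H = sum(p * log2(1/p)) over outcomes with nonzero probability."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

ordered = [1.0, 0.0, 0.0, 0.0]      # one certain outcome: no disorder
uniform = [0.25, 0.25, 0.25, 0.25]  # maximal disorder for four outcomes

print(shannon_entropy(ordered))  # 0.0 (bits)
print(shannon_entropy(uniform))  # 2.0 (bits)
```

That a perfectly serviceable, discipline-neutral definition of disorder exists outside thermodynamics is at least suggestive that the thermodynamic representation is not the whole story.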


Normalization has many definitions; however, the general meaning is a preference toward a minimal level of compatibility within a system. There is not a concerning number of semantic issues with this term, though a more unified context for Normalization would likely be enough of a variable to create some confusion. The possible issues with this term appear to be contextual.


Novelty can be characterized as that which is not normal, yet is relatively non-destructive. In a general systems sense, Novelty would also have low Extinction risk associated with it. There may be more ways to characterize it that could prevent confusion, which could become evident in practice.


Extinction is often characterized as the process by which that which cannot become normalized, and is therefore unsustainable, eventually becomes non-existent. It’s the most likely scenario for the mass of disorder produced by Entropy. The explanation appears to be that Entropy is primarily arbitrary, with little to no organizing properties. Entropy has the appearance of being probabilistic; however, that may be more perception than objective observation. The fact that this assertion comes from contrast with the properties of Normalization suggests that perception plays a significant role. Human motivation for Normalization, driven by evolutionary predispositions toward self-preservation, probably results in some degree of cognitive bias.

Cross-Disciplinary, Inferential, Statistical Analysis:

The many scientific disciplines have been producing a wide variety of favorable results across the board for a few hundred years now. Methodologies have been refined throughout, with added promise and increased competitive advantage. Unfortunately, the various disciplines do not speak the same tongue. One of the more concerning developments in recent times is the lack of homeostatic function in our systems. This has resulted in growing concerns not only for human extinction risk but also for the existential risk to all life on the biosphere. Not all of the blame for this can be pawned off on political prowess, however. If political systems were accepting of the data, would science have the answers that they required, or would decades or even centuries of uncoordinated patchwork be the result?

For the purpose of coherent, homeostatic systems, the answer is probably cross-disciplinary, inferential, statistical analysis. It would stand to reason that such an endeavor would result in the Normalization of systems, thus mitigating unfavorable human influence on extinction and existential risk. To produce such an effect, it appears that a General Systems Theory that correlates with each and every scientific discipline, for the purpose of coordinating them, is in order to minimize these risks.

Personal Note:

Get used to the idea. It’s pretty much “do or die” at this point.

The Obsolescence of Internet 2.0


Opening Remarks:

All technologies become obsolete over time, and in current times the longevity of technologies is increasingly short. Internet 1.0 was implemented in the 1980s and was essentially commandeered by the .coms in the early 90s. It was probably the most lucrative platform in the history of mankind. It saved many of the costs associated with overhead and created direct connections between producers and consumers. It brought educational resources, including higher education, to the masses at little to no cost. It connected people from opposing ends of the globe in virtual friendships. It sparked mass political involvement. It created new decentralized markets, with novel new ways of contributing to society. But these are only a few of the favorable outcomes that we have enjoyed over the past few decades. I haven’t yet gotten into the unfavorable outcomes that have resulted from what could easily be characterized as the 8th wonder of the world.

In The Beginning:

Internet 2.0 began with a great deal of planning. All of the systems had to be re-envisioned in order to make it a suitable platform for worldwide commerce. In the development process, many mistakes were made that really could only have been detected in hindsight, because of the self-organizing nature of social systems. Predicting how a system is going to propagate over decades is extremely difficult due to social constructs, emerging technologies, scientific advances, cultural memes, etc. Such predictions are entirely improbable, yet they bear significant weight on the outcomes over decades. One of the more heated debates among the developers of Internet 2.0 was the issue of “Privacy vs. Provenance.” This was a decision between the “copy/paste” nature of content sharing (which is essentially gratis) and a form of sharing that includes a tag of sorts pointing back to the original source, allowing the original creator to receive credit for their work. This would of course have had advantages and disadvantages that many would feel strongly about, hence the heated debate. Jaron Lanier suggested that the wrong choice was made, even though he favored privacy in the beginning. In hindsight, it seems that a provenance protocol may have solved a number of issues; however, I doubt that it would have been nearly enough to save the second implementation of the Internet.


Tim Berners-Lee suggested that the Internet had to be a singular public space for it to function at all. He was probably right considering the initial conditions, but this too has created a large number of issues, because it produced a centralized Internet. Central hubs are obviously convenient and effective; however, they are also a single point of failure when considering the stability of a system.

The issues began with interest groups jockeying for the bulk of control of the infrastructure. The competitive advantages of this are painfully obvious today to the savvy or geeky, but I will explain. The Internet is most commonly thought of as the copper wire that carries the data over distance; this is also what is most commonly thought of as the Internet’s infrastructure. It isn’t, however, the only interpretation. Many large businesses like to consider themselves and their servers, or their connections to those servers, as infrastructure, and from this perspective these businesses feel that they control infrastructure. It’s not an entirely ridiculous notion. Though not required in principle, they have lately been required in practice for the function of the Internet. This, however, was due to the implementation of middlemen, which is what modern ISPs are. You see, Internet 1.0 allowed surfers to dial up servers and browse their content without an ISP. The .coms created infrastructure through man-in-the-middle business strategies. This was implemented with the old dial-up modems, and the payoff was avoiding the expense of long-distance and international rates for direct dial. The price difference was enormous.

Now the servers that serve the bulk of Internet traffic are owned by companies like Google, Walmart, and Amazon. These companies have a lot of control (by possession) over a significant amount of Internet traffic. The control of traffic has become a commodity. This is one of the most concerning issues with the Internet today, and the main focus of Net Neutrality. Though most are concerned about the slowing of Internet speeds or expensive tiers with higher data caps, the real concern should be over egalitarian attention. There is much twisted correlation between attention and advertising in the new business models; this is probably the notion used to justify the business models associated with social media. The value of personal data is extremely high, though most don’t get anywhere near a fair return on it through their usage of social media. This is implemented with license agreements that almost no one reads or even cares about. It also makes nefarious businesses extremely wealthy. It’s one of the most unethical business models I’ve ever seen; in most cases, it flirts with theft by deception. The fact that license agreements are legally binding under these conditions is just ridiculous. From a systems perspective it’s a total failure, not to mention an affront to common decency.

The false dichotomy of Private Sector vs. Public Sector has its dagger in the neck of Internet 2.0 as well. Both sectors are enjoying unprecedented prosperity at the expense of the “consumer.” Both are working together to constantly patch a cobbled, mature technology while the taxpayer pays for everything. This is where the infrastructure semantic is most harmful. The US government has been paying subsidies to ISPs since 1996 for the purpose of infrastructure upgrades. If ISPs see themselves as the functional infrastructure, then they have done just that… right? Meanwhile, telecom traffic travels over cable from the late 70s and 100-year-old copper wire, when much, much faster fiber optic cable could be rolled out. This complete (probably deliberate) misunderstanding is unacceptable. In the US, the most common way around government regulations is conflict of interest: ex-administrators or even ex-CEOs of prominent companies are appointed to positions with regulatory agencies. This is the case now with the FCC. The current chairman of the FCC is a former lawyer for Verizon, so his position on Net Neutrality is no surprise.


There is one public space that has no dedicated law enforcement agency: the Internet. This isn’t just an issue for parents, either; it’s a global issue. The fact that nation states have such a difficult time correlating political agendas with their populace and their neighbors is a huge issue for regulating public behavior. This is another issue where silly semantics plays a large role. There is a lack of agreement on whether the Internet is even a space, much less a public one, because it is a virtual space. The private sector wants to be the infrastructure; however, it does not want to police the space it has created. The government is so tied up in global politics that it is essentially ineffective, which is all the more concerning when one realizes that different nation states have different laws. The only solution may be a lot of hard work in buttoning up international law, which would of course require unprecedented global diplomacy. Culture has risen to the challenge to a certain degree; however, there is an unacceptable amount of failure on a daily basis. The conversation about privacy in a public space is so oxymoronic it’s just ridiculous. Though one is at home in one’s chair, or in one’s car, there should be no expectation of privacy in public forums. This is something that needs to be hashed out, and it won’t be, because of the advantages the confusion boasts. People are making incoherent trades, keeping the system misunderstood and unstable. A reread of the previous paragraphs will show the relevance of this argument.

Closing Remarks:

The Internet as we know it cannot stand. From a systems perspective, Internet 2.0 is a system being driven into extinction. Its problems are mounting to the point that it isn’t likely to be able to function at all in the next decade or so. I might not even give it ten years from today; with the ridiculous caveat of absolutely no change in its condition, I wouldn’t give it five. Of course change is going to happen, but probably not enough to save it. It’s just becoming fundamentally obsolete. The degree of acceptance that the powers that be are demonstrating is alarmingly low. This is because big business runs the developed world, and business doesn’t tend to look toward even the near future; it tends to consider maybe 2–5 years at most. The number of business professionals considering what they will be doing in the 2020s is probably a minuscule minority.

The good news is what the Internet really is: the wires. The Internet is already diversifying; the .net, .org, .edu, and .gov domains are indicative of natural variation. It’s completely reasonable, and probably expected, that new connections will emerge in the coming years. Mesh networks and home servers are likely to at least be tried as they become financially permissible, and competition for new solutions will become more probable as the current state becomes less financially permissible. Natural systems (including social and technological systems) self-organize, and the principles that govern such organization should be expected to apply to the global communication technologies that are coming. I don’t like to make hard predictions, but I think the 2020s are going to be an interesting time for global communications. The death of Internet 2.0 is likely to result in the birth of something really interesting.

Social Dogma As an Initial Condition


Social transformation is under heavy influence from the collective mindset of its time. It’s likely that our current condition is significantly influenced by the mindset that existed during the dawn of civilization. That mindset likely functioned as a set of axioms for the distributed, discrete solutions that solved the problems that arose; it was a top-down influence that set the tone for the next several thousand years of Socioeconomics.

Social Organization:

* What’s commonly in the collective consciousness

In all human organizations, endeavors begin with a consensus concerning a model of reality, which results in a roughly agreed-upon world view. Individuals share notes on what they believe to be real, and socially unifying behaviors result in significant agreement via bonding impulses. It’s this general belief about the world, and humanity’s place in it, that is the basis for models of possible socioeconomic change. This is an initial condition for public awareness.

The details of socioeconomic change are often worked out in the implementation. This is clearly necessary, as the number of issues that arise is likely to overwhelm a central organization: the sheer mass of issues is the product of natural distribution. This probably suggests that distributed organization is the more adequate approach for solutions.

The two previous paragraphs briefly describe two proposed aspects of society that appear to function in a feedback loop to organizational ends. It’s the feedback between individuals and the collective that produces a society. The experience of the individual is data shared with the collective to either reinforce or amend the general world view, or to reinforce or amend agreement on possible solutions.

* What’s not commonly in the collective consciousness

Though humans are the most conscious and sentient beings that we humans are aware of, almost all of our behaviors are unconscious. This includes all of our autonomic behaviors, of course, but it also includes our subconscious impulses. The latter have more of an effect on our socialization than most tend to appreciate. For instance, “conscious” decisions are often affected by things like mood and neurological state. Neurology divides the autonomic nervous system into two modal categories: the Sympathetic Nervous System, responsible for the “fight or flight” mode that bolsters outward defense of the biology against predation and the like, and the Parasympathetic Nervous System, responsible for development, growth, fighting illness, healing, etc. Both of these states influence our decisions, since decisions are made in the moment. Decisions are generally based as much on the environmental stimuli of the moment as they are on socially supported world views.

Concerns of fear mongering exist because it’s known that the Sympathetic Nervous System can influence decision making, even in large crowds of humans. We humans are predisposed to behave in a manner appropriate to the initial conditions; this is in essence what “good behavior” is. This, however, is not exactly what happens in reality. It’s a perception of the initial conditions that produces behavior, and where the perception is not indicative of the actual initial condition, inappropriate behaviors are more likely. This can be (and is) used as an axiom for a Pavlov strategy. Creating environmental stimuli that are likely to result in defensive responses is a common tool in political punditry, and it is also a tool used in Social Engineering. The opposing stimuli also have significant influence on decision making. Where awareness of possible red flags is present, they can be mitigated, and interactions that are risky or inappropriate can seem perfectly normal and appropriate. For instance, a power company once hired a group of social engineers to test the security of its facility. The social engineers entered the facility and went essentially undetected, posing as technicians; contracted technicians just don’t look out of place in such a technical environment. In this instance, just looking the part was half the battle.

Manipulation of the masses is probably one of the earliest forms of crowd technology. This is the main focus of Noam Chomsky’s “Manufacturing Consent.” Unfortunately, consent depends upon as accurate a perception of initial conditions as is practical. Otherwise, consent can be directed by creating the perception of a type of initial condition appropriate for the desired type of behavior. This is, in essence, fabricated signal noise that promotes acceptance of a Pavlov strategy. The understanding of how humans respond to environmental stimuli is too often exploited in the interest of personal gain for an individual or small group. The evidence of this is overwhelmingly pervasive.

* Epistemology and the lack thereof

The natural distributions of interest and aptitude in human populations are diverse and specialized. This produces a large number of individuals who are adept at narrowly defined skill sets. In reality, it also produces a very small number of broad thinkers, but the vast majority are specialists. This may be because the skills of specialists are required for broad thinkers to test their models, which is probably a result of human neurological constraints. This is addressed naturally by collective, or distributed, intelligence: many well-trained hands can make large, complex tasks light by coordinating their efforts and skills.

Considering the arguments in the previous bullet point, inconsistent or misrepresented data can (and does) have an unfavorable effect on the general world view. This of course translates to an unfavorable impact on system modeling, and systemic issues are thus likely to follow. This can be (and is) a huge hurdle for systemic change when the systemic issues become the focus of social polarity, preventing social consensus even where epistemological consensus exists. These instances are probably, and historically, not long lived; however, in modern times they tend to produce increasing amounts of risk. This appears to be a product of both the general human inclination toward Trusting Tit For Tat strategies and the much less numerous, but still pervasive, Pavlov strategies.
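
The strategies named above come from the iterated prisoner’s dilemma. A minimal sketch of Tit For Tat playing against Pavlov (win-stay, lose-shift), using the standard Axelrod payoffs; the 20-round horizon is an illustrative assumption:

```python
# Tit For Tat vs. Pavlov (win-stay, lose-shift) in an iterated
# prisoner's dilemma, with the standard Axelrod payoff values.
# The 20-round horizon is an illustrative assumption.
C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(my_last, their_last):
    """Cooperate first, then copy the opponent's previous move."""
    return C if their_last is None else their_last

def pavlov(my_last, their_last):
    """Cooperate first; repeat after a good payoff (3 or 5),
    switch after a bad one (0 or 1)."""
    if my_last is None:
        return C
    won = PAYOFF[(my_last, their_last)][0] >= 3
    return my_last if won else (D if my_last == C else C)

def play(rounds=20):
    a_last = b_last = None
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = tit_for_tat(a_last, b_last), pavlov(b_last, a_last)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        a_last, b_last = a, b
    return score_a, score_b

print(play())  # (60, 60): both strategies settle into mutual cooperation
```

The difference between them shows up under noise: after an accidental defection, Tit For Tat corrects itself against an unconditional cooperator, while Pavlov keeps exploiting the good payoff, which is why the text treats Pavlov strategies as the more manipulative of the two.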

* Economy and the lack thereof

Where maximally accurate perceptions of initial conditions are present, economy is essentially maximized, because the collective, or distributed, intelligence is in the maximally effective state to produce economy. This suggests that a maximally, economically effective world view is the maximally accurate one. It isn’t, however, as simple as circular logic: the support is in how well the outcomes meet the expectations. If a model fails to produce an expected outcome, there is probably an issue with the model.

All too often economic issues are addressed with inputs that merely mitigate systemic issues. This is, in essence, creating issues and then leveraging resources toward patching the symptoms rather than solving the initial issue. It is not economically viable, and is probably a product of the two previous bullet points.


It’s difficult to argue against social dogma as an initial condition; the evidence for it as a major contributor to social organization is clearly… well… evident. Whether it is favorable or not is much more debatable. Where it produces false positives for personal and civil liberty, it also produces false positives for personal and social economy.

With the increases in population that almost always accompany rises in the standard of living, the risks become more concerning. The more humans that exist, the greater our environmental influence becomes. Effective extrapolation of this data is probably a more difficult task than our current level of sophistication can muster; the fact remains, however, that we are more a part of than apart from our environment. Social dogma doesn’t generally appreciate this, which is concerning given that the risks include not only extinction risks but also existential ones.

Social dogma is a part of social organization. That doesn’t necessarily mean it’s the most influential part. Social organization is a more complex process than we tend to appreciate; we are always forgetting something when trying to make predictions. Social systems are chaotic systems that defy our attempts to extrapolate with accuracy; there are simply too many variables for the human mind to account for. At this point, computer mediation of empirical data may be our most viable ally in the quest to organize and economize.

“The major source of unhappiness is that we are incoherent; and therefore producing results that we don’t really want; and then trying to overcome them; while we keep on producing them” ~ David Bohm

Brave New World


The US is the most medicated country on the planet. It is also known for large numbers of mass murders committed by single actors with unknown motivations, a phenomenon that has emerged post 1970. It’s difficult to say exactly what the cause is; however, that may not be the best way to frame it, as causation isn’t the most accurate description of these interactions. Feedback loops are much more accurate. Taking medications for emotional issues may also not be the best way to address every issue, as civilized life isn’t always kind to the human psyche. The reasoning behind this post is the political discourse, the scientific responses, the bureaucratic responses and the public opinion concerning mass murder in the US. I find them all lacking in the rigor required to properly address the issue. I’m also concerned that the problem is likely to become more prevalent in the coming years from the increase in financial crises alone. Disillusionment from other factors could make it even worse.

The current climate:

The left tends to blame the guns rather than the actors or the environment. This is not a scientific approach; there is no correlation between gun ownership and the steep increase in gun violence. As a matter of fact, the two trends run in opposite directions.


The right tends to blame the actors themselves, or perhaps mental illness. This is probably a better approach, but still not a scientific one. Acts of violence are not more common among the mentally ill, and mental illness would not correlate with the steep, recent incline either.


The drug companies are marketing products directly to consumers. This is a bit of a paradoxical problem: the right to be involved in one’s own treatment seems ethical; however, self medication is always a bad idea, and this borders on self medication. “Ask your doctor about (fill in the blank)” is dubious at best. Being advised by a commercial rather than through a consultation with a medical or mental health professional is far from ideal.


The government regulatory commissions are so mired in politics and bureaucracy that the conversation either doesn’t exist or is completely hand waved. This isn’t surprising, as a significant number of the bureaucrats are former administrators or legal experts of drug companies. The situation is wrought with conflict of interest.


The mental health professionals cite statistics without considering the statistical significance of the actors in question being almost unanimously mentally ill and on medications, and without questioning the lack of correlation in the data itself. This is just bad epistemology. The bureaucrats and mental health professionals provide the very data needed to sort this out; they just don’t seem to consider it themselves. This requires a bit of qualification, of course.

The data:

Before the 1970s there was at most one mass murder per year, and in most years there were none. From the 1970s to the present, the numbers have grown far larger than population increase alone could account for.


This is the era in which medication became the “go-to” solution for emotional discontent, along with emotional and psychological issues generally. This is evident in the distribution of medicated individuals, which exceeds that of the rest of the world by a large margin: the US holds 5% of the world’s population yet consumes 75% of the world’s prescription drugs.


Everyday issues in modern society can result in emotional issues. Poverty and social inequality, for instance, can result in mental distress. This matters because the US economy is probably in decline.


The red pill:

Some of the data appears to be disingenuous.

The number of people in the US being treated for some type of mental illness, disorder or condition doesn’t correlate with that of other countries. This is quite alarming, considering that almost all of the US’s population has roots in other countries. It appears to be more a product of advertising and the vast income inequality in the US.


The proportion of people with mental illness in mass murder instances is surprisingly high, considering that mental illness isn’t a prerequisite for such violent behavior. Circumstances during development are a much more likely culprit, and that probably isn’t something that could be addressed with prescriptions.


This is obviously a case of misrepresentation of the “facts”, probably due to not addressing the issue at hand in a scientific way. Few wish to take the time to actually test the claims and make a coherent statement on the subject. The response tends to be rooted in political prowess, or the issue is simply denied altogether. Either way, the issue is likely to worsen as the environmental pressures worsen. A saturated health care market in the US is one of the main factors in the distribution of medications.


Here’s some food for thought concerning crime and punishment.

Here is some consideration of violent crime and social constructs.

Personal note:

I think that David Brin was right in suggesting that the Huxleyan implications of our current trajectory are more concerning than the Orwellian ones.


Criticism of UBI


Attempting to assess the probable outcomes of UBI (Universal Basic Income) would seem to require a general systems analysis. It is well understood in sociology and social psychology that social systems are extremely complex and chaotic. This significantly decreases confidence in predictive value for discrete or nuanced instances; however, a more generalized, probabilistic risk assessment can still be achieved. This can be done by treating the social system as a system, and a chaotic one at that. Though a precise prediction of the resulting state of the system isn’t likely, many of the more important risk factors can be considered. Considering UBI as an input to the social system, with regard to the initial conditions, is probably an effective way of surveying the probable risk factors. An understanding of the relevant disciplines can produce predictive value in various aspects of the social system. This is the value of Statistical Mechanics, as the morphology of the system is the basis of the understanding; it’s also the natural path the outcomes are likely to take.

Initial Criticism:

The bulk of the issues we face today are likely due to a lack of understanding of the morphology of human presence on Earth. This has improved along with the conditions; however, long term challenges still exist, probably as a result of solving issues as they come without a general analysis of the system itself. The more time that goes by, the less understandable and the more dangerous this becomes. Taking UBI as a solution for a singular issue without a general survey of the system, though explainable by conditioning, is not justifiable by our current understanding of natural systems. Yet this is what appears to be occurring in almost all instances.


The more influential aspects of human society in general are probably the best candidates for comparison and contrast with probable outcomes. These would likely include, but may not be limited to, Pavlovian Conditioning, social norms, evolutionary predispositions, emerging technologies, etc. They would represent the state of the system and the initial conditions that UBI would be an input to.


For the analysis, I’ll be using the methodology presented in this article.


Pavlovian Conditioning:

One of the likely causes of the lack of general analysis in creating solutions to social issues is conditioning to the existing state of the system. This appears to be the result of heuristic efficiencies in the processing of thought, whose value lies in normative environmental influence. It can become an issue where masses of artifices obstruct the environment; in that condition the artifices can take the place of normative influence. General analysis is not innate human behavior. The heuristic behaviors of humans are so well understood that certain types of behaviors can be predicted under certain types of initial conditions. This is the principle of “Manufacturing Consent”. The earlier crowd technologies revolved around promoting desired types of behaviors in masses of people by creating a perception of initial conditions for which the desired type of behavior would be appropriate. The population would then probabilistically behave appropriately to the perception, producing the desired types of behaviors. Over time, specific types of stimuli have become habituated. The two party system keeps populations polarized in a “divide and conquer” capacity due to habituated behavior patterns. This is, however, only one example.

The current conditioning of societies is largely influenced by the industrial, 5/40 job market. The apparent, impending end of this 100 year old social construct is already displaying characteristics of Pavlovian Conditioning. The expectation of receiving funds for sustenance from a centralized institution has clearly been habituated, to the point that other solutions aren’t even on the radar. This is probably to be expected.

A more generalized account of the conditions might suggest that automation could become decentralized. This will be supported later on.

Evolutionary Predispositions:

There are many interpretations of what is “fit” in systems. The accepted general assumption about fitness, however, is that cooperative self-organization mitigates extinction risk to a significant degree. Being cooperative in essence means being generally useful, and humans have innate predispositions toward being useful as a consequence. The human motivational systems promote usefulness where pathology isn’t present. One could easily predict that humans would still behave in useful ways even if the 5/40 job market were to evaporate and income were publicly funded.

What exactly would the common perception of this condition likely be?

Social Norms:

The previous state of the system would likely influence the perception of the populace when surveying the state after the UBI input. Given that the population would likely still be doing useful tasks at home while the central government provided income, the likely assessment might be that the work force had been taken over by the central government and that egalitarian compensation had replaced pay grades.

Emerging Technologies:

Automation isn’t just affecting the job market. It’s also affecting society in a decentralized manner that is producing interesting communities: the maker community, open source, first wave hackers, citizen scientists, self publishing, home networks, etc. This is being facilitated by automating technologies being passed down to the consumer; no one needs a printing press with affordable personal computers at hand. It is probably going to be very influential among the large numbers of people with 100+ I.Q.s and plenty of free time. The pressure to be useful in the immediate environment is likely to produce home manufacturing of a wide variety of goods, facilitated by existing technologies. The obvious benefit is economizing with respect to the UBI funding. This might also drive adoption of “off grid” technologies like solar and wind power, mesh networking, art culture, research and development, etc.

Artificial Intelligence may also be impacted by such a shift in the socioeconomic system. With so many people producing so much at such low cost, the concern of becoming less relevant might produce a proposal not to invest in AGI. AGI development is of course going to happen anyway; however, the likelihood of bifurcation by those who are more concerned could greatly increase. If the central government decides that AGI is a security risk, this could become a messy prospect. That may seem unlikely, but the current condition is one of implemented control measures; this is the basis of all governing structures to date.


UBI could solve a large number of our current issues; however, the solutions are likely to be accompanied by dangerous by-products as well. Many balances might dissolve along with such a socialized solution. People are likely to solve many of the issues that arise by innate behavior alone, but centuries of political conditioning would be a factor to be aware of.

UBI as a solution doesn’t seem sustainable, as relative self-sufficiency seems a somewhat probable result. Bifurcation has historically played a large role in fundamental social change, and there really is no reason to think that the results of technological advancement would be dissimilar. It seems very probable that one could expect an instance of diversification the likes of which no one alive has seen. This becomes a problem in a society that is systematically polarized, psychologically manipulated and financially exploited. This is not a pessimistic view of society; these are known facts about society that need to be considered by those who have the interest and aptitude… for our own sake.

Personal Note:

The notion that governments can be trusted to take care of citizens is so naive that I’m not even going to consider it. Nor will I consider the notion that humans would be “on vacation” while everything comes to them on a whim; that’s the epitome of Utopian nonsense. I’m also not going to consider AGI providing for humans, as that is pure speculation with no coherent basis whatsoever. Natural systems tend to behave in specific ways, and that account doesn’t include them.

My own concerns about UBI revolve around the general lack of acceptance that exists in many social systems. Natural systems diversify, and a lack of acceptance is a poor environment for that.

Notes on Human Interaction and Autonomy

There are many arguments for autonomy in humans that fall short of producing confidence in it. The common perception is one of autonomous agency; however, reduction and testing tend to suggest otherwise. Though human interaction and behavior are chaotic, and thus difficult to predict in discrete instances, more general predispositions are trivial to demonstrate. The notions of determinism and causation both appear to be incoherent upon further examination as well. A more holistic approach is probably more likely to be successful.

The common perception is that of the individual being in the driver’s seat of a biological organism. The cognitive constraints that we all share, however, tend to produce truncated perceptions, shaped by the limited number of interactions we are attending to. We often take credit for learned behaviors, evolutionary predispositions, social heuristics, family traditions, impulses, etc.

Human interaction, more carefully considered, appears to consist of feedback loops with various environmental stimuli. It’s also subject to normative pressures; though there are degrees of freedom, consequences are a constant concern. Almost all human behavior, likely well over 90%, is a result of impulse, though it’s difficult to say exactly how much, because even cognitive responses become habituated and thus impulsive. Habituating a generally successful cognitive response is only rational. What one has learned from experience is too often thought of as an autonomous response; however, it appears to be merely a deprecation of less successful thoughts and behaviors, and the aforementioned habituation of more successful ones. The act of thinking before responding is merely an economy of this process.

The success of game theoretical understanding has uncovered some interesting arguments against determinism. Cheating and signal noise are chaotic components of the system: though reducible after the fact, discrete prediction isn’t likely. Since the ability to reduce these instances exists, cheating and signal noise are not likely candidates for autonomy either, because cheating and/or signal noise are themselves products of environmental stimuli as well.

We tend to truncate the evidence in reduction as well. We try to see causal factors in interactions, though the evidence suggests that all interactions are feedback loops. Our cognitive constraints are the likely reason for this, though they too are economic products of the environment. In order to be capable of reducing systems and interactions, we truncate them into hierarchies. These hierarchies are products of human cognition, not an accurate depiction of nature. The Bohmian view holds up to scrutiny much better, as does General Systems Theory, which doesn’t focus on hierarchies; rather, it focuses on prevalent systemic behaviors. These behaviors scale in our hierarchical accounts.

Personal note:

I’m having more success with General Systems Theory and Bohm’s Implicate Order than I could have anticipated. Though hierarchies are a part of my understanding of natural systems, the reality that nature is not in essence composed of hierarchies creates an interface between the two. I now think of systems as a fractal froth of discrete components with overlapping spheres of influence: none causal or responsive, but interactive and cooperative, or at risk of extinction. Biological systems are proving to be subject to this as well… even humans. This is the understanding I’m gaining from the sciences. It’s also allowing me to consider systems across a wide variety of disciplines, as the axioms provided by General Systems Theory are producing results that are expected by the various disciplines. Whoda’ thunk it? General Systems Theory appears to be a general systems theory.

Maximizing Prediction Value in Complex Social Systems Analysis


The founding principle of Statistical Mechanics is the notion that an understanding of the evolution of a system produces predictive value. This suggests confidence in the existence of normative function and a preference toward it. Normative function appears to be a fundamental requirement for prediction, as discrete prediction in an entirely chaotic system would be improbable. In masses of interactions beyond what humans can parse, probabilistic logic becomes a useful tool for producing predictive value. Statistical analysis of classical systems has become commonplace for this reason, and data analysis is proving its worth in recent times.

Probability in Classical Systems:

The cognitive constraints of humans have been a focus of tool production as of late. Concerns over the issues we face with finance, food production, energy production and environmental influence have become much more prevalent as the population increases. Data analysis has become a staple not only of scientific endeavors, but also of more general research.

The human brain can only reconcile about a dozen pieces of information at once. This falls far short of the number of significant interactions in complex systems. The most recent solution to this issue has been computer mediation. The combination of relevant information and narrow artificial intelligence has been very useful in sorting our thoughts on many issues. It has also produced axioms for the consideration of complex systems. The value of the data that is analyzed after the fact is the predictive value it produces, as it demonstrates normative function. It’s the statistical analysis of the data that produces the information we are interested in; the ability to make predictions about finance, food production, energy production and environmental influence is the desired human payoff.

The ability to produce predictive value lies in the ability to distinguish that which is interesting from that which is normative. This, as an axiom, can produce a methodology for assigning probability to possible outcomes. It does, however, require discrete information concerning the morphology of normative function. This is where probability becomes a useful tool in analyzing systems with large numbers of variables, as the axioms can guide statistics toward significant findings.
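One way to make the interesting-versus-normative distinction concrete (a minimal sketch of my own, not a method prescribed here) is a simple standard-deviation filter: observations that sit near the normative baseline are treated as normal, and far-flung outliers are flagged as interesting.

```python
import statistics

def flag_interesting(observations, threshold=3.0):
    """Split observations into 'normative' and 'interesting' by how many
    standard deviations each sits from the sample mean."""
    mean = statistics.mean(observations)
    stdev = statistics.stdev(observations)
    normative, interesting = [], []
    for x in observations:
        z = (x - mean) / stdev
        (interesting if abs(z) > threshold else normative).append(x)
    return normative, interesting

# A mostly normative series with one outlying event.
data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 9.7, 10.3, 10.0, 42.0]
normal, odd = flag_interesting(data, threshold=2.0)
print(odd)  # → [42.0]: the outlying event stands apart from the baseline
```

Real analyses would use far richer models than a z-score, but the axiom is the same: characterize normative function first, then assign attention (and probability) to what departs from it.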

Entropy, Normalization and Novelty:

Having some understanding of the morphology of complex systems is essential in producing axioms for statistical analysis. With a framework for classification and logic, the data can produce interesting and useful information. Much of this is already being accomplished with narrow artificial intelligence; the data is then rendered into information that is useful and understandable to humans. It is important that the end product be humanly intelligible, for obvious reasons. Producing an interface between large amounts of data and human cognitive ability seems an effective route to a functional tool.

Existing theory is more than adequate, as the evidence demonstrates that social systems form along similar axioms. Motivation toward self-interest and self-preservation produces correlation between normative function and behavior, because the natural alternative to normalization is extinction. Entropy, being nature’s creative process, doesn’t produce normative function; that is the function of Normalization. Entropy may produce novel properties or systems that are non-destructive to normative function; however, it’s Normalization that sets the standard. It’s the combination of Entropy and Normalization that then produces Novelty. Entropy, Normalization and Novelty are the fundamentals of morphology and development in all systems.

Chaos and Emergence:

All natural systems are subject to Chaos and Emergence, probably because of the prevalence of Entropy. Chaos is produced by large numbers of interactions that are difficult to parse, and chaotic systems often do not produce discernible patterns in repetition. This is a large hurdle for prediction, as changes in the patterning are also difficult to predict. It can, however, still be observed and expected, which is important when modeling, economizing and assigning probabilities. Emergence refers to the properties, components and systems that emerge to human surprise. Like chaos, it probably occurs due to large masses of interactions; it might also be a product of natural properties we are not yet aware of. This too can be expected statistically, and can aid in assigning probabilities.
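The hurdle chaos presents for discrete prediction can be demonstrated with the logistic map, a textbook chaotic system (this example is my own illustration, not drawn from the text): two starting points that differ by one part in a million stay close for a few steps and then diverge completely.

```python
def logistic(x, r=4.0):
    # One step of the logistic map; r = 4.0 puts it in the chaotic regime.
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Two trajectories that start almost identically...
a = trajectory(0.200000, 50)
b = trajectory(0.200001, 50)

# ...agree early on, but decorrelate within a few dozen steps.
print(abs(a[5] - b[5]))    # still tiny
print(abs(a[50] - b[50]))  # typically on the order of the system itself
```

This is why the article treats prediction as probabilistic: the statistical behavior of such a system (its distribution over many steps) remains stable and describable even when any discrete future state is effectively unknowable.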


For the purpose of clarity, it’s important to define and give context to what is meant by prediction. In this article, prediction refers to statistical significance. Complex social systems are the epitome of chaotic systems; the collection and analysis of data can only result in the assignment of a probability. It is not generally the intention to produce accurate predictions of the emergence of discrete facts at a specified time. For economic purposes, the intent would be to create models that have the highest probability of success as is practical. By aligning the model with statistically predictive axioms, this can be achieved overall.

Considering the evidence concerning the Second Law of Thermodynamics, one would expect Entropy to be the most prevalent aspect of morphology. The second most prevalent aspect would then be extinction, as Normalization and Novelty would be far less frequent occurrences. One could then mathematically, logically and systemically deduce that the difference between Extinction and Entropy is the sum of Normalization and Novelty. One could also deduce that the third most prevalent aspect is Normalization, as Novelty over time becomes normal. That which is normal contains the properties that produce Normalization; Novelty that coordinates with a critical mass of normal properties has a high probability of becoming normal and amending Normalization. Understanding the morphology of the system, in discrete processes, guided by the axioms, allows probabilities to be assigned to the success of those discrete processes.

Considering the evidence concerning General Systems Theory, discrete systems are influenced by overarching systems, as systems higher in the hierarchy produce initial conditions. The initial conditions are often normative and influence discrete systems via Normalization. Where Novelty is present, one might consider its economic advantages and weaknesses, along with the influence it might have on that which is normal. This may maximize long term predictive value.

Considering the evidence concerning the behavioral sciences, types of behaviors in individuals, and in large and small groups, are somewhat predictable. Being chaotic systems means not only that patterns can be difficult to demonstrate, but also that initial conditions are the focus of influence. Where behaviors are not congruent with the initial conditions, one might consider that some form of Entropy is at play, which might come in many forms. This also helps to determine the predictive value of the success of the behavior. Through the rigor of consideration with multiple axioms, the most predictive information as is practical is gathered; with the application of each axiom the information is refined, producing more accurate probabilities.
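The idea of refining a probability with each applied axiom can be sketched as sequential Bayesian updating (a minimal illustration of my own; the likelihood numbers are placeholders, not measured values): each piece of evidence shifts the estimate away from the vague prior.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Refine a probability estimate with one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1.0 - prior) * likelihood_if_false
    return numerator / denominator

# Start with a vague prior that a given behavior will succeed,
# then refine it with evidence from each "axiom" in turn.
# The likelihood pairs below are purely illustrative.
p = 0.5
evidence = [
    (0.8, 0.3),  # congruent with initial conditions (Normalization)
    (0.7, 0.4),  # economically advantageous (Novelty assessment)
    (0.6, 0.5),  # consistent with observed group behavior
]
for like_true, like_false in evidence:
    p = bayes_update(p, like_true, like_false)

print(round(p, 3))  # → 0.848: each axiom refines the estimate
```

The direction of each update depends on whether the evidence is more likely under success than under failure, which mirrors the article's point: information gathered under each axiom narrows the probability rather than delivering a discrete forecast.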


The difficulty of producing predictive value in Complex Social Systems Analysis can be overwhelming to the human psyche. It isn’t just the complexity of social systems that produces the response, however; the observer effect appears to play a large role in it. The fact that Behavioral Science is still socially in its infancy, and hasn’t yet had time to penetrate common knowledge, may be one factor. The frustration that comes with argumentation between individuals may be another. There is also the constant natural perception of our hunter-gatherer ancestry influencing our thoughts. We are intrinsically poorly suited to Social Science, as we are evolved and conditioned to the life of a hunter-gatherer. That, however, is not where we are. The difficulties we face are the product of Entropy; and the solution, by natural processes, will likely be either the normative coordination with initial conditions that leads to transcendence, or abrupt extinction.

Considering the probable outcomes that are likely to be either our future or our end may incite one to be adamant about public awareness. That, however, isn’t part of the model: public awareness is much more likely to produce Entropy than Normalization. The issue is with the state of human understanding in general; the issues in the paragraph above are the root. Human behavior is not necessarily a result of initial conditions; it’s more a result of the perception of initial conditions. A path toward normative function would likely be more effective for risk management. This means that a perception of initial conditions that produces normative behaviors would be the desired condition. It isn’t likely to result, as the current perception of initial conditions for human society in general is still entropic, and thus the vast majority cannot distinguish between normative and entropic behaviors. This doesn’t necessarily mean that humans cannot survive this phase. There are environmental factors that would likely produce normative, impulsive responses via self-preservation. This seems the most likely source of Normalization, as artificial perceptions are so prevalent. It also appears to be the most effective axiom for the purpose of economics, as it is dictated by the initial conditions.

Personal note:

It is likely that normative behaviors will result from the current state of the biosphere. What is in question is whether or not humanity, or the whole of biological life as we know it, will be a part of them. This article is in a sense optimistic, in that the impulsive responses that are likely to bring normative behaviors will be accompanied by the dangerous results of entropic behavior. There is no precedent in recorded history for social movements creating significantly normative behaviors; polarization is a part of our social paradigm, and has been since the dawn of civilization. The notion that society would all of a sudden wake up and start behaving in a normative fashion is just pure fantasy. Until the initial conditions are so dire that they have much greater influence than the polarizing entropy and the ineffective rituals associated with it, no significant normative behavior should be expected. It is probable that we will put ourselves and the rest of the biosphere at great risk before change occurs. Many are concerned that it will be too late; however, the arguments for that are weak. This is not the first time that humans, or even the rest of the biosphere, have been in a similar or even worse situation. Many scientists have stated that this phase of development (from Type 0 to Type I) may be the most dangerous; however, all phases appear to be wrought with extinction and existential risk for a wide variety of reasons. This type of danger appears to be as natural and as prevalent as at any other point in time; there appears to be nothing historically special about the dangers that lie before us. I think this notion is probably rooted in a misunderstanding of how we interact with our environment. We seem to have a bloated account of our influence on others, society, the biosphere and beyond.

Notes on Exploitation

In my lifetime Behavioral Science has come of age. This is an exciting time to be alive, because this new understanding of ourselves allows us greater discipline and more tools for the pursuit of happiness. It might also aid in our quest to become better ancestors and stewards. For many it is answering the big questions about the importance of ethics and morality. That is, of course, what it is doing for me. These are the questions I have been asking, and the answers that Behavioral Science is leading me toward.

Q: How is exploitation possible if free will doesn’t exist?

A: It’s not free will that is being manipulated; it’s functional stimuli being replaced with strategic artifices. Human behavior is a product of initial conditions, conditions which can be obscured, confused and misinterpreted. Behaviors under certain conditions are somewhat predictable, and creating perceptions of specific types of conditions often results in behaviors appropriate to the perceived conditions. It’s not so much that one is being manipulated as that one is responding to a perceived condition that is not likely to be the actual condition. This is the danger that dissimulation (lying) presents. In order for humans to behave maximally appropriately, we require maximally accurate approximations of the conditions; in order to produce a specific type of behavior, one only needs to create a perception of the conditions that the specified behavior is appropriate for. This includes forced behaviors, as self-preservation is a behavior. Self-preservation is also a part of financial coercion, as necessities are “financed”.

Q: How is one to describe exploitation in a manner that is scientifically coherent?

A: Exploitation in a systemic context isn’t necessarily unfavorable. The issues arise when unfavorable outcomes are the price. When unused resources are exploited in a manner that is generally cooperative with respect to the overarching system, this would probably be seen as normative. Where conditions are misinterpreted to produce responses appropriate only to a specific cog in the system, without consideration for general systemic function, this would probably be considered entropic. Entropy, however, isn’t necessarily unfavorable either. We find it unfavorable because normative function is in the interest of normative emergences such as biological systems. Entropy can be a wonderfully novel occurrence if the outcome is non-destructive to normative function. Entropy, Normalization and Novelty are a scientific trinity necessary for the fruitful existence of biological systems.

Where biological entities are allowed to produce novelty via distributed intelligence, all three aspects of the trinity can be maximized. This makes the system competitive and useful, or in Darwinian terms, fit. The exploitation that exists in human societies is probably holding humanity back in a systemic sense; it could, and has, resulted in abrupt extinction and existential risk. Treating social paradigms as an economy might help to produce more of what we refer to as liberty, while also providing the security that humans unwittingly strive for. This may mean that favorable and unfavorable forms of exploitation can be accounted for and distinguished scientifically. The probability of this is increased by the ability to scientifically analyze and describe initial conditions.

Q: Why do humans exploit each other?

A: There really is no scientific answer as to why; there is, however, a description of the conditions under which it happens. Humans have a general need for security that results from the natural predisposition toward self-preservation. Where this disposition is not accommodated, unfavorable behaviors can result. Natural systems appear to be tiered, hierarchical systems that support each other through cooperation; this is in essence what normative function is, with respect to our current understanding. This has produced dispositions toward certain types of behaviors associated with specific types of conditions. Where the conditions are obscured by some form of pathology, inappropriate behaviors can result. This can happen at the many tiers of human interaction. It can happen in a one-on-one capacity. It can happen in a family or circle of friends. It can happen in a community, or even in governmental structures.

This doesn’t necessarily mean that ill intention is the basis for exploitation. It could just as easily be a false perception of a need for defense against ill intention. It’s often a false perception of the conditions that results in unfavorable behaviors, and this too can happen at any of the many tiers of human interaction. The hierarchy appears to function as a unit, and thus each tier is affected by the rest.

Most might argue that a general cause is the failure to meet the basic need for various forms of security, or the perception of such a failure. There is also the possibility of physical or developmental damage or deformity. The latter, of course, is unlikely in the case of society-wide issues; there, it’s more likely that a more generalized false perception is the factor of interest. With respect to the behavior of individuals, however, most might default to considering insecurity based in false perceptions as the root.

Q: How can we most effectively address exploitation?

A?: By concerning ourselves with human needs, as opposed to human rights? By endeavoring to gain and share the most generally useful perceptions of conditions that are practical? By being just as aware of the conditions as of the behaviors? Through the negative utility of removing the conditions that promote insecurity? Are Positive Psychology and Positive Social Psychology the answer? I think so… for what it’s worth.

Issues With Rigorous Consideration of Modern Forms of Exploitation

Noticing the nebulous language in my initial argument for naturalized socioeconomics concerning modern forms of slavery, I decided to work on a more rigorous survey. This has presented many issues concerning nomenclature, definitions, measurements and the meaningfulness of axioms. With nomenclature, it’s sometimes debatable whether a name is acceptably descriptive. Concerning definitions, it’s difficult at times to categorize specific conditions, as they fall into gray areas or fit multiple definitions. Measurement is even more of an issue, as scientific descriptions have to be rigorously supported with scientific evidence; this becomes a sizable problem when there is little to no evidence supporting a founding concept. The axioms themselves are unscientific in so many ways that science often lacks the tools to address them. This, in many cases, leaves the matter in the hands of philosophical disciplines such as Ethics.

When trying to do the math concerning the prevalence of slavery in modern times, there were issues with nomenclature, like the term human trafficking, and issues with definitions, as with extreme financial coercion. There are obvious gray areas concerning choice and the lack thereof; however, this isn’t really an issue, as all forms of exploitation are ethically unacceptable. The obvious route to a solution is to address the whole of exploitation. This, however, has its difficulties as well. In order to rigorously describe and argue against exploitation, one must first demonstrate it in a scientific manner. For the purpose of scientific study, “I know it when I see it” just isn’t good enough.

The difficulties begin with clearly defining what exploitation is. Using terms and theory from Behavioral Science isn’t as helpful as one might think. For instance, defining exploitation as manipulation requires a scientific description of manipulation, and this runs into trouble because there is no theoretical principle underpinning manipulation. The problem is with the description of manipulation itself: it essentially requires that a person have a certain level of sentient autonomy. Whereas this is the perception of most humans, there is little to no evidence for it. This is a problem because perceptions alone are not good enough for scientific description; this is fertile soil for the Observer Effect and the like. That being said, this isn’t just a scientific issue; it’s an epistemological issue as well.

An additional issue in scientifically arguing against exploitation is rigorously arguing against exploitation as an axiom. This requires demonstrating that it is generally unfavorable, and again, “I know that it is” just isn’t good enough to call science. This presents a problem concerning natural distributions of leadership qualities: most people aren’t inclined toward leadership, nor do they have the “will” to make the big decisions. Though there is a large difference between investing in the strengths of others and a tiny minority exploiting the vast majority, the axiom itself is somewhat unscientific, because the condition could, and probably should, be considered natural regardless of whether it’s accepted by society. This is of course tempered by systems-theoretical axioms, since social acceptance is part of the equation; however, the level of social acceptance concerning exploitation has not been static.

Even with ethical consideration of social exploitation, there arises an issue with where to draw the hard lines. What is or is not socially acceptable at any given time is merely generalized and not rigorously defined. There is some undeniable subjectivity to the prospect.

Maybe we shouldn’t concern ourselves with the fine, discrete details if doing so hinders our forward progress toward a generally higher standard of living. If there are some minor details that we are incapable of ironing out, why “pet the sweaty stuff”? Because it will matter greatly to the tiny minority that we allow to fall through the cracks.