It may be beyond current human understanding to produce a coherent socioeconomic system. There may, however, be a methodology that would maximize coherence in a naturalized model. Treating a socioeconomic system that coordinates with overarching natural systems as an experiment, conducted with a scientific methodology, may produce such an effect. The goal of this open project is to explore the feasibility of such a system, as it may be required for extinction and existential risk management.
A naturalized socioeconomic system would be one that exhibits the normative properties generally found in self-organizing systems. Being normative, it would of course cooperate with the bulk of self-organizing systems. It might also cooperate with novel systems, to varying degrees. This would suggest dynamics and self-organization within it as well. To maximize the probability of such outcomes, a scientific methodology might be the most viable model for its functions.
The scientific method boasts a wide variety of strategies for producing maximally empirical data and coherent understanding. Applying this method to socioeconomics, as it is applied to economics as a science, might produce a socioeconomic system that is adaptive and resistant to extinction risk. Considering the influence that such a system might have on the general ecology, it may also produce resistance to existential risk.
A naturalized socioeconomic system would be distinguishable from current socioeconomic models mainly by its unified structuring. Current models are based solely on the use of currency to distribute resources. This is tempered with political influence in an attempt to see that resources are distributed fairly. Though this has a long history of cyclic failure, it has resulted in advancement and some degree of self-correction with respect to the severity of the unfavorable outcomes associated with the coercive aspects of currency systems. Pathological exploitation, however, is still a large part of socioeconomics. This appears to be a product of truncated models.
A naturalized approach differs from financial / political models mainly in its goals. Rather than attempting to coerce desired behaviors through brute-force social influence, a naturalized model would nurture existing natural behaviors to produce favorable outcomes. The former is probably a product of the ancient political structuring of Coercion; the latter is probably a product of the primordial structuring of natural Normalization. Normalization is, however, itself coercive when considered through a unified lens: the payoff of normative behaviors is a lack of extinction risk. The difference between the current model and a naturalized model might be that coercion is maximally minimized, with systemic coercion reduced as much as humanly possible rather than employed for discrete, minority influence.
It would stand to reason that the principles of a unified hypothesis would base self-interest on general viabilities that coordinate with each other to fortify each other's effect, in a distributed manner. The longevity of any system appears to be rooted in how effectively it coordinates with a critical mass of pre-existing normalized systems. Therefore, the priorities should be Risk Management, Synergy and Distribution, respectively. This view is based upon the works of Adam Smith and David Bohm's "Implicate Order".
As previously stated, Normalization is coercive, as it is the natural alternative to extinction risk. This has been shown to be a basis for normative behavior, including the behaviors of self-organizing systems and of humans. It is unlikely that humans will have the ability to mitigate this in the foreseeable future; thus it should probably be included in the approach as a basis. Though positive utility associated with human interests would be part of the model as well, it would likely be tied naturally to the rewards and consequences of interaction with the overarching systems. The possible rewards are more likely to be put at risk without risk management. This suggests that risk management should be prioritized.
One of the more unified views of physical existence is the reduction of interactions into Entropy, Normalization, Novelty and Extinction. This is of course based upon the probabilities of possible types of outcomes from types of interactions. Risk Management in a naturalized socioeconomic system might, for the time being, best be described with these principles. Since Entropy is the most common type of interaction in self-organizing systems and Extinction is the most common outcome, the risks are apparently generally high. In self-organizing systems, normative behaviors are observed to mitigate the risk of extinction. Novel behaviors, though not as likely to mitigate extinction risk, are generally non-destructive to normative function. Novelty can also provide dynamics that may result in the amendment of Normalization, and thus in its adaptability. That being said, the difference between Entropy and Extinction is the sum of Normalization and Novelty. This appears to describe a natural risk management function.
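The closing claim might be notated tentatively as follows, under the assumption (not made explicit above) that entropic interactions resolve into one of the other three categories, so that, in rates or probabilities:

```latex
P_{\text{extinction}} \;=\; P_{\text{entropy}} \;-\; \big(P_{\text{normalization}} + P_{\text{novelty}}\big)
```

That is, the difference between Entropy and Extinction is the sum of Normalization and Novelty; extinction is what remains of entropic interaction when neither normalization nor novelty intervenes. This is one possible reading, offered as a sketch rather than a definitive formalization.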
The synergistic principle of Open Naturalized Socioeconomics is Polyocracy. It is characterized by socioeconomic establishments that coordinate with each other as having equal importance. It is intended to mitigate the disenfranchisement that leads to fragmentation and dysfunction. It is the history of such dysfunction that supports the notion that each systemic component is generally necessary.
Polyocracy in more detail is based upon the coordination of instrumentalistic and proceduralistic methods with a choice of judiciary and diplomatic support. This is to promote the coherence of law, security, public policy and economics. Understanding the importance of synergy in the system requires the understanding that each component has influence on the stability of the system. For instance, Instrumentalism is the epistocratic, top-down influence, and Proceduralism is the bottom-up, practical application. The coordination of these two is more likely to produce stable economics. This is based upon General Systems Theoretical models. Instrumentalism would be characterized as an Archetype, and Proceduralism would be characterized as the Particular. This would in essence be an Epistocracy providing services for the general public's projects.
The model is based on a common tuple. The Particular is represented by T (time), U (input), Y (output) and Q (state). This states that an input to the system will, over time, produce an output that results in the system being in a particular state. The Archetype is represented by Ω Omega (admissible input), δ delta (transition) and λ lambda (observed output). This states that an admissible input (an input that is likely to produce the desired output) initiates a transition in the state of the system, resulting in an observed output to compare with the desired output. This might allow citizens to develop models that coordinate with overarching systems, increasing the probability of producing the outcomes they themselves desire. This is intended to unify the populace with the state, on the General Systems principle of coordinating subsystems with overarching systems.
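The tuple above can be sketched as a discrete-time state machine. This is a minimal illustration only: the class names, the toy transition rule and the toy observation rule are all assumptions introduced here, not part of the model as stated.

```python
# Minimal sketch of the Particular (T, U, Y, Q) driven by an Archetype
# (Ω, δ, λ). The dynamics below are illustrative placeholders.

class Particular:
    """Concrete system: input u over time t yields output y and state q."""
    def __init__(self, q0):
        self.q = q0            # Q: current state
        self.t = 0             # T: time step

    def step(self, u, delta, lam):
        """Apply one admissible input using the Archetype's δ and λ."""
        self.q = delta(self.q, u)   # δ: state transition
        self.t += 1
        return lam(self.q)          # λ: observed output

# Archetype side: admissible inputs Ω, transition δ, observation λ.
OMEGA = {-1, 0, 1}                  # Ω: inputs deemed admissible (toy set)
delta = lambda q, u: q + u          # δ: toy transition rule
lam   = lambda q: q * q             # λ: toy observation rule

system = Particular(q0=0)
for u in (1, 1, -1):                # only admissible inputs are applied
    assert u in OMEGA
    y = system.step(u, delta, lam)
print(system.t, system.q, y)        # after 3 steps: state 1, observed 1
```

The observed outputs of the Particular can then be compared against the outputs the Archetype's model predicts, which is the comparison the text describes.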
Epistocracy is predicated on the notion that those who best understand the issues are best employed toward their solution. This is obviously the case; however, in complex social systems, distributed intelligence is probably required to do so. This suggests that sharing between the Epistocracy and entrepreneurs might be a more effective model for addressing the complexity. This also enhances security, as it inhibits the emergence of singular points of failure. It might also maximize the efficiency of human resource management and the emergence of novel systems. That being the case, an increase in the rate of advancement might be expected.
A scientific approach toward socioeconomics currently appears the most likely to be effective. The scientific method may produce more success than any other model to date. This may be because failures are more often used as useful data, and failed projects aren't often revisited repeatedly. The scientific method has demonstrated that economics is indeed a measurable endeavor, and there is a wealth of data to apply toward it. More recently there has been more cross-disciplinary study to promote the unified theory that is probably required for scientific socioeconomic modeling. Cross-Disciplinary Inferential Statistical Analysis may be the proper vehicle for a naturalized socioeconomic paradigm. General Systems Theory seems the likely candidate for a basis.
The lack of precedent for a naturalized socioeconomic system in human history presents an interesting issue. It means economic modeling without real-world test cases to reference; thus the suggestion to treat the system dynamics as an experiment in themselves. The Archetype is a proverbial claim of sorts, and the Particular is a test of the claim. Where the Archetype is supported by the results in the Particular, the Archetype is unchanged. Where it is not supported, it is amended with the empirical data produced by the Particular. This would, of course, require a number of independent test cases. With complex social systems, there is also often likely to be a third option: individual instances are unlikely to have been conducted under similar initial conditions. This would obviously have to be taken into account. This might be a new path for research.
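The claim-and-test loop above can be sketched as follows. The tolerance threshold, the function names and the toy Archetype are all illustrative assumptions; the text specifies only the qualitative procedure of keeping supported claims and amending unsupported ones, while retaining each case's initial conditions.

```python
# Sketch of the Archetype-as-claim / Particular-as-test procedure.
# Cases whose prediction error exceeds `tolerance` are returned for
# amendment, paired with their initial conditions -- the "third option"
# of dissimilar starting states is recorded rather than discarded.

def run_experiment(archetype_predict, test_cases, tolerance=0.1):
    """Compare the Archetype's predictions to Particular outcomes.

    Each test case is (initial_conditions, observed_outcome).
    Returns (supported, to_amend) lists of cases.
    """
    supported, to_amend = [], []
    for conditions, observed in test_cases:
        predicted = archetype_predict(conditions)
        if abs(predicted - observed) <= tolerance:
            supported.append((conditions, observed))
        else:
            to_amend.append((conditions, observed))
    return supported, to_amend

# Toy Archetype: claims the outcome equals the sum of initial conditions.
predict = lambda conditions: sum(conditions)
cases = [((1, 2), 3.0),    # consistent with the claim
         ((2, 2), 5.0)]    # contradicts the claim -> flagged for amendment
ok, amend = run_experiment(predict, cases)
print(len(ok), len(amend))  # 1 1
```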
Complex social systems are the epitome of chaotic systems. Treating them as an experiment probably requires accounting for a large number of variables when considering individual test cases. Associating the initial conditions with the outcomes may be the only way to address test cases that are not conducted under similar conditions. The current rise in data sharing and analysis may be a path toward the viability of such a model.
Tim Berners-Lee has suggested that the "next web" be centered on data sharing, supporting the idea that data in general is more useful than is commonly appreciated. A naturalized socioeconomic system might, for instance, use such data to account for relevant variables. A decentralized network that allows the Archetype to share data with the Particular, for both empirical data collection and practical problem solving, might be a viable model for an efficient, effective, unified socioeconomic system.
An example of such a decentralized network may be something similar to Ethereum. A number of securely synced full nodes securing copies of a main database, with relatively large-scale human resources maintaining and analyzing the data, might be an adequate foundation for an Archetype. A larger number of staffed light clients, with public user access, might be an effective interface for the Particular. Entrepreneurs would be able to share data and search for solutions with such a network. This might be a viable model for a prospective .eco fork of the "next web".
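The full-node / light-client split might be sketched like this. This is a toy in-memory model, not Ethereum code; the class names, the naive sync-to-every-node strategy and the example data are all assumptions introduced for illustration.

```python
# Toy model of the described network: full nodes hold the main database
# (Archetype side); light clients provide public access (Particular side).

class FullNode:
    """Archetype side: holds a full copy of the shared database."""
    def __init__(self):
        self.db = {}
    def write(self, key, value):
        self.db[key] = value
    def query(self, key):
        return self.db.get(key)

class LightClient:
    """Particular side: public interface that defers storage to full nodes."""
    def __init__(self, nodes):
        self.nodes = nodes
    def share_data(self, key, value):
        for node in self.nodes:          # naive sync: write to every full node
            node.write(key, value)
    def search(self, key):
        return self.nodes[0].query(key)  # any synced node can answer

nodes = [FullNode(), FullNode()]
client = LightClient(nodes)
client.share_data("soil_ph", 6.8)        # an entrepreneur shares a data point
print(client.search("soil_ph"))          # 6.8
```

A real deployment would of course need consensus, authentication and replication strategies that this sketch deliberately omits.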
Throughout history, the only type of instance that has been shown to initiate fundamental socioeconomic change has been critical failure. Loss of confidence in the system promotes a general search for alternatives. For instance, the many failures in the British Empire to prepare for the industrial age may have been the main catalyst for the advent of Capitalism. Feudalism just wasn't adequately compatible with industry. Adam Smith's approach was to analyze the system and attempt to create solutions for not only an adequately functional industrial socioeconomic system, but also a more favorable one. This was a more daunting task for Smith, as he didn't have the behavioral and societal understanding that we have today. Nor did he have the understanding required to employ statistical, mechanical models whose basis is the evolution of systems; he died nearly two decades before the birth of Charles Darwin. These facts are reflected in his brilliant but lacking analyses.
There are a great many failures accruing in our transition from an industrial to a technological society. Many of them are due not only to an inability to secure products and services from pernicious outside and inside forces, but also to an accelerating path toward aggregated ends. Ray Kurzweil is known partially for his Law of Accelerating Returns: the hypothesis that technological advancement accelerates exponentially over time. This appears to be true. It also appears to have an interesting side effect: emergent technologies have been aiding in the aggregation of wealth and markets. For instance, the automation of the stock markets, the advent of the startup and online shopping have been significant influences on growth rates, but have not been generally convenient for small business. Wealth and markets are now easily aggregated, and markets are now quickly saturated. A list of the financial crises throughout history shows how the rate at which crises come about has accelerated. Plotted out, it would be an increasingly steep curve.
The Crisis Cycle:
The Crisis Cycle is a term that represents the consistent consequence of the aggregation of wealth and markets. During the growth phase, state economies are relatively stable. At growth maximum, instability accrues until the economy rapidly declines. What follows, of course, is recovery. This is where acceptance of more fundamental change is more likely, and historically where more fundamental change has happened. It's the lack of confidence in the model that produces change in complex social systems. This tends to occur only when the consequences are so severe that the populace generally agrees upon the inequity in the model.
“You never let a serious crisis go to waste” ~Rahm Emanuel
Socioeconomic systems are self-organizing systems. Even attempts at correction in unstable systems are unlikely to produce results, because of the nature of human thought. There is a difference between the normal and the normative for this reason. Humans, being a product of both their immediate and general environment, experience cognitive dissonance when the immediate environment is significantly a product of human hubris. That which has become socially normal is confused with that which is generally normative. This results in a destructive layer of complexity that is resistant to the normative appeal that tends to produce self-correction. This is probably why self-correction tends to occur only after severe consequences.
In the event that socioeconomic instability reduces public confidence in the current model, contingencies would have a high probability of being seriously considered. Using methodologies that are likely to produce feasible solutions for review during crises appears to be the most effective strategy for effecting fundamental change. Rather than leveraging time and resources toward public awareness, and thereby taking them away from producing effective contingencies, focusing on the research and development of contingencies is more likely to produce prospective solutions with the desired result. It might be best to concentrate on the quality of the work rather than on selling the idea to the public, which has become second nature in this capitalistic society. Being prepared for crises with solutions to fundamental issues is probably the best strategy for any archetype.
Much of the political discourse of late concerning the Civil War is grossly oversimplified. This includes the notion of the war being centered on the South's will to preserve slavery. This is a purely political opinion and not so much a scientific one. It's another account in the massive historical pool of accounts of political support of existing industry. It's also an account in just as large a historical pool of accounts of disenfranchisement of the majority.
“Money is Power”:
The Limited Liability Act of 1855, the Joint Stock Companies Act of 1856 and the Companies Act of 1862 concerned President Lincoln more than the Civil War itself.
“I see in the near future a crisis approaching that unnerves me and causes me to tremble for the safety of my country. . . . corporations have been enthroned and an era of corruption in high places will follow, and the money power of the country will endeavor to prolong its reign by working upon the prejudices of the people until all wealth is aggregated in a few hands and the Republic is destroyed.” ~Abraham Lincoln
The distribution of slave-owning plantations was small as well. The concerns that saturated markets entail were the concerns of the average farmer during this time. The corruption that existed in Washington also existed in the Southern states. Large economic entities (large plantations as well as corporations) are the ones that have the most impact on economies; losing them equates to a significant loss in the economy itself. This is, in essence, the leverage that large industries and financial institutions employ when interacting with governance. The plantations in the South had economic leverage over legislation in the Southern states, and thus over Congress, similar to the leverage that corporations had over legislation in central governance. President Lincoln's (R, when it meant something) concerns revolved around the threat to the Republic. To President Lincoln, the war was not about slavery; it was about the loss of governmental power.
Politics in the Southern states was vastly influenced by the plantations that produced cash crops. The writs of secession were thus larded with anti-abolition entitlements. Less than one third of Southern soldiers came from families with income from slave ownership. For the rest, their perspective of the war was one of fighting against an invasion. They were disenfranchised and, in essence, along for the ride. They were being invaded through no fault of their own and were concerned about the safety of their homes. There is, however, more to it. This was a time when the country was being industrialized. The advent of the barn engine, and subsequently the tractor, was likely to end slavery regardless of the outcome of the war. Over a period of decades it became increasingly difficult to sustain a family on a family farm, because of the commodification of farming and expenses like shipping via the railroads. The railroads, for instance, were charging high prices due to a lack of competition; this was part of the corporate influence on the economy. Many family farmers were pushed toward employment by the automation of farming. By then, Reconstruction was under way and working tractors existed. This change would likely have had an effect on the motives of Southern soldiers to fight in the war, though more as a subconscious, impulsive behavior. The rejection of the change being imposed upon them was likely a strong motivator.
“It Was About Slavery”:
The notion of what anything is "about" is entirely subjective; it is not a scientific notion. It can, however, be criticized scientifically. It appears that, to the vast majority, the war was not about slavery at all. Most of the people wielding influence were aware that slavery was nearing obsolescence through automating technologies. This is uncomfortable for most to realize, due to the will to be masters of one's own path. The abolition movement would likely not even have existed if there were no replacement for slavery. It "coincided" with the advent of the barn engine, and slavery was made illegal coinciding with the advent of the tractor. Civilized humans tend to take credit for what emerges and, until then, take advantage of each other. History is full of precedent attesting to that fact.
The Civil War was an interesting though trying time in our history. Even today it influences politics with various truncated opinions that divide the populace against itself. This is concerning on many levels. In order for one to vote one's conscience, there is a requirement that one be educated on the subject matter. This is not something that the Industrial Revolution has promoted in American culture. Political discourse has since become the brandishing of truncated factoids taken out of context in order to prolong the division. President Lincoln's worst fear has come to pass.
Revolt does not come from a population that is oppressed alone. It comes from one that is also uneducated. The struggle between political and economic influence tends to produce both. There is a great deal of precedent in our history that attests to that fact as well.
The world view that has been in development since the dawn of civilization has produced a wide variety of favorable attributes. It has, however, produced unfavorable ones as well. When such a large influence as collective consciousness is wielded as a control measure by a minority, the outcome is most likely going to be large-scale disenfranchisement. This has been empirically demonstrated in written history and in common practice. The most disconcerting aspect of this fact is probably the billions of lives that it has influenced in an unfavorable way; though the current extinction and existential risk obviously warrants much concern as well.
Control Based “Work Ethics”:
The current view of work is generally one of sacrifice for the common good. Though it’s often the case in practice, it’s also a product of the view itself. The manner in which society is developing is highly dependent upon the views of the society itself. There is a “group think” type of hive mind that is guiding social structuring. Such a view of reality ensures the acceptance of the solutions that are produced.
Humans tend to think in a top down type of manner. Axioms provide a conceptual model for the development of solutions to problems. If the conceptual model is incoherent, the solutions are more likely to produce unfavorable effects. When leaders spread unsupported dogma in the place of awareness of maximally accurate approximations of the initial conditions, for the purpose of personal gain, the outcome is more of a control measure than an attempt at socioeconomic solutions.
A false dichotomy of leaders and followers is also part of the current world view. Though there is a natural predisposition to invest in the strengths of leaders, natural systems diversify, en masse, in a distributed manner. Allowing small numbers of leaders to aggressively guide society is probably hindering novelty to a significant degree. This probably means that much more advancement might have been possible if distributed intelligence had been employed in the initial model of civilization. Instead, top-down influence produced exploitation of the vast majority of the population, which is as prevalent today as it has been for thousands of years, though its severity has decreased significantly.
It would appear that contribution to society was originally a purposeful and meaningful experience that was generally passion-provoking and fulfilling. It was likely more appreciated as an important part of social interaction. It was also probably more enjoyable than most of the experiences that people have with employment.
The Separation of Education and Entertainment:
In all mammals except civilized humans, education develops through play. Animals play to learn useful life skills. Humans, however, have separated entertainment from education for the sake of acclimation to the work ethic. Entertainment thus became a reward for contributing to society.
It would stand to reason that a fun, game-like educational experience is what humans have evolutionary predispositions toward. This being the case, an educational system framed in that manner is likely to be the most successful model, as humans are likely to be intrinsically suited to it. Instead, educational models are too often taken for granted or underappreciated, due to incoherent socioeconomic views. For instance, many have asked, "Why invest in the education of someone who is doing unskilled labor?" This truncates the issue to the "means of production," when education is originally for the purpose of learning life skills in general.
Thinking of education as a lengthy endeavor of hard labor is probably ensuring that a large number of young people fail at it. It’s probably supposed to be fun. The dividends from making it fun again would probably be enormous.
The Prevalence of Punishment and Lack of Positive Reinforcement:
The general world view, when it comes to criminal behavior, is that people have a choice, and the wrong choices should be punished. There is more positive reinforcement now than in previous centuries, but punishment is more often than not the first and only action taken. This is a complex and complicated issue when considering the sovereignty of the mind, but erring in the first action tends to be most common.
The manner in which drug addiction is addressed is a good indicator of where we might make some progress. For instance, it's difficult to stage an aggressive intervention because of concerns about infringing upon the rights of the individual. It's also likely that the individual will require some serious consequence before the denial of the seriousness of the situation becomes evident to them. Given the ethical considerations, that is probably where the first action should take place. When the first action is punishment alone, the success of the action isn't being maximized; it's not even really being promoted. What's usually happening is that those who have been victimized by the actions of the addict are being avenged. Often it's also the addict being punished for possessing an illegal substance. This doesn't really promote the correction of the addict's behaviors. For that, positive reinforcement would probably be a more effective strategy. If getting help for the offending condition were part of the terms of release, the success of the action would probably increase dramatically. Without it, the chance that the addict will continue to victimize others is not being mitigated. The system should take responsibility for this, and it generally doesn't.
In recent times, there has been significant maturation in local law enforcement. City police are employing psychological tools for recognizing not only deception but also red flags for mental illness and disorder. The local courts are also more sensitive to such things. This, however, worsens up the ladder. The states are being pressured not only by local governance but also by federal governance, and the difference in world view is evident as well. There is, of course, the fact that federal cases are often severe and highly publicized. This is probably feedback from an incoherent general world view. It does not change the fact that the current approach is much less likely to be successful than one that includes the added positive utility of positive reinforcement. The costs of dealing with repeat offenders are probably greater than the costs of mitigating repeat offenses.
It doesn't seem likely that the life of an addict, in its darkest hours, is a good life. The well-being of the individual should be a concern, as well as the protection of those whom the individual might harm.
Polarity and Blame:
The two-party political model often prevents favorable change, because the tendency to choose a side and stand by it keeps the populace polarized and incapable of effecting change. The issues cannot be effectively addressed if there is a roughly 50 / 50 split on the solution. The lack of change is often blamed on the opposing party, and the problem is allowed to go unsolved. There are also the issues that arise from more personal values, which probably shouldn't be political issues at all. This distraction from favorable change is constant and pernicious in its ability to pit one half against the other, again with no solution.
The view that one might gain from the work of Steven Pinker is one of a world that is getting better over time. This is probably because it's derived from the developments of civilized society alone. The mistakes that were made early on influence society even today. When considering extinction and existential risk factors, the assessment may be going in the other direction. Though Graham Hancock has made some large mistakes concerning the myth of Atlantis, he has a poignant view of the level of self-awareness of recent humans. His suggestion that humans are a "species with amnesia" appears to be true on many levels. Though I don't think that we have entirely forgotten who we are, our failed attempts to reinvent ourselves have all but severed our deep connection with nature and left us in critical danger. This doesn't, however, mean that this condition is necessarily fatal. We can't entirely forget who we really are. Our base instincts are probably much more influential when the dangers become generally obvious. The purpose of this article, however, is to illustrate the notion that life is supposed to be better than it is, and that a better understanding of ourselves and our deep connection to nature is probably the solution.
Endeavoring to model a social system that serves the purposes of all of its components in a synergistic, and thus symbiotic, manner should be the focus of anyone who wishes to produce a successful, thriving society. Though an initial model isn't likely to represent all of the subtleties of the finished product in practical application, the basic principles of the model are likely to have a great deal of influence on it. The model is a framework that enforces the approach, which guides what solutions are sought along the way. Though it's the individual solutions, which aren't necessarily intuitive, that enable practical application, the top-down influence of the initial model will likely shape the synergistic function, as the initial model is considered when choosing discrete solutions. The focus of this article is to present an example of an initial model for such a synergistic social system.
Polyocracy is a principle predicated on the ubiquitous sharing of responsibility for the function of the social system. This requires the understanding that each establishment is essential to its success. This suggests a common interest in synergy that is likely to produce sustainable success. Disenfranchisement of any of the essential components would thus likely have an unfavorable overall result, one that would eventually result, and historically has resulted, in the general failure of the system. Polyocracy is, in essence, mutual respect for the vitality of each and every societal structure.
Polyocracy on a global scale serves to promote diplomacy with other forms of governance. Historically, nationalization has produced a host of unfavorable results, expressed in short-term financial patchwork that eventually resulted in financial crisis and social unrest. Since the eventuality is similar with either approach, diplomacy is the logical solution.
Education may be the most important aspect of social contribution. Whether it be hard scientific application or spiritual guidance, knowledge of the practice, and of how it would be either a general service or generally non-destructive, is essential in polyocratic policy. A balance between personal liberty and social responsibility can only be achieved if the understanding that everyone is necessary is promoted through curriculum. Historically there have been a number of social movements that have divided societies, based upon a wide variety of beliefs, that have been detrimental to the overall function. These include political parties, religious backgrounds, class systems, exploitation, gender roles, etc. Education is likely the most valuable tool for sorting the issues that a developing society has to face. This might suggest that education be fully promoted and open-access at all levels. This would maximize the potential for an educated populace.
A secure society entails a large number of contexts that would require a large number of solutions. Much of this cannot be predicted in foresight; however, our current hindsight can be a useful tool in modeling an initial foundation.
Global security is an extremely complex subject that may best be approached first through a combination of judicial and diplomatic practices. An attempt at diplomacy is probably the most reasonable initial course of action. Adjudication should also be part of this process for the purpose of promoting fairness, so that systems are in place in the event that diplomacy fails. Defense would of course be part of the security structure as well.
Domestic security is also very complex, though a little more manageable with respect to the legalities, because the laws enforced by the Judiciary are always either applicable or candidates for change. These outcomes are made possible by diplomatic and/or judicial interaction. The interface for the Judiciary would of course be the courts and police forces; the interface for the Diplomats would be the public representatives.
Legal representation might best be served by a combination of the practical application of adjudication and the epistecratic application of Instrumentalism. The ability to enforce laws that serve society depends a great deal on the coherence of the laws themselves. An epistecratic approach to lawmaking might minimize instances in which important, or even critical, systemic issues go unrepresented. The interface for Instrumentalism would of course be academia.
Public policy would best be determined through interaction between the public and public representatives, and might be served by diplomacy and proceduralism. It stands to reason that the most favorable outcome for the general populace is probably the one that functions most favorably in practice. The details, of course, would be hashed out through diplomacy.
Economic influence has been a point of contention, both globally and domestically, for all civilizations throughout known history. The most effective way to approach global economics may be to produce a domestic economy with unprecedented success. A combination of Instrumentalism and Proceduralism may produce much more favorable results in practice, as combinations of epistecratic modeling and real-world proceduralistic testing have historically resulted in general advancement.
Maximally efficient and effective function of all of the components of a social system is likely to produce a thriving society. Historically, thriving societies have enjoyed more evenly distributed wealth and have tended to promote civil rights. Where there is less concern over stability, there tends to be much more cooperation; it also seems likely that the reverse is true, and that added cooperation results in stability. Previous models don’t account for this. The Old World view is based upon an expectation of competitiveness that evaporates when society is growing rapidly and returns at maximum growth. Cooperation is expected when the system is functioning relatively well; therefore a model that promotes function, rather than one that promotes competition, is in order for the general good.
Disorder has many definitions; even scientific disciplines define it differently. The clinical disciplines, for instance, characterize it by one’s diminished ability to function. This seems a little truncated from a systems perspective, because it doesn’t coordinate with sociological questioning of the social order, much less with the dogma that must exist in even our most unified models.
The issue appears to be a lack of congruency between the hierarchical models that we construct for understanding natural systems. Implementing a more unified understanding of systems seems to require central dogmas that scale well within a general systems model. Even so, Entropy, which produces far more disorder than it ever will order, is by far the natural norm. Disorder is in essence the rule; order is the exception.
The impulse toward order appears to be rooted in self-preservation. This makes the view a candidate for clinical justification, as the interest of the patient is prioritized. It doesn’t, however, produce the most viable model for understanding.
My efforts toward Naturalized Socioeconomics are encountering these kinds of semantic obstacles in almost all of the scientific disciplines. Even in physical science, there are instances where Entropy in a system stands to reason but can’t be demonstrated empirically due to complexity. Does this situation warrant dismissal of logic that appears to be self-evident? Are we to assume that natural systems are not generally compatible because we have no accepted model for unification? Is our description of Entropy truncated in its thermodynamic representations? Is a new general systems axiom required to solve this issue? It would appear that a Bohmian perspective would require congruence between the disciplines, in order to provide a model that allows the rigor favored in general systems analysis.
Hierarchical Reasoning (and the lack thereof):
The human brain can only parse about a dozen pieces of information at a time. This cognitive constraint of human neurology probably produced the reductive aspects of music. Songs tend to be broken down into pieces like intro, verse, chorus, bridge, and outro. An octave spans eight notes of the diatonic scale, though many wavelengths of sound lie between each note. These wavelengths could probably be accessed and utilized; however, many instruments are designed to play only the traditional notes and chords. This isn’t a bad metaphor for the way that humans think: we do the best that we can with the instrument that we have. On that same “note,” our neurological resources can only reconcile a small degree of complexity. Hierarchies appear to be our cognitive solution to this issue; we search for more generalized, ordered patterns in order to analyze more complex systems.
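The discreteness of the scale can be made concrete. In equal temperament, each semitone multiplies the frequency by the twelfth root of two, so twelve steps double it, and a continuum of pitches lies between any two adjacent notes. A minimal sketch (A4 = 440 Hz is the standard reference pitch; everything else here is illustrative):

```python
# Equal-temperament pitch: each semitone step multiplies frequency
# by the twelfth root of two, so twelve steps double it (an octave).
A4 = 440.0  # standard reference pitch in Hz

def pitch(semitones_from_a4):
    """Frequency of the note a given number of semitones above A4.
    Non-integer inputs land on the pitches 'between the notes'."""
    return A4 * 2 ** (semitones_from_a4 / 12)

octave_up = pitch(12)      # A5: exactly double, 880 Hz
quarter_tone = pitch(0.5)  # a pitch halfway between A4 and A#4
```

The function accepts any real number, which is exactly the point: the instrument design, not the physics, restricts us to the integer inputs.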
We humans also do this with quantum systems. Measurement of particles tends to change the state of the particle, because we have no way of observing elementary particles without directly interacting with them. Since interaction influences the particle’s state, there is no technology for passive observation to date. This means we must find (and have found) methods for approximating the state of a particle without measuring it directly. The method that we employ influences our perspective and thus our models. We now see Quantum Mechanics as probabilistic because we use probability to infer within it. This doesn’t, however, mean that particles behave probabilistically. Once again, we’re doing the best that we can with the instruments that we have.
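That inferential step can be made concrete. A minimal sketch of the Born rule, which maps a state’s complex amplitudes to the outcome probabilities we actually observe (the example amplitudes below are illustrative, not drawn from any particular experiment):

```python
import math

def born_probabilities(amplitudes):
    """Born rule: the probability of each measurement outcome is the
    squared magnitude of its complex amplitude, normalized to sum to 1."""
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# An illustrative two-outcome (qubit-like) superposition state.
state = [complex(1, 0) / math.sqrt(2), complex(0, 1) / math.sqrt(2)]
probs = born_probabilities(state)
# Both outcomes are equally likely; the probabilities always sum to 1.
```

The rule itself is deterministic arithmetic; the probability only enters when we use it to predict frequencies over repeated measurements, which is the inferential move described above.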
Unification may be the most difficult problem we have to solve. Considering that we are naturally tooled to be hunter-gatherers in the wilds of this planet, dealing with issues that we can see, feel, hear, smell, and taste, complex and quantum systems pose a wide variety of challenges to our observational skill sets. Our tendency toward dogmatic thinking often has us taking our perspectives, or our cognitive models, as approximations of reality. This may be one of the more difficult trials in the quest for unification, as we will probably require a new model designed for that specific purpose. The organizing aspects of nature are self-evident and probably a sufficient route toward a unifying model; however, the pervasiveness of Entropy and Extinction suggests that we may have a biased account of the base rates of what matters in the formation of complex, self-organizing systems. This could also hinder the understanding of our own creative processes. We humans likewise tend to invest in false dichotomies, even when the alternatives appear compatible under general observation.
Entropy is often thought of as the natural variation in systems. It’s also thought of as primarily disordered, and it’s most commonly attributed to thermodynamic change. The dispersal of heat in the universe has measurable influences on the state of the universe and of many systems, and this evidence is favorable to the logical aspects of the human mind. In many instances it may be considered bad form to label the disorder in a system entropic without direct evidence, or at least some methodology for measurement. This is a fair argument; however, our constant and consistent uncertainty needs to be considered as well. Where systems become so complex that human cognition, even with the aid of computational systems, cannot make direct physical connections between processes demonstrated to produce entropy in a more empirical manner and the disorder in their complex aspects, we appear to hit a wall with respect to unifying theory.
At this point, questioning our current state may produce favorable results. Is thermodynamics the most likely promoter of Entropy? Is it the only promoter? Is it the simplest explanation of Entropy across all of the scientific disciplines? Can complex systemic functions be characterized as entropic when correlated with other self-organizational, theoretical principles?
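One reason these questions are tractable at all is that “disorder” can be quantified without any reference to heat. A minimal sketch using Shannon’s information entropy, which applies one formula to any probability distribution whatsoever (the distributions below are illustrative):

```python
import math

def shannon_entropy(probs):
    """Entropy in bits: H = -sum(p * log2(p)). Zero-probability
    outcomes contribute nothing to the sum."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A maximally "disordered" system: every state equally likely.
uniform = [0.25, 0.25, 0.25, 0.25]
# A highly "ordered" system: one state dominates.
ordered = [0.97, 0.01, 0.01, 0.01]

# The uniform distribution attains the maximum entropy for four
# states (2 bits); the concentrated one has far less.
```

The thermodynamic (Gibbs) entropy has the same mathematical form over microstate probabilities, which is part of why extending the concept beyond thermodynamics is at least plausible.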
Normalization has many definitions; however, the general meaning is a preference toward a minimal level of compatibility within a system. There is not a concerning number of semantic issues with this term, though a more unified context for Normalization would likely be enough of a variable to create some confusion. The possible issues with this term appear to be contextual.
Novelty can be characterized as that which is not normal yet is relatively non-destructive. In a general systems sense, Novelty would also have low Extinction risk associated with it. There may be further ways to characterize it that could prevent confusion, and that could become evident in practice.
Extinction is often characterized as the process by which that which cannot become normalized eventually becomes non-existent. It’s the most likely scenario for the mass of disorder produced by Entropy. The explanation appears to be that Entropy is primarily arbitrary, with little to no organizing properties. Entropy has the appearance of being probabilistic; however, that may be more a perception than an objective observation. The fact that this assertion comes from contrast with the properties of Normalization suggests that perception plays a significant role. Human motivation for Normalization, rooted in evolutionary predispositions toward self-preservation, is probably producing some degree of cognitive bias.
Cross-Disciplinary Inferential Statistical Analysis:
The many scientific disciplines have been producing wide varieties of favorable results across the board for a few hundred years now. Methodologies have been refined throughout, with added promise and increased competitive advantage. Unfortunately, the various disciplines do not speak the same tongue. One of the more concerning developments in recent times is the lack of homeostatic function in our systems. This has resulted in growing concern not only over human extinction risk but also over the existential risk to all life in the biosphere. Not all of the blame for this can be laid on political prowess, however. If political systems were accepting of the data, would science have the answers they required, or would decades or even centuries of uncoordinated patchwork be the result?
For the purpose of coherent, homeostatic systems, the answer is probably cross-disciplinary inferential statistical analysis. It stands to reason that such an endeavor would result in the Normalization of systems, thus mitigating unfavorable human influence on extinction and existential risk. To produce such an effect, a General Systems Theory that correlates with each and every scientific discipline, for the purpose of coordinating them, appears to be in order.
Get used to the idea. It’s pretty much “do or die” at this point.
All technologies become obsolete over time, and in current times the longevity of technologies is increasingly short. Internet 1.0 was implemented in the 1980s and was essentially commandeered by the .coms in the early 90s. It was probably the most lucrative platform in the history of mankind. It saved many of the costs associated with overhead and created direct connections between producers and consumers. It brought educational resources, including higher education, to the masses at little to no cost. It connected people from opposing ends of the globe into virtual friendships. It sparked mass political involvement. It created new decentralized markets, with novel ways of contributing to society. And these are only a few of the favorable outcomes that we have enjoyed over the past few decades; I haven’t yet gotten into the unfavorable outcomes that have resulted from what could easily be characterized as the 8th wonder of the world.
In The Beginning:
Internet 2.0 began with a great deal of planning. All of the systems had to be re-envisioned in order to make it a suitable platform for worldwide commerce. In the development process, many mistakes were made that really could only have been detected in hindsight, because of the self-organizing nature of social systems. Predicting how a system is going to propagate over decades is extremely difficult due to social constructs, emerging technologies, scientific advances, cultural memes, etc.; such predictions are entirely improbable, yet they bear significant weight on the outcomes over decades. One of the more heated debates among the developers of Internet 2.0 was the issue of “Privacy vs Provenance.” This was a decision between the “copy/paste” nature of content sharing (which is essentially gratis) and a form of sharing that includes a tag of sorts pointing back to the original source, allowing the original creator to receive credit for their work. Each choice would of course have had advantages and disadvantages that many would feel strongly about, hence the heated debate. Jaron Lanier suggested that the wrong choice was made, even though he favored privacy in the beginning. In hindsight, it seems that a provenance protocol may have solved a number of issues; however, I doubt that it would have been nearly enough to save the second implementation of the Internet.
Tim Berners-Lee suggested that the Internet had to be a singular public space for it to function at all. He was probably right considering the initial conditions, but this too has created a large number of issues, because it produced a centralized Internet. Central hubs are obviously convenient and effective; however, they are also a single point of failure when considering the stability of a system. The issues began with interest groups jockeying for the bulk of control of the infrastructure. The competitive advantages of this are painfully obvious today to the savvy or geeky, but I will explain. The Internet is most commonly thought of as the copper wire that carries the data over distance, and this is also what is most commonly thought of as its infrastructure. That isn’t, however, the only interpretation. Many large businesses like to consider themselves, their servers, or their connections to those servers as infrastructure, and from this perspective these businesses feel that they control infrastructure. It’s not an entirely ridiculous notion: though not required in principle, they have lately been required in practice for the function of the Internet. This was, however, due to the implementation of middlemen, which is what modern ISPs are. Internet 1.0 allowed surfers to dial up servers and browse their content without an ISP. The .coms created infrastructure through man-in-the-middle business strategies, implemented with the old dial-up modems; the payoff was avoiding the long-distance and international rates of direct dial. The price difference was enormous.
Now the servers that serve the bulk of Internet traffic are owned by companies like Google, Walmart, and Amazon. These companies have a lot of control (by possession) over a significant amount of Internet traffic. The control of traffic has become a commodity. This is one of the most concerning issues with the Internet today, and the main focus of Net Neutrality. Though most are concerned about the slowing of Internet speeds or expensive tiers with higher data caps, the real concern should be over egalitarian attention. There is much twisted correlation between attention and advertising in the new business models; this is probably the notion used to justify the business models associated with social media. The value of personal data is extremely high, though most don’t get near the return on it through their usage of social media. This is implemented with license agreements that almost no one reads or even cares about, and it makes nefarious businesses extremely wealthy. It’s one of the most unethical business models I’ve ever seen; in most cases, it flirts with theft by deception. The fact that license agreements are legally binding under these conditions is just ridiculous. From a systems perspective it’s a total failure… not to mention a failure of common decency.
The false dichotomy of Private Sector vs Public Sector has its dagger in the neck of Internet 2.0 as well. Both sectors are enjoying unprecedented prosperity at the expense of the “consumer.” Both are working together to constantly patch a cobbled, mature technology while the taxpayer pays for everything. This is where the infrastructure semantic is most harmful. The US government has been paying subsidies to ISPs since 1996 for the purpose of infrastructure upgrades. If ISPs see themselves as the functional infrastructure, then they have upgraded it… right? Meanwhile, telecom traffic travels over cable from the late 70s and 100-year-old copper wire, when much, much faster fiber optic cable could be rolled out. This complete (and probably deliberate) misunderstanding is unacceptable. In the US, the most common way around government regulation is conflict of interest: ex-administrators or even ex-CEOs of prominent companies are appointed to positions in regulatory agencies. This is the case now with the FCC, whose current chairman is a former lawyer for Verizon. His position on Net Neutrality is thus no surprise.
There is one public space that has no dedicated law enforcement agency: the Internet. This isn’t just an issue for parents; it’s a global issue. The fact that nation states have such a difficult time correlating political agendas with their populace and their neighbors is a huge obstacle to regulating public behavior. This is another issue where silly semantics plays a large role. There is a lack of agreement on whether the Internet is even a space, much less a public one, because it is a virtual space. The private sector wants to be the infrastructure; however, it does not want to police the space that it has created. The government is so tied up in global politics that it is essentially ineffective, which is all the more concerning when one realizes that different nation states have different laws. The only solution that may exist is a lot of hard work in buttoning up international law, which would of course require unprecedented global diplomacy. Culture has risen to the challenge to a certain degree; however, there is an unacceptable amount of failure on a daily basis. The conversation about privacy in a public space is so oxymoronic it’s just ridiculous: though one is at home in one’s chair, or in one’s car, there should be no expectation of privacy in the public forums. This is something that needs to be hashed out, and it won’t be, because of the advantages that the confusion boasts. People are making incoherent trades, keeping the system misunderstood and unstable. A reread of the previous paragraphs will show the relevance of this argument.
The Internet as we know it cannot stand. From a systems perspective, Internet 2.0 is a system being driven into extinction. The problems with it are mounting to the point that it isn’t likely to be able to function at all in the next decade or so. I might not even give it ten years from today; with the ridiculous caveat of absolutely no change in its condition, I wouldn’t give it five. Of course change is going to happen, but probably not enough to save it. It’s just becoming fundamentally obsolete. The degree of acceptance that the powers that be are demonstrating is alarmingly low. This is because big business runs the developed world, and business doesn’t tend to look toward even the near future; business tends to consider maybe 2-5 years at most. The number of business professionals considering what they will be doing in the 2020s is probably a minuscule minority.
The good news is what the Internet really is: the wires. The Internet is already diversifying; the .net, .org, .edu, and .gov domains are indicative of natural variation. It’s completely reasonable, and probably expected, that new connections will emerge in the coming years. Mesh networks and home servers are likely to be at least tried as they become financially permissible, and competition for new solutions will become more probable as the current state becomes less financially permissible. Natural systems (including social and technological systems) self-organize, and the principles that govern such organization should be expected to apply to the global communication technologies that are coming. I don’t like to make hard predictions, but I think the 2020s are going to be an interesting time for global communications. The death of Internet 2.0 is likely to result in the birth of something really interesting.
Social transformation is under heavy influence from the collective mindset of its time. It’s likely that our current condition is significantly influenced by the mindset that existed during the dawn of civilization. That mindset likely functioned as a set of axioms for the distributed, discrete solutions that solved the problems that arose; a top-down influence that set the tone for the next several thousand years of socioeconomics.
* What’s commonly in the collective consciousness
In all human organizations, endeavors begin with a consensus concerning a model of reality that results in a roughly agreed-upon world view. Individuals share notes on what they believe to be real, and socially unifying behaviors result in significant agreement via bonding impulses. It’s this general belief about the world, and humanity’s place in it, that is the basis for models of possible socioeconomic change. This is an initial condition for public awareness.
The details of socioeconomic change are often worked out in the implementation. This is clearly necessary, as the number of issues that arise is likely to overwhelm a central organization; the sheer mass of issues is the product of natural distribution. This probably suggests that distributed organization is the more adequate approach for solutions.
The two previous paragraphs briefly describe two proposed aspects of society that appear to function in a feedback loop toward organizational ends. It’s feedback between individuals and the collective that produces a society. The experience of the individual is data that is shared with the collective, either to enforce or to amend the general world view, or to enforce or to amend agreement on possible solutions.
* What’s not commonly in the collective consciousness
Though humans are the most conscious and sentient beings that we humans are aware of, almost all of our behaviors are unconscious. This includes all of our autonomic behaviors, of course, but it also includes our subconscious impulses. The latter have more of an effect on our socialization than most tend to appreciate. For instance, “conscious” decisions are often affected by things like mood and neurological state. Neurology divides the nervous system into two modal categories: the Sympathetic Nervous System, responsible for the “fight or flight” mode that bolsters outward defense of the biology against predation and the like, and the Parasympathetic Nervous System, responsible for development, growth, fighting illness, healing, etc. Both of these states influence our decisions, as decisions are made in the moment. Decisions are generally based as much on the environmental stimuli of the time as they are on socially supported world views.
Concerns about fear mongering exist because it’s known that the Sympathetic Nervous System can influence decision making, even in large crowds of humans. We humans are predisposed to behave in a manner appropriate to the initial conditions; this is in essence what “good behavior” is. It is not exactly what happens in reality, however. It’s the perception of initial conditions that produces behavior, and where the perception is not indicative of the actual initial condition, inappropriate behaviors are more likely. This can be (and is) used as an axiom for a Pavlov strategy. Creating environmental stimuli that are likely to result in defensive responses is a common tool in political punditry, and it is also a tool used in Social Engineering. The opposing stimuli also have significant influence in decision making: where awareness of possible red flags exists, those flags can be mitigated, and interactions that are risky or inappropriate can be made to seem perfectly normal and appropriate. For instance, a power company once hired a group of social engineers to test the security of its facility. The social engineers entered the facility posing as technicians and went essentially undetected; contracted technicians just don’t look out of place in such a technical environment. In this instance, just looking the part was half the battle.
Manipulation of the masses is probably one of the earliest crowd technologies, and it is the main focus of Noam Chomsky’s “Manufacturing Consent.” Unfortunately, genuine consent depends on a perception of initial conditions that is as accurate as practical; otherwise, consent can be directed by creating the perception of a type of initial condition appropriate for the desired type of behavior. This is in essence fabricated signal noise that promotes acceptance of a Pavlov strategy. The understanding of how humans respond to environmental stimuli is too often exploited in the interest of personal gain for an individual or a small group. The evidence of this is overwhelmingly pervasive.
* Epistemology and the lack thereof
The natural distribution of interest and aptitude in human populations is diverse and specialized. It produces a large number of individuals who are adept at narrowly defined skill sets and, in reality, a very small number of broad thinkers; the vast majority are specialists. This may be because the skills of specialists are required for broad thinkers to test their models, and probably because of human neurological constraints. The issue is addressed naturally by collective or distributed intelligence: many well-trained hands can make large, complex tasks light by coordinating their efforts and skills.
Considering the arguments in the previous bullet point, inconsistent or misrepresented data can (and does) have an unfavorable effect on the general world view, which translates to an unfavorable impact on system modeling. Systemic issues are thus likely to follow. This can be (and is) a huge hurdle for systemic change when the systemic issues become the focus of social polarity, which prevents social consensus even where epistemological consensus exists. Such instances are probably, and historically, not long-lived; however, in modern times they tend to produce increasing amounts of risk. This appears to be a product of both the general human inclination toward Trusting Tit For Tat strategies and the much less numerous but still pervasive Pavlov strategies.
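The two strategies named above come from iterated game theory and can be stated precisely. A minimal sketch of an iterated Prisoner's Dilemma pitting Tit For Tat (copy the opponent's last move) against Pavlov (win-stay, lose-shift; equivalently, cooperate only if both players made the same move last round); the payoff values and round count are the standard textbook choices, used here for illustration:

```python
# Standard Prisoner's Dilemma payoffs for (my_move, opponent_move).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(my_hist, opp_hist):
    """Cooperate first, then copy the opponent's last move."""
    return opp_hist[-1] if opp_hist else 'C'

def pavlov(my_hist, opp_hist):
    """Win-stay, lose-shift: cooperate first, then cooperate only
    if both players made the same move last round."""
    if not my_hist:
        return 'C'
    return 'C' if my_hist[-1] == opp_hist[-1] else 'D'

def play(strategy_a, strategy_b, rounds):
    """Run an iterated match and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Both strategies open with cooperation and then mirror a cooperative
# partner, so this pairing settles into stable mutual cooperation.
```

Against a cooperative partner the two strategies are indistinguishable; their characters diverge only when defection or noise enters the history, which is why mixtures of the two in a population can produce the polarity dynamics described above.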
* Economy and the lack thereof
Where maximally accurate perceptions of initial conditions are present, economy is essentially maximized, because the collective or distributed intelligence is in its maximally effective state for producing economy. This suggests that the maximally, economically effective world view is the maximally accurate one. It isn’t, however, as simple as circular logic: the support lies in how well the outcomes meet the expectations. If a model fails to produce an expected outcome, there is probably an issue with the model.
All too often, economic issues are addressed with inputs that merely mitigate systemic issues. This is in essence creating issues and then leveraging resources toward patching the symptoms rather than solving the initial issue, which is of course not economically viable. It is probably a product of the two previous bullet points.
It’s difficult to argue against social dogma as an initial condition, as the evidence for it as a major contributor to social organization is clearly… well… evident. Whether it is favorable or not is much more debatable: where it produces false positives for personal and civil liberty, it also produces false positives for personal and social economy.
With the increases in population that almost always accompany rises in the standard of living, the risks become more concerning. The more humans that exist, the greater an environmental influence we become. Effective extrapolation of this data is probably a more difficult task than our current level of sophistication can muster; however, the fact remains that we are more a part of than apart from our environment. The social dogma doesn’t generally appreciate this, which is concerning considering that the risks can be not only extinction risks but also existential risks.
Social dogma is a part of social organization, though not necessarily the most influential part. Social organization is a more complex process than we tend to be aware of; we are always forgetting something when trying to make predictions. Social systems are chaotic systems that defy our attempts to extrapolate with accuracy; there are just too many variables for the human mind to account for. At this point, computer mediation of empirical data may be our most viable ally in the quest to organize and economize.
“The major source of unhappiness is that we are incoherent; and therefore producing results that we don’t really want; and then trying to overcome them; while we keep on producing them” ~ David Bohm
The US is the most medicated country on the planet. It is also known for large numbers of mass murders by single actors with unknown motivations, a phenomenon that has come about post-1970. It’s difficult to say exactly what the cause is; however, that may not be the best way to consider it. Causation isn’t the most accurate description of these interactions; feedback loops are much more accurate. Taking medications for emotional issues may also not be the best way to address every issue, as civilized life isn’t always kind to the human psyche. The reasoning behind this post is the political discourse, the scientific responses, the bureaucratic responses, and the public opinion concerning mass murder in the US; I find them all lacking in the rigor required to properly address the issue. I’m also concerned that the problem is likely to become more prevalent in the coming years from the increase in financial crises alone. Disillusionment from other factors could make it even worse.
The current climate:
The left tends to blame the guns rather than the actors or the environment. This is not a scientific approach, as there is no correlation between gun ownership and the steep increase in gun violence; as a matter of fact, the two trends move in opposite directions.
The right tends to blame the actors themselves, or perhaps mental illness. This is probably a better approach, but it isn’t scientific either. Acts of violence are not more common among the mentally ill, and mental illness would not correlate with the steep, recent incline.
The drug companies are marketing products directly to consumers. This is a bit of a paradox: the right to be involved in one’s own treatment seems ethical; however, self-medication is always a bad idea, and this borders on self-medication. “Ask your doctor about (fill in the blank)” is dubious at best. Being advised by a commercial rather than a consultation with a medical or mental health professional is far from ideal.
The government regulatory commissions are so involved with politics and bureaucracy that the conversation doesn’t even exist, or is completely hand-waved. This isn’t surprising, as a significant number of the bureaucrats are former administrators or legal experts of drug companies. The situation is rife with conflicts of interest.
The mental health professionals quote statistics without considering the statistical significance of the actors in question being almost unanimously mentally ill and on medications, and without questioning the lack of correlation in the data itself. This is just bad epistemology. The bureaucrats and mental health professionals possess the data needed to sort this out; however, they don’t seem to consider it themselves. This requires a bit of qualification, of course.
Before the 1970s there wasn’t even one mass murder per year on average; in the years when one did occur, it was a single incident. From the 1970s to the present, the numbers have grown far beyond what population increase could account for.
This is also the era in which medication became the “go to” solution for emotional discontent, along with emotional and psychological issues. This is evident in the distribution of medicated individuals, which exceeds that of the rest of the world by a large margin: the US is 5% of the world’s population, yet we consume 75% of the world’s prescription drugs.
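Taking those two shares at face value (the 5% and 75% figures are this post’s claims, not re-verified here), the implied per-capita disparity can be worked out directly:

```python
# Back-of-the-envelope check using the shares quoted above (claimed figures).
us_pop_share = 0.05    # US share of world population
us_drug_share = 0.75   # US share of world prescription drug consumption

# Per-capita consumption relative to the world average:
us_per_capita = us_drug_share / us_pop_share                # 15x world average
row_per_capita = (1 - us_drug_share) / (1 - us_pop_share)   # rest of world
ratio = us_per_capita / row_per_capita                      # US vs rest of world

print(f"US per capita: {us_per_capita:.0f}x world average")
print(f"US vs rest of world: {ratio:.0f}x")
```

If the shares hold, the average American consumes roughly 15 times the world-average amount per person, or about 57 times the rest-of-world average.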
Everyday issues in modern society can result in emotional issues. Poverty and social inequality, for instance, can result in mental distress. This matters because the US economy is probably in decline.
The red pill:
Some of the data appears to be disingenuous.
The number of people in the US being treated for some type of mental illness, disorder or condition doesn’t correlate with that of other countries. This is quite alarming, considering that almost all of the US’s population has roots in other countries. It appears to be more a product of advertising and the vast income inequality in the US.
The proportion of people with mental illness among mass murder instances is surprisingly high, considering that mental illness isn’t a prerequisite for such violent behavior. Developmental circumstances are a much more likely culprit, and they probably aren’t something that could be addressed with prescriptions.
This is obviously a case of misrepresentation of the “facts”, probably due to not addressing the issue at hand in a scientific way. Few wish to take the time to actually test the claims and make some sort of coherent statement on the subject. The response tends to be rooted in political prowess, or the issue is just denied altogether. Either way, the issue is likely to worsen as the environmental pressures worsen. A saturated health care market in the US is one of the main factors in the distribution of medications.
Here’s some food for thought concerning crime and punishment.
Here is some consideration of violent crime and social constructs.
I think that David Brin was right in suggesting that the Huxleyan implications of our current trajectory are more concerning than the Orwellian ones.
Attempting to assess the probable outcomes of UBI (Universal Basic Income) would seem to require a general systems analysis. It’s understood in sociology and social psychology that social systems are extremely complex and chaotic. This significantly decreases confidence in the predictive value of discrete or nuanced instances; however, a more generalized, probabilistic risk assessment can still be achieved. This can be done by considering the social system as a system, and a chaotic one at that. Though a precise prediction of the resulting state of the system isn’t likely, many of the more important risk factors can be considered. Treating UBI as an input to the social system, with regard to the initial conditions, is probably an effective way of considering the probable risk factors. An understanding of the relevant disciplines can produce predictive value in various aspects of the social system. This is the value of statistical mechanics: the morphology of the system is the basis of the understanding. It’s also the natural path the outcomes are likely to take.
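As an illustrative sketch (the logistic map is a toy stand-in for a chaotic system, not a model of society), the two properties this argument leans on can be demonstrated in a few lines: discrete trajectories are unpredictable, while aggregate statistics remain stable.

```python
# Logistic map at r = 4: a textbook chaotic system.
def step(x, r=4.0):
    return r * x * (1.0 - x)

# 1. Discrete prediction fails: two trajectories starting 1e-6 apart
#    diverge until they are effectively unrelated.
a, b = 0.2, 0.2 + 1e-6
gap = 0.0
for _ in range(60):
    a, b = step(a), step(b)
    gap = max(gap, abs(a - b))
print(f"max divergence: {gap:.3f}")

# 2. Aggregate statistics stay stable: the long-run mean of the
#    trajectory converges regardless of the tiny initial difference.
x, total, n = 0.2, 0.0, 100_000
for _ in range(n):
    x = step(x)
    total += x
print(f"long-run mean: {total / n:.3f}")  # close to 0.5
```

The point is the same one made above: you cannot predict the discrete state, but a probabilistic characterization of the system is still available.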
The bulk of the issues that we face today are likely due to a lack of understanding of the morphology of human presence on Earth. This has improved along with the conditions; however, long-term challenges still exist. This is probably the result of solving issues as they come, without a general analysis of the system itself. The more time goes by, the less excusable and the more dangerous this becomes. Taking UBI as a solution to a singular issue without a general survey of the system, though explainable by conditioning, is not justifiable by our current understanding of natural systems. Yet this is what appears to be occurring in almost all instances.
The more influential aspects of human society in general are probably the best candidates for comparison and contrast with probable outcomes. These would likely include, but may not be limited to, Pavlovian conditioning, social norms, evolutionary predispositions, emerging technologies, etc. These would represent the state of the system, and the initial conditions to which UBI would be the input.
For the analysis, I’ll be using the methodology presented in this article.
One of the likely causes of the lack of general analysis in creating solutions to social issues is conditioning to the existing state of the system. This appears to be the result of heuristic efficiencies in the processing of thought, whose value lies in normative environmental influence. This can become an issue where masses of artifices obstruct the environment; in that condition, the artifices can take the place of normative influence. General analysis is not innate human behavior. The heuristic behaviors of humans are so well understood that certain types of behaviors can be predicted under certain types of initial conditions. This is the principle of “Manufacturing Consent”. The earlier crowd technologies revolved around promoting desired types of behaviors in masses of people by creating a perception of initial conditions for which the desired type of behavior would be appropriate. The population would then probabilistically behave appropriately to the perception, producing the desired types of behaviors. Over time, specific types of stimuli have become habituated. The two-party system keeps populations polarized in a “divide and conquer” capacity due to habituated behavior patterns. This is, however, only one example.
The current conditioning of societies is largely influenced by the industrial, 5/40 job market. The apparent, impending end of this 100-year-old social construct is already displaying characteristics of Pavlovian conditioning. The expectation of receiving funds for sustenance from a centralized institution has been so thoroughly habituated that other solutions aren’t even on the radar. This is probably to be expected.
A more generalized account of the conditions might suggest that automation could become decentralized. This will be supported later on.
There are many interpretations of what is “fit” in systems. The accepted general assumption about fitness, however, is that cooperative self-organization mitigates extinction risk to a significant degree. Being cooperative in essence means being generally useful, and humans have innate predispositions toward being useful as a consequence. The human motivational systems promote usefulness where pathology isn’t present. One could easily predict that humans would still behave in ways that are useful even if the 5/40 job market were to evaporate and income were to be publicly funded.
What exactly would the common perception of this condition likely be?
The previous state of the system would likely influence the perception of the populace when surveying the state with the UBI input. Given that the population would likely still be doing useful tasks at home while the central government provided income, the likely assessment might be that the work force has been taken over by the central government and that egalitarian compensation has replaced pay grades.
Automation isn’t just affecting the job market. It’s also affecting society in a decentralized manner that is producing interesting communities like the maker community, open source, first-wave hackers, citizen scientists, self-publishing, home networks, etc. This is being facilitated by automating technologies being passed down to the consumer; no one needs a printing press with affordable personal computers at hand. This is probably going to be very influential among large numbers of people with 100+ I.Q.s and plenty of free time. The pressure to be useful in the immediate environment is likely to produce home manufacturing of a wide variety of goods, facilitated by existing technologies. The obvious benefit is economizing with respect to the UBI funding. This might also affect the adoption of “off grid” technologies like solar and wind power, mesh networking, art culture, research and development, etc.
Artificial Intelligence may also be impacted by such a shift in the socioeconomic system. With so many people producing so much at such low cost, the concern of becoming less relevant might produce a proposal not to invest in AGI. AGI development is of course going to happen anyway; however, the likelihood of bifurcation by those who are more concerned could greatly increase. If the central government decides that AGI is a security risk, this could become a messy prospect. That may seem unlikely; however, the current condition is one of implemented control measures, which is the basis of all governing structures to date.
UBI could solve a large number of our current issues; however, the solutions are likely to be accompanied by dangerous by-products as well. Many balances might dissolve along with such a socialized solution. People are likely to solve a lot of the issues that arise by innate behavior alone; however, centuries of political conditioning would be a factor to be aware of.
UBI as a solution doesn’t seem to be sustainable, as relative self-sufficiency seems somewhat probable as a result. Bifurcation has historically played a large role in fundamental social change, and there really is no reason to think that the results of technological advancement would be dissimilar. It seems very probable that one could expect an instance of diversification the likes of which no one alive has seen. This becomes a problem in a society that is systematically polarized, psychologically manipulated and financially exploited. This is not a pessimistic view of society. These are known facts about society that need to be considered by those who have the interest and aptitude… for our own sake.
The notion that governments can be trusted to take care of citizens is so naive that I’m not even going to consider it. I’m also not going to consider the notion that humans would be “on vacation” while everything comes to them on a whim; that’s the epitome of Utopian nonsense. Nor am I going to consider AGI providing for humans, as that is pure speculation with no coherent basis whatsoever. Natural systems tend to behave in specific ways, and that account doesn’t include them.
My own concerns about UBI revolve around the general lack of acceptance that exists in many social systems. Natural systems diversify, and lack of acceptance is a poor environment for diversification.
There are many arguments for autonomy in humans that fall short of producing confidence in it. The common perception is that of autonomous agency; however, reduction and testing tend to suggest otherwise. Though human interaction and behavior are chaotic, and thus difficult to predict in discrete instances, more general predispositions are trivial to demonstrate. The notions of determinism and causation both appear to be incoherent upon further examination as well. A more holistic approach is probably more likely to be successful.
The common perception is that of the individual being in the driver’s seat of a biological organism. The cognitive constraints that we all share, however, tend to produce truncated perceptions. These perceptions are produced by the limited number of interactions that we are attending to. We often take credit for learned behaviors, evolutionary predispositions, social heuristics, family traditions, impulses, etc.
Human interaction, more carefully considered, appears to be feedback loops with various environmental stimuli. It’s also subject to normative pressures. Though there are degrees of freedom, consequences are a constant concern. Almost all human behavior is a result of impulse. Though the proportion is likely over 90%, it’s difficult to say exactly how much, because even cognitive responses become habituated and thus impulsive. Habituating a generally successful cognitive response is only rational. What one has learned from experience is too often thought of as an autonomous response; however, it appears to be merely a deprecation of less successful thoughts and behaviors, and the aforementioned habituation of more successful ones. The act of thinking before responding is merely an economy of this process.
The success of game-theoretic understanding has uncovered some interesting arguments against determinism. The presence of cheating and signal noise are chaotic components of the system. Though reducible after the fact, discrete prediction isn’t likely. Since the ability to reduce the instances exists, cheating and signal noise are not likely candidates for autonomy either; the cheating and/or signal noise are themselves products of environmental stimuli as well.
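A minimal sketch of the point, using an iterated prisoner’s dilemma with one-sided signal noise (the payoff values and the 2% noise rate are illustrative assumptions, not from any particular study): any single round is unpredictable in advance, yet every defection cascade is fully reducible after the fact to a recorded noise event, so the noise is itself a product of the environment rather than autonomy.

```python
import random

# Two tit-for-tat players in an iterated prisoner's dilemma.
# A small amount of signal noise occasionally flips player A's move.
# Payoff values are standard illustrative choices.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(rounds=1000, noise=0.02, seed=0):
    rng = random.Random(seed)
    a_last = b_last = 'C'
    score_a = score_b = 0
    flips = []  # audit log: every noise event is recorded
    for t in range(rounds):
        a, b = b_last, a_last  # tit-for-tat: each copies the other's last move
        if rng.random() < noise:
            a = 'D' if a == 'C' else 'C'  # noise corrupts A's signal
            flips.append(t)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        a_last, b_last = a, b
    return score_a, score_b, flips

sa, sb, flips = play()
print(sa, sb, len(flips))
```

Without noise the pair cooperates every round (3 points each, every round); with noise, the discrete sequence of moves depends on exactly when the flips land, but replaying with the same recorded noise reproduces it exactly. That is the “reducible after the fact” property.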
We tend to truncate the evidence in reduction as well. We try to see causal factors in the interactions, though the evidence suggests that all interactions are feedback loops. Our cognitive constraints are the likely reason for this, though they too are economic products of the environment. In order to be capable of reducing systems and interactions, we truncate them into hierarchies. These hierarchies are products of human cognition and not so much an accurate depiction of nature. The Bohmian view holds up to scrutiny much better. General Systems Theory holds up well too, as it doesn’t focus on hierarchies; rather, it focuses on prevalent systemic behaviors. These behaviors scale in our hierarchical accounts.
I’m having more success with General Systems Theory and Bohm’s Implicate Order than I could have anticipated. Though hierarchies are a part of my understanding of natural systems, the reality that nature is not in essence composed of hierarchies creates an interface between the two views. I now think of systems as a fractal froth of discrete components, with overlapping spheres of influence. None are causal or responsive, but interactive and cooperative, or at risk of extinction. Biological systems are proving to be subject to this as well… even humans. This is the understanding that I’m gaining from the sciences. It’s also allowing me to consider systems across a wide variety of disciplines, as the axioms provided by General Systems Theory are producing results that are expected by the various disciplines. Whoda’ thunk it? General Systems Theory appears to be a general systems theory.