Archive | July 2017




Endeavoring to model a social system that serves the purposes of all of its components in a synergistic, and thus symbiotic, manner should be the focus of anyone who wishes to produce a successful, thriving society. Though an initial model is unlikely to capture all of the subtleties that emerge in practical application, its basic principles are likely to have a great deal of influence on the finished product. The model is a framework that enforces the approach, which in turn guides what solutions are sought along the way. Though it is the individual solutions, which aren't necessarily intuitive, that enable practical application, the top-down influence of the initial model will shape the system's synergistic function, since the model is consulted when choosing discrete solutions. The focus of this article is to present an example of an initial model for such a synergistic social system.


Polyocracy is a principle predicated on the ubiquitous sharing of responsibility for the function of the social system. This requires the understanding that every establishment is essential to its success, which suggests a common interest in the synergy that is likely to produce sustainable success. Disenfranchisement of any essential component would thus likely have an unfavorable overall result, one that would eventually, as it has historically, end in the general failure of the system. Polyocracy is, in essence, mutual respect for the vitality of each and every societal structure.

Polyocracy on a global scale serves to promote diplomacy with other forms of governance. Historically, nationalization has produced a host of unfavorable results, expressed in short-term financial patchwork that eventually culminated in financial crisis and social unrest. Since the eventuality is similar with either approach, diplomacy is the logical solution.



Education may be the most important aspect of social contribution. Whether the contribution is hard scientific application or spiritual guidance, knowledge of the practice, and of how it can be a general service or at least generally non-destructive, is essential in polyocratic policy. A balance between personal liberty and social responsibility can only be achieved if the understanding that everyone is necessary is promoted through curriculum. Historically, a number of social movements, based on a wide variety of beliefs, have divided societies in ways detrimental to their overall function. These include political parties, religious divisions, class systems, exploitation, gender roles, etc. Education is likely the most valuable tool for sorting out the issues that a developing society has to face. This suggests that education be fully promoted and open-access at all levels, which would maximize the potential for an educated populace.

The Model:


A secure society entails a large number of contexts that would require a large number of solutions to produce. Much of it cannot be predicted in foresight; however, our current hindsight can be a useful tool in modeling an initial foundation.

Global security is an extremely complex subject that may best be approached first through a combination of judicial and diplomatic practices. An attempt at diplomacy is probably the most reasonable initial course of action, with adjudication as part of the process for the purpose of promoting fairness. Systems are thus in place in the event that diplomacy fails. Defense would, of course, be part of the security structure.

Domestic security is also very complex, though a little more manageable with respect to the legalities, because the laws enforced by the Judiciary are always either applicable or candidates for change. These outcomes are made possible by diplomatic and/or judicial interaction. The interface for the Judiciary would, of course, be the courts and police forces; the interface for the Diplomats would be the public representatives.


Legal representation might best be served by a combination of the practical application of adjudication and the epistocratic application of Instrumentalism. The ability to enforce laws that serve society depends a great deal on the coherence of the laws themselves. An epistocratic approach to lawmaking might minimize instances in which important, or even critical, systemic issues go unrepresented. The interface for Instrumentalism would, of course, be academia.


Public policy would best be determined through interaction between the public and public representatives, and might be served by a combination of diplomacy and proceduralism. It stands to reason that the most favorable outcome for the general populace is probably the one that functions most favorably in practice. The details, of course, would be hashed out through diplomacy.


Economic influence has been a point of contention, both globally and domestically, for all civilizations throughout known history. The most effective way to approach global economics may be to produce a domestic economy with unprecedented success. A combination of Instrumentalism and Proceduralism may produce much more favorable results in practice, as the combination of epistocratic modeling and real-world proceduralist testing has historically resulted in general advancement.


Maximally efficient and effective function of all of the components of a social system is likely to produce a thriving society. Historically, thriving societies have enjoyed more evenly distributed wealth and have tended to promote civil rights. Where there is less concern over stability, there tends to be much more cooperation, and the reverse also seems likely: added cooperation probably results in stability. Previous models don't account for this. The Old World view is based on an expectation of competitiveness that evaporates when society is growing rapidly and returns at the growth maximum. Cooperation is expected when the system is functioning relatively well; therefore a model that promotes function, rather than one that promotes competition, is in order for the general good.


Towards a Deeper Understanding of Disorder



Disorder has many definitions; even scientific disciplines define it in varied ways. The clinical disciplines, for instance, characterize it by one's diminished ability to function. From a systems perspective this seems a little truncated, because it doesn't coordinate with sociological questioning of the social order, much less with the dogma that must exist in even our most unified models.

The issue appears to be a lack of congruency between the hierarchical models that we construct for understanding natural systems. Implementing a more unified understanding of systems seems to require central dogmas that scale well with a general systems model. Even so, Entropy, which produces far more disorder than it ever will order, is by far the natural norm. Disorder is, in essence, the rule; order is the exception.

The impulse toward order appears to be rooted in self-preservation. This makes the view a candidate for clinical justification, as the interest of the patient is prioritized; it does not, however, produce the most viable model for understanding.

My efforts toward Naturalized Socioeconomics are encountering these kinds of semantic obstacles with almost all of the scientific disciplines. Even in Physical Science, there are instances where Entropy in a system stands to reason but cannot be demonstrated empirically due to complexity. Does this situation warrant dismissal of logic that appears to be self-evident? Are we to assume that natural systems are not generally compatible because we have no accepted model for unification? Is our description of Entropy truncated in thermodynamic representations? Is a new general systems axiom required to solve this issue? It would appear that a Bohmian perspective would require congruence between the disciplines in order to provide a model that allows the rigor favored in general systems analysis.

Hierarchical Reasoning (and the lack thereof):

*Classical Mechanics:

The human brain can only parse about a dozen pieces of information at a time. This cognitive constraint of human neurology probably produced the reductive aspects of music. Songs tend to be broken down into pieces like intro, verse, chorus, bridge, and outro. An octave is, of course, composed of eight notes, though many wavelengths of sound lie between each note. These wavelengths could probably be accessed and utilized; however, many instruments are designed to play only the traditional notes and chords. This isn't a bad metaphor for the way that humans think: we do the best that we can with the instrument that we have. On that same "note," however, our neurological resources can only reconcile a small degree of complexity. Hierarchies appear to be our cognitive solution to this issue: we search for more generalized, ordered patterns in order to analyze more complex systems.
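The point about pitches lying between the traditional notes can be made concrete with a small sketch. It assumes equal temperament (12 semitones per octave, with A4 = 440 Hz), which is one common tuning rather than the only one; the function names are illustrative.

```python
# Equal-temperament sketch: each octave doubles in frequency and is divided
# into 12 semitones, so the pitch n semitones above a reference is
# f(n) = f_ref * 2**(n / 12). Between any two adjacent semitones lies a
# continuum of microtonal pitches that most traditional instruments
# are not built to play.

A4 = 440.0  # reference pitch in Hz (concert A)

def semitone(n, ref=A4):
    """Frequency n semitones above (or below, if negative) the reference."""
    return ref * 2 ** (n / 12)

def microtones(n, steps, ref=A4):
    """Sample evenly spaced pitches from semitone n up to semitone n + 1."""
    return [ref * 2 ** ((n + k / steps) / 12) for k in range(steps + 1)]

octave_up = semitone(12)   # one octave above A4: 880 Hz
between = microtones(0, 4)  # pitches between A4 and the next semitone
```

An instrument with fixed frets or keys can only land on `semitone(n)` values; the `microtones` list is a reminder of everything the instrument's design filters out.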

*Quantum Mechanics:

We humans also do this with quantum systems. Measuring a particle tends to change its state, because we have no way of observing elementary particles without directly interacting with them. Since interaction influences the particle's state, no technology for passive observation exists to date. This means we must find (and have found) methods for approximating the state of a particle without measuring it directly. The method that we employ influences our perspective, and thus our models. We now see Quantum Mechanics as probabilistic because we use probability to infer within it; this doesn't mean, however, that particles themselves behave probabilistically. Once again, we're doing the best that we can with the instruments that we have.
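The inference-without-direct-observation idea can be sketched as a toy simulation. This is not a real quantum model; it only illustrates the logic: each measurement returns a single 0 or 1 and tells us nothing directly about the underlying parameter, so the parameter is estimated statistically over many identically prepared trials. All names and numbers here are illustrative.

```python
import random

def measure(p1):
    """One destructive measurement: reads 1 with underlying probability p1."""
    return 1 if random.random() < p1 else 0

def estimate_p1(p1, trials=100_000):
    """Approximate the unobservable parameter from repeated measurements.

    We never see p1 itself, only the 0/1 outcomes; the estimate is the
    fraction of trials that read 1.
    """
    return sum(measure(p1) for _ in range(trials)) / trials

random.seed(0)               # fixed seed so the sketch is reproducible
est = estimate_p1(0.3)       # converges toward 0.3 as trials grow
```

The probabilistic flavor lives entirely in the method of inference: the estimator is built from probability, whether or not the underlying system "is" probabilistic, which is the essay's point.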


Unification may be the most difficult problem we have to solve. Considering that we are naturally tooled to be hunter-gatherers in the wilds of this planet, dealing with issues that we can see, feel, hear, smell, and taste, complex and quantum systems present a wide variety of challenges to our observational skill sets. Our tendency toward dogmatic thinking often has us taking our perspectives, or our cognitive models, as an approximation of reality. This may be one of the more difficult trials in the quest for unification, as we will probably require a new model designed for that specific purpose. The organizing aspects of nature are self-evident and probably a sufficient route toward a unifying model; however, the pervasiveness of Entropy and Extinction suggests that we may have a biased account of the base rates of what matters in the formation of complex, self-organizing systems. This could also hinder the understanding of our own creative processes. We humans also have a tendency to invest in false dichotomies, even when they appear to be compatible through general observation.



Entropy is often thought of as the natural variation in systems, as primarily disordered, and as most commonly attributable to thermodynamic change. The dispersal of heat in the universe has measurable influences on the state of the universe and of many systems, and this evidence is favorable to the logical aspects of the human mind. In many instances it may be considered bad form to characterize the disorder in a system as entropic without direct evidence, or at least some methodology for measurement. This is a fair argument; however, our constant and consistent uncertainty needs to be considered as well. Where systems become so complex that human cognition, even aided by computational systems, cannot make direct physical connections between processes demonstrated empirically to produce entropy and the disorder in their complex aspects, we appear to hit a wall with respect to unifying theory.

At this point, questioning our current state may produce favorable results. Is Thermodynamics the most likely promoter of Entropy? Is it the only promoter? Is it the simplest explanation of Entropy across all of the scientific disciplines? Can complex systemic functions be characterized as entropic when correlated with other self-organizational, theoretical principles?
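One existing, non-thermodynamic formalization of disorder is Shannon's information entropy, which can be sketched briefly. It is offered only as an illustration that entropy already has a life outside thermodynamics; the essay's broader notion of Entropy is not limited to this definition.

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero p.

    A fully ordered distribution (all probability on one outcome) has
    zero entropy; a uniform distribution is maximally disordered.
    """
    return -sum(p * log2(p) for p in probs if p > 0)

ordered = shannon_entropy([1.0, 0.0, 0.0, 0.0])  # 0.0 bits: no disorder
disordered = shannon_entropy([0.25] * 4)         # 2.0 bits: maximal for 4 outcomes
```

That the same functional form appears in both statistical mechanics and information theory is one reason the question "Is Thermodynamics the only promoter of Entropy?" is worth asking at all.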


Normalization has many definitions; however, the general meaning is a preference toward a minimal level of compatibility within a system. There are not a concerning number of semantic issues with this term, though a more unified context for Normalization would likely be enough of a variable to create some confusion. The possible issues with this term appear to be contextual.


Novelty can be characterized as that which is not normal, yet is relatively non-destructive. In a general systems sense, Novelty would also carry low Extinction risk. There may be further ways to characterize it that would prevent confusion, and these could become evident in practice.


Extinction is often characterized as the process by which that which cannot become normalized, being unsustainable, eventually becomes non-existent. It is the most likely scenario for the mass of disorder produced by Entropy, the explanation being that Entropy is primarily arbitrary, with little to no organizing properties. Entropy has the appearance of being probabilistic; however, that may be more a perception than an objective observation. The fact that the assertion comes from contrast with the properties of Normalization suggests that perception plays a significant role. Human motivation for Normalization, due to evolutionary predispositions toward self-preservation, probably results in some degree of cognitive bias.

Cross-Disciplinary, Inferential, Statistical Analysis:

The many scientific disciplines have been producing a wide variety of favorable results across the board for a few hundred years now. Methodologies have been refined throughout, with added promise and increased competitive advantage. Unfortunately, the various disciplines do not speak the same tongue. One of the more concerning developments in recent times is the lack of homeostatic function in our systems, which has resulted in growing concern not only over human extinction risk but over the existential risk to all life in the biosphere. Not all of the blame for this can be pawned off on political prowess, however. If political systems were accepting of the data, would science have the answers they required, or would decades, or even centuries, of uncoordinated patchwork be the result?

For the purpose of coherent, homeostatic systems, the answer is probably cross-disciplinary, inferential, statistical analysis. It stands to reason that such an endeavor would result in the Normalization of systems, thus mitigating unfavorable human influence on extinction and existential risk. To produce such an effect, a General Systems Theory that correlates with each and every scientific discipline, for the purpose of coordinating them, appears to be in order.
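At its most minimal, cross-disciplinary inferential analysis means testing whether indicator series from different disciplines co-vary. The sketch below uses a Pearson correlation as one simple such statistic; the two series are entirely hypothetical stand-ins, not real data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical indicator series from two different disciplines, e.g. an
# ecological measure and an economic one, sampled over the same periods.
eco = [1.0, 2.1, 2.9, 4.2, 5.1]
econ = [10.0, 12.2, 13.8, 16.1, 18.0]

r = pearson(eco, econ)  # near 1.0 for strongly co-varying series
```

Correlation is, of course, only the first step; a General Systems Theory of the kind described above would need causal and mechanistic models layered on top, but it cannot get off the ground without this kind of shared statistical footing.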

Personal Note:

Get used to the idea. It’s pretty much “do or die” at this point.

The Obsolescence of Internet 2.0


Opening Remarks:

All technologies become obsolete over time, and in current times the longevity of technologies is increasingly short. Internet 1.0 was implemented in the 1980s and was essentially commandeered by the .coms in the early 90s. It was probably the most lucrative platform in the history of mankind. It saved many of the costs associated with overhead and created direct connections between producers and consumers. It brought educational resources, including higher education, to the masses at little to no cost. It connected people from opposite ends of the globe in virtual friendships. It sparked mass political involvement. It created new decentralized markets, with novel ways of contributing to society. These are only a few of the favorable outcomes that we have enjoyed over the past few decades; I haven't yet gotten into the unfavorable outcomes that have resulted from what could easily be characterized as the 8th wonder of the world.

In The Beginning:

Internet 2.0 began with a great deal of planning. All of the systems had to be re-envisioned in order to make it a suitable platform for worldwide commerce. In the development process, many mistakes were made that really could only have been detected in hindsight, because of the self-organizing nature of social systems. Predicting how a system is going to propagate over decades is extremely difficult due to social constructs, emerging technologies, scientific advances, cultural memes, etc. Such predictions are entirely improbable, yet they bear significant weight on the outcomes over decades. One of the more heated debates among the developers of Internet 2.0 was the issue of "Privacy vs. Provenance". This was a decision between the "copy/paste" nature of content sharing (which is essentially gratis) and a form of sharing that includes a tag of sorts pointing back to the original source, allowing the original creator to receive credit for their work. Each choice would, of course, have had advantages and disadvantages that many would feel strongly about, hence the heated debate. Jaron Lanier suggested that the wrong choice was made, even though he favored privacy in the beginning. In hindsight, it seems that a provenance protocol may have solved a number of issues; however, I doubt that it would have been nearly enough to save the second implementation of the Internet.


Tim Berners-Lee suggested that the Internet had to be a singular public space for it to function at all. He was probably right, considering the initial conditions; but this too has created a large number of issues, because it produced a centralized Internet. Central hubs are obviously convenient and effective; however, they are also a single point of failure when considering the stability of a system. The issues began with interest groups jockeying for the bulk of control of the infrastructure.

The competitive advantages of this are painfully obvious today to the savvy or geeky, but I will explain. The Internet is most commonly thought of as the copper wire that carries the data over distance, and this is also what is most commonly thought of as the Internet's infrastructure. It isn't, however, the only interpretation. Many large businesses like to consider themselves, their servers, or their connections to those servers as infrastructure, and from this perspective, these businesses feel that they control infrastructure. It's not an entirely ridiculous notion: though not required in principle, they have lately been required in practice for the function of the Internet. This, however, was due to the implementation of middlemen, which is what modern ISPs are. You see, Internet 1.0 allowed surfers to dial up servers and browse their content without an ISP. The .coms created infrastructure through man-in-the-middle business strategies, implemented with the old dial-up modems; the payoff was avoiding the long-distance and international rates of direct dial. The price difference was enormous.

Now the servers that serve the bulk of Internet traffic are owned by companies like Google, Walmart, and Amazon. These companies have a great deal of control (by possession) over a significant amount of Internet traffic. This control of traffic has become a commodity, and it is one of the most concerning issues with the Internet today, as well as the main focus of Net Neutrality. Though most are concerned about the slowing of Internet speeds or expensive tiers with higher data caps, the real concern should be over egalitarian attention. There is much twisted correlation between attention and advertising in the new business models; this is probably the notion used to justify the business models associated with social media. The value of personal data is extremely high, though most users don't get anywhere near a fair return on it through social media. This is implemented with license agreements that almost no one reads or even cares about, and it makes nefarious businesses extremely wealthy. It's one of the most unethical business models I've ever seen; in most cases, it flirts with theft by deception. The fact that license agreements are legally binding under these conditions is just ridiculous. From a systems perspective it's a total failure, not to mention an affront to common decency.

The false dichotomy of Private Sector vs. Public Sector has its dagger in the neck of Internet 2.0 as well. Both sectors are enjoying unprecedented prosperity at the expense of the "consumer". Both are working together to constantly patch a cobbled, mature technology while the taxpayer pays for everything. This is where the infrastructure semantic is most harmful. The US government has been paying subsidies to ISPs since 1996 for the purpose of infrastructure upgrades; if ISPs see themselves as the functional infrastructure, then they have done just that... right? Meanwhile, telecom traffic travels over cable from the late 70s and 100-year-old copper wire, when much, much faster fiber-optic cable could be rolled out. This complete (probably deliberate) misunderstanding is unacceptable. In the US, the most common way around government regulations is conflict of interest: ex-administrators, or even ex-CEOs, of prominent companies are appointed to positions with regulatory agencies. This is the case now with the FCC, whose current chairman is a former lawyer for Verizon. His position on Net Neutrality is thus no surprise.


There is one public space that has no dedicated law enforcement agency: the Internet. This isn't just an issue for parents; it is a global issue. The fact that nation states have such a difficult time correlating political agendas with their populaces and their neighbors is a huge problem for regulating public behavior. This is another issue where silly semantics plays a large role. There is a lack of agreement on whether the Internet is even a space, much less a public one, because it is a virtual space. The private sector wants to be the infrastructure; however, it does not want to police the space that it has created. The government is so tied up in global politics that it is essentially ineffective, which is all the more concerning when one realizes that different nation states have different laws. The only solution that may exist is a lot of hard work buttoning up international law, which would of course require unprecedented global diplomacy. Culture has risen to the challenge to a certain degree; however, there is an unacceptable amount of failure on a daily basis.

The conversation about privacy in a public space is so oxymoronic it's just ridiculous. Though one is at home in one's chair, or in one's car, there should be no expectation of privacy in the public forums. This is something that needs to be hashed out, and it won't be, because of the advantages that the confusion boasts. People are making incoherent trades, keeping the system misunderstood and unstable. A reread of the previous paragraphs will show the relevance of this argument.

Closing Remarks:

The Internet as we know it cannot stand. From a systems perspective, Internet 2.0 is a system that is being driven into extinction. Its problems are mounting to the point that it isn't likely to be able to function at all in the next decade or so; I might not even give it ten years from today, and with the ridiculous caveat of absolutely no change in its condition, I wouldn't give it five. Of course change is going to happen, but probably not enough to save it. It's simply becoming fundamentally obsolete. The degree of acceptance that the powers that be are demonstrating is alarmingly low, because big business runs the developed world, and business doesn't tend to look toward even the near future; it tends to consider maybe 2 to 5 years at most. The number of business professionals considering what they will be doing in the 2020s is probably a minuscule minority.

The good news is what the Internet really is: the wires. The Internet is already diversifying; the .net, .org, .edu, and .gov domains are indicative of natural variation. It's completely reasonable, and probably to be expected, that new connections will emerge in the coming years. Mesh networks and home servers are likely to be at least tried in the coming years as they become financially permissible, and competition for new solutions will become more probable as the current state becomes less so. Natural systems (including social and technological systems) self-organize, and the principles that govern such organization should be expected to apply to the global communication technologies that are coming. I don't like to make hard predictions, but I think the 2020s are going to be an interesting time for global communications. The death of Internet 2.0 is likely to result in the birth of something really interesting.