"It is this opening of the situation beyond codified effects that I consider to be so very necessary in an age of machine learning outputs. One could be doubtful of the claims of a human guarantor of ethics, and of the bias and discrimination that could be excised from the algorithm, for example, and begin instead from the unspecified risks emerging between the algorithm and the data corpus from which it learns."
–Louise Amoore, “Doubt and the Algorithm: On the Partial Accounts of Machine Learning”
The scattershot follow-through of an art adequate to the speculative time structure of the present and its concomitant technoscience was written in the interstices of its vanguard’s theories and projects. While critical and philosophical enclaves have adeptly surveyed the synthesis of neoliberalism, technology, global economy, and infrastructure—broadly speaking—recent contemporary art and exhibitions remain extremely incongruent with the co-constitutive reality of the algorithmic epoch. Both institutionally and curatorially, this is not only a matter of insufficient exhibition methodology; it also concerns the fractured epistemology wherein anthropocentrism remains at the forefront. As an anecdotal aside, let us look at a product presentation by SpaceX and Tesla CEO, extraplanetary colonialist, and Artificial Intelligence (AI) alarmist Elon Musk. At the unveiling of the Cybertruck prototype, Musk mutters two phrases evincing his frustration with the collusion between technology and matter. He asks a colleague to throw an object at the armored, unbreakable glass window of the intelligent vehicle. At first throw, the window breaks. Musk, surprised: “Oh my fucking God.” A second throw results in another break in the window, and later Musk weighs in, “We’ll fix it in post.” The truth of the technology was trusted and exhibited with audacity. The origin of that truth was at first unquestioned, and when tested, the human-centered logic and expectation shattered. The episode exposes a behavioral misstep of anthropocentric futility, yet “fix it in post” points elsewhere. Liliana Farber’s Terram in Aspectu concerns geopolitics, veracity, and the structure of thought in the age of the algorithm. From there, what bearing does this line of inquiry have on exhibition and discursive methodologies wherein public engagement is of specific concern?
Liliana Farber’s work Terram in Aspectu provides a thought experiment in how truthfulness is determined in the extreme state of the post-truth era. Developed and produced through machine learning, Terram in Aspectu is a series of Google Earth screenshot images of various islands purported to exist in extreme locales. For this experiment, contemporary art should first be understood as a set of complex adaptive systems that evolve at the extremes of order and chaos. Complex systems theory is extensive and diverse, though here I will refer to James Crutchfield’s swift explanation: “The world economy, financial markets, air transportation, pandemic disease spread, climate change, and insect-driven deforestation are examples of truly complex systems: They consist of multiple components, each component active in different domains and structured in its own right, interconnected in ways that lead to emergent collective behaviors and spontaneous architectural re-organization.” Second, this thought experiment requires recognizing what advances in AI—and more specifically, machine learning—add to this complex assemblage. Farber’s work is also a case study of the impact of machine learning on geopolitics, veracity, and reason in the algorithmic age. I pose, though do not resolve, the question of how to practically incorporate uncertainty into curatorial practice, specifically into exhibition contextualization and stewardship. This consideration of uncertainty, potentially understood as an extreme state (far from equilibrium), may be a way to develop a different type of agency in an age of future extremes.
Farber’s algorithmically produced work exemplifies a shift in critical thinking, pointing out that the means by which knowledge, intention, and expectation in exhibitions are communicated and ascertained follow a rubric that prioritizes a pointedly human subjectivity, a rubric that is no longer sufficient. The linearity of this assumptive and prescribed interpretive thinking seems increasingly incongruent with the complex state of globally networked social and political relations, a state augmented by machine learning and big data. My insistence on thinking with uncertainty does not outweigh or diminish individual identities, individual levels of interest or engagement, or influence by proxy in the experience of any given exhibition. Rather, it is to examine the paradox of progressive thought and the limitations imposed on thinking mediated by the connectivity of the algorithmic age.
Recently, I have observed that the inherent means of determining truth in exhibitions—in which traditional deductive and inductive logic is the basis of interpretation—have not met the expectations of certain audiences, participants, viewers, and contributors. When faced with uncertainty, there is a retreat to basic logic as a means of protecting a predetermined or assumed truth. Further, by not recognizing the fallibility of human thought, agents in the field of contemporary art remain unchallenged. Taking a cue from machine learning—given its pervasiveness, inextricability from social behavior, and pace of evolution—we can derive novel strategies for how truth is conceptualized.
What follows is a brief technical breakdown of machine learning, an assessment of the geopolitical issues that Terram in Aspectu underscores, an outline of a speculative form of reason and thought in the age of machine learning, and, finally, a curatorial proposition that advances uncertainty in the terms developed throughout.
Machine Learning and Terram in Aspectu
Terram in Aspectu raises questions about an extreme form of colonialism inherent to big data and cartographic facticity. The work elucidates the relatively simple means by which machine learning algorithms can be manipulated to produce inaccurate outcomes—for example, land masses that do not physically exist but are rendered as real in the visual-informational context of Google Earth. By presenting the errors and inbuilt biases of machine learning, it registers seismic shifts in how geographic knowledge is produced, distributed, and retained.
It is no surprise that great alarm surrounds AI—from science fiction origins to Silicon Valley moguls and online conspiracy communities. Something irreversible has occurred that has catapulted these imaginaries into reality: the actual integration and successful application of machine learning and of a subset thereof—the one with which Farber's work is created—deep learning. Machine learning has gained the most traction and notoriety in facial recognition and human image synthesis, under the heading of computer vision. This is paralleled in avionics, self-driving passenger vehicles, drones, and de facto police and military surveillance. These technologies arrive, without question, on the heels of what many call the contemporary re-establishment of the post-truth era. There are plentiful accounts of fear, speculation, and outright malice regarding AI to be found elsewhere. Suffice it to say that as much as errors in the development of automated thinking have caused crimes against humanity, there are as many humans who see this technology as a means to harm others and count that among its successes. So, rather than recapitulate long-standing—albeit renewed with urgency due to machine learning—bastions of proponents and alarmists, my point is to heed these advances as signposts for political and methodological forks in curatorial practice and critical thinking.
Conditional Generative Adversarial Networks
The algorithmic architecture Farber has used is a deep learning model called a Conditional Generative Adversarial Network (cGAN). One main goal for cGANs in image-to-image translation is to “make the output indistinguishable from reality.” Deep learning is an algorithmic network in which patterns are extracted from data sets in order to acquire knowledge. The network is built of neural nets that are purportedly based on, and mimic, the neural structure and activity of the human brain. From the knowledge acquired, the computational machine then makes future decisions automatically, independent of the programmer. Deep learning networks contain additional, hidden layers of neurons and are hierarchical in the sense that the machine learns from a series of successive outputs. cGANs are one form of deep learning architecture. “It mimics the back-and-forth between a picture forger and an art detective who repeatedly try to outwit one another. Both networks are trained on the same data set. The first one, known as the generator, is charged with producing artificial outputs, such as photos or handwriting, that are as realistic as possible. The second, known as the discriminator, compares these with genuine images from the original data set and tries to determine which are real and which are fake. On the basis of those results, the generator adjusts its parameters for creating new images. And so it goes, until the discriminator can no longer tell what’s genuine and what’s bogus.” Key to this design are the process’s inherent randomness and the fact that it is unsupervised: the real-versus-fake signal is generated within the adversarial interplay itself, beyond a human annotator, while noise and randomness are integrated to assist the network’s generative function.
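The forger-and-detective loop quoted above can be made concrete with a small numerical sketch. The following Python is a hypothetical, one-dimensional toy, not Farber's pipeline or a real convolutional cGAN: grid search stands in for gradient descent, and both "networks" are single formulas, but the alternation of generator and discriminator steps is the same. All function and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "conditional GAN": real pairs (x, c) cluster around x = c.
# The generator maps (noise z, condition c) to a sample; the discriminator
# scores how likely a pair (x, c) is to be real. All names are illustrative.

def generate(z, c, shift):
    """Generator: produce a sample from noise z and condition c."""
    return shift * c + 0.1 * z

def discriminate(x, c, w, b):
    """Discriminator: probability that the pair (x, c) is real."""
    return 1.0 / (1.0 + np.exp(-(w * (x - c) ** 2 + b)))

conditions = rng.uniform(1.0, 2.0, size=200)
real = conditions + 0.1 * rng.standard_normal(200)  # accepted "ground truth" pairs

shift, w, b = 0.0, -1.0, 1.0  # the generator starts far from the truth
for _ in range(10):
    # Discriminator step: grid-search (w, b) to separate real from fake.
    fake = generate(rng.standard_normal(200), conditions, shift)
    scored = []
    for w_try in (-20.0, -10.0, -5.0, -1.0):
        for b_try in (0.5, 1.0, 2.0):
            ll = (np.log(discriminate(real, conditions, w_try, b_try) + 1e-9).mean()
                  + np.log(1 - discriminate(fake, conditions, w_try, b_try) + 1e-9).mean())
            scored.append((ll, w_try, b_try))
    _, w, b = max(scored)
    # Generator step: grid-search the shift that best fools the discriminator.
    z = rng.standard_normal(200)
    shifts = np.linspace(0.0, 2.0, 41)
    fooling = [discriminate(generate(z, conditions, s), conditions, w, b).mean()
               for s in shifts]
    shift = shifts[int(np.argmax(fooling))]

# After a few rounds the generator's samples sit where the real data does
# (shift close to 1): the point at which the detective can no longer tell.
print(round(float(shift), 2))
```

In a real cGAN the same alternation runs over image tensors with deep networks on both sides; the toy "converges" when the generator's shift approaches 1, that is, when its fakes occupy the same region as the real pairs.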
Randomness and Complexity
So as not to conflate the terms randomness and uncertainty, randomness should be defined technically and conceptually. Simply put, randomness in machine learning is a feature of the algorithmic architecture that makes the network learn more adeptly by introducing variations into a dataset. If—as in basic computation—inputs map to an exact, completely predictable output (1+1=2), then no new information is generated. Conceptually, we can examine how algorithmic randomness reflects other complex systems. Following James Crutchfield, novel or generative activity and interaction occur in the interplay between order and randomness. Put differently, there is a spectrum between perfect predictability and perfect unpredictability: at one end nothing is new; at the other, everything is incoherent and thus equally non-generative. Machine learning algorithms generate new, increasingly complex outputs because their architectures are built so that structure arises in a state between order and randomness. In Crutchfield’s words, "We now know that complexity arises in a middle ground—often at the order-disorder border. Natural [and computational] systems that evolve with and learn from interaction with their immediate environment exhibit both structural order and dynamical chaos. Order is the foundation of communication between elements at any level of organization, whether that refers to a population of neurons, bees or humans [...] Chaos, as we now understand it, is the dynamical mechanism by which nature develops constrained and useful randomness. From it follow diversity and the ability to anticipate the uncertain future." I refer to uncertainty to suggest that inculcating uncertainty in critical thought is to embrace the randomness out of which complexity arises.
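The technical point about variation can be shown in a minimal sketch, assuming a toy generator of my own invention rather than anything from Farber's process: a fully deterministic mapping produces one output forever, while a latent noise draw makes every call generative.

```python
import numpy as np

# Minimal illustration of noise as a generative ingredient. The "generator"
# is hypothetical: a fixed linear map with an optional latent noise term z.

def generate(c, z=0.0):
    return 0.9 * c + 0.25 * z  # toy generator: condition c, latent noise z

rng = np.random.default_rng(42)
condition = 1.0

# Without noise: perfectly predictable, so nothing new is ever produced.
ordered = {round(generate(condition), 6) for _ in range(100)}

# With noise: each call is a fresh draw from a distribution of outputs.
varied = {round(generate(condition, rng.standard_normal()), 6) for _ in range(100)}

print(len(ordered), len(varied))  # one distinct output versus many
```

The single-element set is the 1+1=2 case above: no new information. The noisy set sits between order (the fixed linear map) and randomness (the draw of z), which is where the generative capacity lies.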
“On the gigantic liner of Spatial Big Data analytics sailing in either right or wrong directions, heading ourselves towards benefits and value lies in the reliability of the chart.”
–Wenzhong Shi et al., “Challenges and Prospects of Uncertainties in Spatial Big Data Analytics”
Spatial Big Data and Ground Truths
The generated images in Terram in Aspectu may be indistinguishable from reality to a certain degree of human-perceptual success. Yet not only are the images themselves machine-generated, they are trained on a dataset that may not be based in fact, but is instead aggregated from spatial big datasets collected through a complex system of human and non-human sources. These training datasets are referred to as “ground truths.” The term is elusive both in its implication of “truth” as an ethical (and potentially juridical) consensus, and of “ground,” as though there were a physical ground on which this truth could be proven. Louise Amoore details the term: "In fact, though, the mode of truth-telling of contemporary algorithms pertains to the ‘ground truth’: a labelled set of training data from which the algorithm generates its model of the world." In Terram in Aspectu, Farber uses Google Earth screenshots as the ground truths and outlines of phantom islands as inputs, so that the algorithm generates images of non-existent islands that appear truthful in the visual context of a Google Earth screenshot. Phantom islands are, in Farber’s words, “bodies of land that appeared, sometimes for centuries, in maps, but were proven not to exist.” What is to be done, as with Google Earth, when the ground truth is not trustworthy in the first place? The scale of these datasets is so large, and they are so constantly undergoing transfer and re-distribution, that they can never be contained. Further, neither the data nor the algorithmic architecture is ever true in a binary sense of true or false. In algorithmic architecture, truth is based on the immediate acceptance of a dataset as true regardless of its accuracy in real, spatial terms. In the observation, collection, and application of spatial big data (geographic, geospatial, and geolocational information), especially in machine learning, uncertainty is of primary concern.
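Schematically, a "ground truth" of this kind is nothing more than a list of accepted input-target pairs. The sketch below (with hypothetical file names and a hypothetical helper, not Farber's actual data or code) shows how such a pix2pix-style training set might be assembled; note that nothing in its construction checks the targets against the world.

```python
# Schematic sketch of a "ground truth" for image-to-image translation:
# a list of (input, target) pairs whose targets are simply accepted as true.
# File names are hypothetical placeholders, not Farber's actual dataset.

phantom_outlines = ["outline_sandy_island.png", "outline_hy_brasil.png"]
earth_screenshots = ["tile_a.png", "tile_b.png"]  # scraped Google Earth views

def build_ground_truth(inputs, targets):
    """Pair each input with a target. Nothing here verifies the target
    against the world: the dataset's 'truth' is acceptance, not accuracy."""
    if len(inputs) != len(targets):
        raise ValueError("each input needs exactly one accepted target")
    return [{"input": i, "ground_truth": t} for i, t in zip(inputs, targets)]

dataset = build_ground_truth(phantom_outlines, earth_screenshots)
print(len(dataset))  # one training pair per outline
```

Whatever tile happens to sit in the `ground_truth` slot becomes the model's world; the epistemic weight of the term rests entirely on that act of labeling.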
Uncertainty and unpredictability are inherent to reality as well as to algorithmic processes. This underlines my concentration on co-constitutive human and non-human cognition, or what Katherine Hayles refers to as a “cognitive assemblage.” In this assemblage involving spatial big data, an interplay occurs between source data and its various applications. While multisource geographic datasets yield richer knowledge, they also increase spurious and dubious results.
In spatial big data collection, veracity is impacted by the uncertain nature of how, and through what means, the data corpus is compiled. Generally, the veracity of data ranges from sources considered highly accurate, such as healthcare monitoring, to sensor technology, the least accurate and most fraught with noise and discrepancy. Within this spectrum there are also other sources: enterprise and social media. Some researchers in spatial big data advocate not only accounting for the uncertainty of extracted data, but applying uncertainty-based methods throughout the analytic process. In doing so, their aims are “to understand, control, and alleviate the ubiquitous uncertainties in the real world and each stage of knowledge extraction from [spatial big data], thereby assuring and improving the reliability and the value of resultant knowledge.” Driven by the need for more reliable data, nations, corporations, climatologists, and many other disciplines have looked to the potential of machine learning.
Geopolitics and Coloniality
Terram in Aspectu also underlines interrelated geopolitical concerns: first, that of the neoliberal, corporate, and state entities that maintain a stronghold over the processing power required for data input, extraction, and storage; second, that of the relations between governmentality and the resource ownership of territory (land, offshore, island, seafloor, extra-planetary). James Bridle points out, “These technologies are tainted at the source by their very emergence within cultures of capitalism and the inherently racist logic of the nation state itself. They are too easily commandeered and redeployed by those with access to more network nodes, to greater carrying capacity, bandwidth, processing power and data storage—in short, by those with access to capital.” Along these lines, if cartographic data remains in the control of financially dominant entities, then borders, planetary assets, and populations are directly and indirectly impacted without overt or immediate material proof. This is a hidden data colonialism, in the cloud and underground.
The machine learning process that produced Terram in Aspectu is unsupervised in that it automatically recognizes patterns in data and creates labels from which to generate new images. Google Earth and Google Maps—in part due to their usefulness—are taken as factual renderings of our planet. Truth here is proved by functionality: I input an address into Google Maps and, when I arrive at the intended location, trust is built. This trust is both affective and informational. Though Terram in Aspectu uses and manipulates machine learning on Google Earth imagery to generate Farber’s results, the question posed is: at what level are the image, and the information the user subsequently retains, trustworthy? Therein lies the deeper geopolitical dimension. It is not just that any person, corporation, or government can use new spatial media to construct and then act on a geopolitical narrative (a cartographic-colonialist activity), but that said narratives are created by unsupervised machine learning, which is biased, prone to overfitting, and, importantly, altogether unaware of the human subject. To put this differently, efficiency, not human expectation, is the priority. It would be a devastating mistake to take the seeming neutrality of Google Earth as being without its own geopolitical subjectivity. That is, machine learning networks are learning to think on their own and generating new information from geospatial ground truths and randomness; but, as a double bind, state and corporate actors are collecting, managing, and conditioning the data, effectively establishing an extreme hyperstate in which we are already implicated. (See Google's Earth Engine Data Catalog.) Bridle poses the question of what art can or should do, and how, at the hands of this hyperobject:
What the new technologies of the state continually reveal, unwittingly and often in opposition to their stated goals, is the incredible diversity and uncomputability of their subject both at the level of the individual and at the level of physical geography. Through the network, we are all already transnational, which is revealed by social media and international finance as clearly as by our lived experience. At the same time, the accelerating and leveling actions of anthropogenic climate change forcibly remind us that borders will not protect us from what happens beyond the horizon...The challenge is to view these things anew in light of what we have learned and can still learn from our technologies, because no ecological plea can be sufficient if it simply entails going backwards, or rejecting the immanent possibilities of new systemic forms. 
Faced with real, physical consequences, a different type of geopolitics must be constructed, one that concerns both territorial space and the mediatic infrastructure of the data corpus upon and from which political action is taken. The ubiquity and black-box aspects of machine learning architecture make it increasingly harder to locate how, by whom, and why state decisions are made. Bridle’s notion is both a call for extreme, cautious awareness and an epistemological shift in how we think as the human and non-human become further enmeshed. This is not only about eventualities; it also concerns what can be gleaned from the architecture of these technologies to make this shift in the acquisition of knowledge.
Cognition and Uncertainty
In an attempt to follow this proposition of what can be gleaned, a return to the structure of human thought is built into the inquiry itself. Machine learning has ushered in new challenges to the nature of human thought. These challenges are posited not as a return to anthropocentrism, but as the notion that if cognition can be automated outside of the human brain, then the anthropocentric philosophy of mind unravels. Thinking how machines think immediately implies that cognition is not essentially human and is distributed across non-carbon-based entities and intelligent networks. This entails a complex social dynamic with and between humans and machines. Take social media, for example: if it is widely understood as software wherein the human is deemed the primary cognitive agent—constructing individual identity and subjectivity—then a critical point in this dynamic has been overlooked. The algorithm is also, albeit much differently, a social agent, by dint of the generative capabilities of machine learning addressed above. Machine learning is at best a highly sophisticated pattern recognition network capable of “weak abductive” thinking, but it is nonetheless irrevocably enmeshed in the present and future evolution of social and political behavior. The latter statement may seem obvious at the level of interface, enterprise, and human social connectivity. My emphasis, rather, is on the sociality of algorithmic thinking itself.
In uncertain environments and circumstances, inference and the proposing of hypotheses are means of critical thinking. These approaches are, of course, ingrained in contextual and curatorial practice, yet in light of the algorithmic age their function must be deployed with the openness of the uncertain state itself. The nature of the algorithm and of machine learning requires a knowledge of the potentially unknowable—the uncertain infinity of potential. For Luciana Parisi, “The question of automated cognition today concerns not only the capture of the social (and collective) qualities of thinking, but points to a general re-structuring of reasoning as a new sociality of thinking. Automated decision-making already involves within itself a mode of conceptual inferences, where rules and laws are invented and experimentally structured from the social dimensions of computational learning.” The issue at hand is not to take extreme measures in an effort to extricate ourselves from the algorithmic world in which we are embedded; as Bridle suggests, this approach would not garner any more agency. Instead, I am advocating a new state of thinking that re-opens a space for thought that is uncertain and incomplete. This is what leads to an advocacy for accepting uncertainty as inextricable from reality, and for an exhibitions methodology from which learning can develop in novel ways.
Uncertainty in Exhibition Practice
This essay attempts to present information about an artist and a work that allow for speculative thought on geopolitics and curation. Often, as curators, we find ourselves in situations where the prescribed means of public discourse reproduce unchallenged ideologies in the pursuit of maintaining dominant narratives. As a colleague recently put it, “We need to stop curating for curators.” I found this direct statement relevant as is, but worth developing: what would that activity look like given the topics at hand? In this sense, pre-determined, conventional curatorial methodologies hinder criticality in the complex ecology of human and machinic thought. This is foremost a challenge of thought, and of thinking uncertainty.
The exhibition space is at times paradoxically formulaic. Hypothetical thought processes are deprioritized in favor of direct, determined explanations, often preceded by assumption. The ramification is a diminished effort to discover diverse and individual subjectivities. If uncertainty is part of the co-constituted ecology of the present (as noted in the editorial invitation to this issue and in the critical theories referenced), then it would be prudent to allow for more uncertainty in exhibition practice. This is not a call for ambiguity and less accountability—quite the opposite—it is a call for the accountability to think more openly and hypothetically. Conditioned by a sense, or requirement, of full and complete knowledge acquisition at different stages of practice, there are instances in which extreme assumptions are made about the state, truth, stake, and purpose of exhibitions, often prior to their experience. These instances, exacerbated in the age of the algorithm, fall close to the ethical and political decision-making that has contributed to extremely catastrophic and malicious acts on societies, populations, and our planet. Louise Amoore, noting Donna Haraway, poses this question of uncertainty qua doubt:
I am interested in how posthuman ethics might begin from a doubtful account, or from the impossibility of giving a coherent account of things ... What kind of relation to self and others is entailed by the algorithm’s particular claims to the truth? Could ethical relations between technoscience and society begin from the plural and posthuman doubts that grow and flourish when the boundaries of human and algorithm, always arbitrary, ‘highly permeable and eminently revisable’, are relinquished (Haraway, 1997: 11)? 
From this position, we can begin to envision uncertainty in how we think through and present exhibitions. Uncertainty requires responsivity, and thus audiences, participants, viewers, and contributors who are more deeply involved in learning. This may give rise to a multiplicity of individual subjectivities that are becoming ever more hidden in machine learning.
What Terram in Aspectu exposes is that machine learning has the ability to generate new knowledge at the interplay between randomness and order. Reasoning in the ecology of technology and society is, as Parisi puts it, “an incomplete affair [...] open to the revision of its initial conditions, and thus the transformation of truths and finality.” Therefore, macro and micro exhibition structures should allow for randomness and an opening up of their intrinsic complexity. Conversely, overdetermination, prescribed interpretation, and assumption limit the agency of thinking uncertainty. What does this look like? How is this enacted? Consider instilling a pedagogy—one that is not exhibition-specific—based on the exploration of complex architectures (algorithmic, institutional, and physical) in general. This is a scaffold that fosters conjecture and questioning in the present. This is a stewarding of uncertainty.
Terram in Aspectu was exhibited in Liliana Farber’s solo exhibition, Proximal, Distal, Adrift at 1708 Gallery in Richmond, Virginia, curated by Park C. Myers.
Park C. Myers is the Royall Family Curator at 1708 in Richmond, VA. He has organized exhibitions and facilitated projects at Knockdown Center, NY; Actual Size, LA; Komplot, Brussels; the Judd Foundation; the Hessel Museum of Art; and the Copenhagen Art Festival. Publications include OnCurating, The Cure (Komplot), and Dear Helen (CCS Bard). He is a co-founder of the online journal aCCeSsions. With Xavi Acarín, Myers established XP, a dialogical exhibitions platform. His research aims to develop new curatorial and institutional strategies informed by the study of complex adaptive systems and attendant shifts in infrastructure and architecture. Myers holds a BFA from Maryland Institute College of Art, an MA from CCS Bard, and was an adjunct professor at Virginia Commonwealth University.
 Armen Avanessian and Suhail Malik, eds., The Time Complex: Post-Contemporary (Miami, FL: [NAME] Publications, 2016). See also the Welcome Letter to the 9th Berlin Biennale: DIS, “The Present in Drag,” Berlin Biennale 9 (9th Berlin Biennale for Contemporary Art, 2016), http://bb9.berlinbiennale.de/the-present-in-drag-2/.
 "Tesla Cybertruck Unveiling Event: Watch the $39,900 Bulletproof Truck's Full Reveal Presentation," YouTube video, 22:16, posted by "TopSpeed" November 21, 2019, https://www.youtube.com/watch?v=9P_1_oLGREM.
 It is important to understand that my stance is not one that posits a hierarchy between human and non-human cognition, as if to challenge anthropocentrism meant the distinct removal of human thought. Rather, I am referring to, as the editors of this issue reference, the co-existence of human and non-human objects.
 James P. Crutchfield, “Cultures of Change: Social Atoms and Electronic Lives,” in Cultures of Change: Social Atoms and Electronic Lives, eds. Cinta Massip, Josep Perello, and Gennaro Ascione (Barcelona: Actar, 2009), 4, http://csc.ucdavis.edu/~cmg/papers/FOCS.pdf.
 N. Katherine Hayles, “Cognitive Assemblages: Technical Agency and Human Interactions,” in Unthought: The Power of the Cognitive Nonconscious (Chicago, IL: The University of Chicago Press, 2017), 115-141.
 By “linearity of this assumptive and prescribed interpretive thinking,” I mean a seemingly reliable and complicit ratio of information and reception regarding a singular experience of an exhibition or work. A critique of the interpretative moment has previously been made by Suhail Malik in “Reason to Destroy Contemporary Art,” in Realism Materialism Art, eds. Christophe Cox, Jenny Jaskey, Suhail Malik (Berlin: Sternberg Press, 2015), 185-191.
 This is succinctly summed up in the Glass Bead Editorial for their issue Logic Gate, The Politics of the Artifactual Mind: “This data-driven engineering, of affects as much as information, targets users through a constrained picture of their cognitive capacities that (apparently for their own safety) corrals them into a local enclave in which they find themselves individually trapped, paradoxically by their own connectivity…This personalized incapacitation is exactly why we must reclaim impersonal reason: to extricate ourselves from such locally circumscribed horizons, and to gain the power to collectively act on global problems. This is by no means to diminish the importance of local struggles and identity politics, it is precisely because of the aggravated nature of these problems that we need to identify with the collective power of reason.” Fabien Giraud et al., “Logic Gate, The Politics of the Artifactual Mind,” Glass Bead (CNAP – Centre National des Arts Plastiques, 2017), https://www.glass-bead.org/article/logic-gate-politics-artifactual-mind/?lang=enview.
 The notion of a post-truth era may erroneously be defined as a register of time. The post-truth condition could be understood not just as a contemporary phase concomitant with social and political media, but as an immanent cognitive state of determining truth. That is, truth can reasonably only be registered retrospectively. The following quotes elucidate this:
1. Karen Hao quoting Greg Brockman, “The first thing to figure out, he says, is what AGI will even look like. Only then will it be time to ‘make sure that we are understanding the ramifications.’” Karen Hao, “The Messy, Secretive Reality behind OpenAI’s Bid to Save the World,” MIT Technology Review, February 17, 2020, https://www.technologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/.
2. “No a priori decision, and thus no finite sets of rule can be used to determine the state of things before things can run their course.” Luciana Parisi, “Instrumental Reason, Algorithmic Capitalism, and the Incomputable,” in Alleys of Your Mind: Augmented Intelligence and Its Traumas, ed. Matteo Pasquinelli (Lüneburg: Meson Press, Hybrid Publishing Lab, Centre for Digital Cultures, Leuphana University of Lüneburg, 2015), 132.
 Éric Guérin et al., “Interactive Example-Based Terrain Authoring with Conditional Generative Adversarial Networks,” ACM Transactions on Graphics 36, no. 6 (2017): 1-13, https://doi.org/10.1145/3130800.3130804.
 Martin Giles, “The GANfather: The Man Who’s given Machines the Gift of Imagination,” MIT Technology Review, February 21, 2018, https://www.technologyreview.com/s/610253/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination/.
 Luciana Parisi, “Automated Thinking and the Limits of Reason,” Cultural Studies ↔ Critical Methodologies 16, no. 5 (2016): 471-481, https://doi.org/10.1177/1532708616655765.
 Mehdi Mirza and Simon Osindero, “Conditional Generative Adversarial Nets,” November 6, 2014. https://arxiv.org/pdf/1411.1784.pdf.
 James P. Crutchfield, “Between Order and Chaos,” Nature Physics 8, no. 1 (2011): 17-24, https://doi.org/10.1038/nphys2190.
 Louise Amoore, “Doubt and the Algorithm: On the Partial Accounts of Machine Learning,” Theory, Culture & Society 36, no. 6 (2019): 150-151, https://doi.org/10.1177/0263276419851846.
 Liliana Farber, “Terram in Aspectu,” Liliana Farber, n.d., https://www.lilianafarber.com/terram-in-aspectu.
 Wenzhong Shi et al., “Challenges and Prospects of Uncertainties in Spatial Big Data Analytics,” Annals of the American Association of Geographers 108, no. 6 (2018): 1513-1520, https://doi.org/10.1080/24694452.2017.1421898.
 James Bridle, “State to Stateless Machines: A Trajectory,” ed. Janez Janša, PostScript UM 33 (2019): 9, https://aksioma.org/James-Bridle-State-to-Stateless.
 Sead Turčalo and Ado Kulović, “Contemporary Geopolitics and Digital Representations of Space,” Croatian International Relations Review 24, no. 81 (January 2018): 7-22, https://doi.org/10.2478/cirr-2018-0001.
 Matteo Pasquinelli, “Machines That Morph Logic: Neural Networks and the Distorted Automation of Intelligence as Statistical Inference,” Logic Gate: the Politics of the Artifactual Mind 1 (2017), https://www.glass-bead.org/article/machines-that-morph-logic/?lang=enview.
 Luciana Parisi, “Critical Computation: Digital Automata and General Artificial Thinking,” Theory, Culture & Society 36, no. 2 (2019): 114, https://doi.org/10.1177/0263276418818889.
 The conclusion of this text is reserved for further publication on the institutional and pedagogical implementation of scaffolding; the final note here is credited to specific aspects of learning theorized by Reza Negarestani.