ESR 5 Blog March/April 2023: Claude Nassar

Machine Learning Alignment

I have written about AI in multiple blog posts, so while thinking about this one I found it helpful to reflect on that engagement. My research started with a preoccupation with social media platforms, the ways they affect and modulate socio-political movements, and the ways they sustain and mediate belonging across disparate territories. However, my interest slowly shifted towards the onto-epistemological backdrop that underlies the evolution of globalised and networked neoliberal governance and its modes of production, in which privately owned social media networks play an important mediating role between migrants and the territories they are disembedded from. With this shift, digital technologies are de-centred to reveal the logics of subordination and domination that informed their conception and that underlie their infrastructures. Accordingly, I am finding that this blog is often the best place to work through ideas that emerge during my online experiences. As I see it, the reason behind my recurrent drive to write about machine learning technologies, rather than other issues I consider more important and more urgent, is twofold.

First, after experiencing the unrest in Lebanon from a distance, I see little value in raising awareness of, or making visible, social and political issues on online platforms. Not because this work is unimportant, but because I feel we have reached a point of information saturation, where awareness of events, especially those similar to what has interested you in the past, is a given. Becoming aware of things that are new to you does not depend on people producing content about them; it depends on you changing your patterns of behaviour enough, and for long enough, for the algorithms of digital platforms to show you content aligned with those new patterns. In this sense digital platforms are far from useless; on the contrary, they are crucial tools that individuals, groups, and collectives can employ as part of larger situated tactical strategies. The labour of visibility, however, when curated by the algorithms of advertising-funded online platforms, acts as a smokescreen of significance: a set of digitally contained signifying actions that pacifies the drive to act outside the realm of pure signification.

Second, in contrast to the doom narratives surrounding AI in my online bubble, I believe that the danger is neither the potential singularity of machine learning systems, nor a fundamental misalignment between ‘artificial intelligence’ and the transcendental ethics of a unified human species. On the contrary, I believe that the danger of machine learning systems is the same danger experienced by people being bombed by national militaries, and by those beaten and shot in the streets by police. The threat of machine learning algorithms lies in their enabling and justification of militarised futures, whether those militaries are in defence of a fascist territorial machine or in defence of absolute notions of individual freedom. More and more, the misalignment that is crystallising reveals itself to be a misalignment between life and its governance, made global by the post-Enlightenment split of the individual into a corrupt labouring individuality and a transcendental ethical unity.

In light of this reflection, I might continue to pursue this line of thought in peripheral deliverables, as an application of the onto-epistemological critique I am developing in my dissertation to discursive issues that interest me.
