From Aristotle and Plato to Machiavelli and Marx, some of the most significant developments in human thought have resulted from incremental transformations in how we conduct research. From scanning mythology, to the creation of archives and, now, the analysis of patterns in data, the underlying principle has always been the accumulation of information so that we may arrive at well-supported conclusions. Today, however, the emergence of AI is about to radically alter the field of academic research.
The common understanding of robots is often restricted to programmable machines capable of dangerous and repetitive, albeit manual, tasks, such as working car assembly lines or sweeping mines. However, machine learning, the technique through which computers identify patterns in data, allows them gradually to model and predict aspects of our lives. The beauty of machine learning is that it improves with use, becoming more proficient over time. This is possible through the use of algorithms: mathematical procedures that help identify these patterns. The simplest examples of machine learning in action are the recommendations on our Netflix and YouTube accounts. Simple as it may seem, the ability of a computer system to automatically identify the genre of the videos or shows we watch, collate that data with the videos in its database and then sort its own database by genre to provide us with recommendations is truly remarkable. Elsewhere, machine learning powers face recognition, as in Facebook's tagging feature and Apple's iPhone. As these systems are relatively new, they are not yet perfect, but the defining property of machine learning (often loosely called AI) is that it gets better over time. This has raised concerns, such as traffic CCTV cameras being able to recognise individuals, but those are beyond the scope of this article.
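The recommendation idea above can be sketched in miniature. The sketch below is purely illustrative: the catalogue, titles and genres are all invented, and real services rely on far richer signals than a simple genre tally, but the principle is the same: observe what a user watches, find the dominant pattern, and serve up unwatched items that fit it.

```python
from collections import Counter

# Hypothetical catalogue mapping title -> genre; all names are invented
# for illustration, not taken from any real recommendation system.
CATALOGUE = {
    "Space Drama": "sci-fi",
    "Galaxy Quest Logs": "sci-fi",
    "Android Diaries": "sci-fi",
    "Baking Basics": "cooking",
    "Knife Skills": "cooking",
    "Trail Running": "sport",
}

def recommend(watched, catalogue, n=2):
    """Recommend up to n unwatched titles from the viewer's most-watched genre."""
    genre_counts = Counter(catalogue[title] for title in watched)
    top_genre = genre_counts.most_common(1)[0][0]
    return [title for title, genre in catalogue.items()
            if genre == top_genre and title not in watched][:n]

# Two sci-fi titles watched versus one cooking title, so sci-fi dominates.
print(recommend(["Space Drama", "Baking Basics", "Galaxy Quest Logs"], CATALOGUE))
# → ['Android Diaries']
```

A real system would weight recency, ratings and the behaviour of similar users, but even this toy version captures the "improves with usage" point: every additional watched title sharpens the genre tally.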
What concerns us here is machine learning's ability to crunch data: to recognise patterns in large swathes of data, enabling computers to test hypotheses and, thereafter, publish research papers. Researchers at a new start-up, IRIS, have set out to use this ability to democratise learning. Their model employs AI to help users locate exactly the data relevant to their field of research within the vast body of publicly available data generated by various experiments. In simple terms, the software reads through copious amounts of research papers, identifies those containing data relevant to its users, and presents them.
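To illustrate in miniature how software might match a user's query against a body of papers, here is a sketch using bag-of-words cosine similarity, a standard text-similarity measure. The paper identifiers and abstracts are invented, and IRIS's actual algorithms are certainly far more sophisticated; this only shows the basic mechanic of scoring documents against a query.

```python
import math
from collections import Counter

def bag_of_words(text):
    """Represent a text as a word-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(a[word] * b[word] for word in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical mini-corpus of paper abstracts (invented for illustration).
papers = {
    "paper-1": "neural networks for image recognition",
    "paper-2": "crop rotation and soil chemistry",
    "paper-3": "deep neural networks in speech recognition",
}

def most_relevant(query, papers):
    """Return the paper whose abstract is most similar to the query."""
    q = bag_of_words(query)
    return max(papers, key=lambda p: cosine(q, bag_of_words(papers[p])))

print(most_relevant("neural network speech models", papers))
# → paper-3
```

Production systems would add stemming, TF-IDF weighting, and semantic embeddings, but the core idea of ranking documents by similarity to a query scales up directly.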
A screen-grab from IRIS' website.
At a recent TED talk, IRIS used language-processing algorithms to find, all on its own, papers related to the talk being presented. This is remarkable because, over time, the software would refine this ability and learn about more and more fields. Once that happens, it would not be difficult for the software to refute or support any statement presented to it.
Good Morning, Professor Cerebro!
In a 2006 blog post, Tyler Cowen, Professor of Economics at George Mason University, wrote,
“Don’t expect any classes to be interesting… But a good professor can make almost any topic interesting. So, your reaction to the courses is just a reaction to the instructors you have sampled.”
This statement may sound ominous to any academic wishing to pursue a career in teaching. But its true impact will be felt in a world where academics are competing not just amongst themselves but also with machine learning.
Once the ability to recognise data and test hypotheses is perfected, all that stands in the way of AI-enabled teaching is language processing: the ability to listen to and interact with students. Poor teaching already faces competition from Massive Open Online Courses (MOOCs), and once projects like IBM's Watson are perfected, it will find it even harder to justify its costs. Watson is an AI-enabled program currently in trial runs at call centres, where it interacts with users, refines its responses and learns from feedback. This is a long way from the day Microsoft launched its AI-enabled Twitter bot, which began tweeting racist jibes within a day. The new system is far more sensitive and is beginning to distinguish between appropriate and inappropriate conduct. Combine it with IRIS' algorithm and you have Professor Cyborg right in front of you.
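The feedback loop described above, trying responses, collecting reactions, and suppressing what users flag as inappropriate, can be caricatured in a few lines. Nothing here reflects Watson's actual design; the class, candidate responses and scoring rule are invented purely to show how negative feedback can steer a system away from inappropriate conduct.

```python
class FeedbackBot:
    """Toy agent that prefers responses with the best accumulated feedback."""

    def __init__(self, candidates):
        # Every candidate response starts with a neutral score of zero.
        self.scores = {c: 0 for c in candidates}

    def reply(self):
        # Choose the response users have rated most positively so far.
        return max(self.scores, key=self.scores.get)

    def feedback(self, response, delta):
        # Positive delta for helpful replies, negative for inappropriate ones.
        self.scores[response] += delta

bot = FeedbackBot(["Hello! How can I help?", "What do you want?"])
bot.feedback("What do you want?", -2)        # users flag this as rude
bot.feedback("Hello! How can I help?", +1)   # users rate this as helpful
print(bot.reply())
# → Hello! How can I help?
```

Real conversational systems learn from feedback at the level of model parameters rather than a fixed menu of replies, but the principle of reinforcing acceptable behaviour and penalising the rest is the same.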
The number of academics is dwindling fast as companies poach them with promises of better rewards and dynamic work environments. Because this happens before they enter academic positions, the implications go beyond fewer experts in the university space: many companies work on secret projects that cannot be field-tested openly, which limits their efficacy and suitability for real-world operations. One university executive warned of a "missing generation" of academics who would normally have taught students and been the creative force behind research projects, but whose exodus is crippling the development of critical theoretical concepts. This is a problem that AI could help solve; further, we could eliminate the time and cost wasted on bad teaching, especially in regions where there are not enough academics to staff universities.
However, none of this is likely to happen overnight. Each of these programs will require a few years before its responses are perfected and its algorithms have learnt enough to engage in fluent, safe online interaction. Further, we might want to think twice before welcoming Professor Cyborg to our college. An academically fluent AI algorithm would be able to process and store far more information than any human being. The internet is already home to almost all of mankind's collective intellectual memory, but until now we have only had the capacity to electronically store and index that information. No computer program has ever been able to grasp the complexity, vitality or implications of such information. A missile schematic may be top secret, but if shared over an email service, the server cannot tell the difference between it and a teenager's essay on nuclear weapons. However, an algorithm that understands data as humans do can not only help us with our research; it can also learn how to build that missile, and perhaps build it for its own use. We wouldn't want that, now, would we?
About The Author
Balasubramanyam Pattah is a second-year student of the Master's in Development Studies at the Graduate Institute of International and Development Studies, Geneva. Originally from Kerala, India, Balu holds a Bachelor's degree in Economics from the University of Delhi. His interests and past work concern migration, demography, labour and employment. He served as co-chief editor of the Hans Raj College economics journal in 2014-15 and has published several papers on the intersection of AI with manufacturing and education in peer-reviewed journals. During his undergraduate studies, Balu was an active quizzer and a respected quizmaster.