Wednesday, January 22, 2025

The rise of multidisciplinary research stimulated by AI

AI research tools such as OpenAI's o1 now achieve test scores that meet or exceed those of Ph.D. holders in the sciences and a number of other fields. These generative AI tools are built on large language models that span research and knowledge across many disciplines. Increasingly, they are used for research project ideation and literature searches, and they are generating insights that researchers might not have encountered in years gone by.

The field of academe has long emphasized the single-discipline research study. We offer degrees in single disciplines; faculty members are granted appointments most often in only one department, school or college; and for the most part, our peer-reviewed academic journals are in only one discipline, although sometimes they welcome papers from closely associated or allied fields. Dissertations are most commonly based in a single discipline. Although research grants are more often multidisciplinary and prioritize practical solution-finding, a large number remain focused on one field of study.

The problem is that as we advance our knowledge and application expertise in one field, we can become unaware of important developments in other fields that directly or indirectly affect work in our chosen discipline. Innovation is not always a single-purpose, straight-line advance. More often today, innovation comes from integrating knowledge across disparate fields such as sociology, engineering, ecology and environmental science, and the expanding understanding of quantum physics and quantum computing. Until recently, we have not had an efficient way to identify and integrate knowledge and perspectives from fields that, at first glance, seem unrelated.

AI futurist and innovator Thomas Conway of Algonquin College of Applied Arts and Technology addresses this topic in “Harnessing the Power of Many: A Multi-LLM Approach to Multidisciplinary Integration”:

“Amidst the urgency of increasingly complex global challenges, the need for integrative approaches that transcend traditional disciplinary boundaries has never been more critical. Climate change, global health crises, sustainable development, and other pressing issues demand solutions from diverse knowledge and expertise. However, effectively combining insights from multiple disciplines has long been a significant hurdle in academia and research.

“The Multi-LLM Iterative Prompting Methodology (MIPM) emerges as a transformative solution to this challenge. MIPM offers a structured yet flexible framework for promoting and enhancing multidisciplinary research, peer review, and education. At its core, MIPM addresses the fundamental issue of effectively combining diverse disciplinary perspectives to lead to genuine synthesis and innovation. Its transformative potential is a beacon of hope in the face of complex global challenges.”
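Conway's paper defines MIPM in detail; the sketch below is not that protocol. Purely to make the general multi-LLM pattern concrete, it fans one research question out to several "disciplinary expert" prompts and then asks one model to synthesize the answers. The model name, the discipline list, the prompts and the `ask` helper are illustrative assumptions, and a real workflow would add the iterative refinement and review steps Conway describes.

```python
# Illustrative sketch only: fan a question out to several "disciplinary expert"
# LLM calls, then ask one model to synthesize the answers. Model name, prompts
# and disciplines are placeholders, not Conway's MIPM protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "How could coastal cities adapt housing policy to sea-level rise?"
DISCIPLINES = ["civil engineering", "public health", "economics", "sociology"]

def ask(system_prompt: str, user_prompt: str) -> str:
    """One chat-completion call with a given persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

# Round 1: collect one answer per disciplinary perspective.
perspectives = {
    d: ask(f"You are a researcher in {d}. Answer from that field's viewpoint.", QUESTION)
    for d in DISCIPLINES
}

# Round 2: a synthesis pass that looks for agreements, tensions and gaps.
combined = "\n\n".join(f"[{d}]\n{text}" for d, text in perspectives.items())
synthesis = ask(
    "You integrate multidisciplinary input. Note agreements, conflicts and open questions.",
    f"Question: {QUESTION}\n\nPerspectives:\n{combined}",
)
print(synthesis)
```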

Even as we integrate AI research tools and techniques, we ourselves and our society at large are changing. Many of the frontier language models powering research tools are multidisciplinary by nature, although some are designed with strengths in specific fields. Their responses to our prompts are multidisciplinary, and the responses to our iterative follow-up prompts can take us into fields and areas of expertise of which we were not previously aware. The replies are not coming solely from a single-discipline expert, book or other resource. They are coming from a massive language model that spans disciplines, languages, cultures and millennia.

As we integrate these tools, we too will naturally become aware of new and emerging perspectives, research and developments in fields outside our day-to-day knowledge, training and expertise. This will expand our perspectives beyond the fields of our formal study. As the quality of AI-based research tools improves, their impact on research cannot be overstated: they will lead us in new directions and toward broader perspectives, uncovering the potential for new knowledge informed by multiple disciplines. One recent example is STORM, a brainstorming tool developed by the team at Stanford's Open Virtual Assistant Lab (OVAL):

“The core technologies of the STORM & Co-STORM system include support from Bing Search and GPT-4o mini. The STORM component iteratively generates outlines, paragraphs, and articles through multi-angle Q&A between ‘LLM experts’ and ‘LLM hosts.’ Meanwhile, Co-STORM generates interactive dynamic mind maps through dialogues among multiple agents, ensuring that no information needs are overlooked by the user. Users only need to input an English topic keyword, and the system can generate a high-quality long text that integrates multi-source information, similar to a Wikipedia article. When experiencing the STORM system, users can freely choose between STORM and Co-STORM modes. Given a topic, STORM can produce a structured high-quality long text within 3 minutes. Additionally, users can click ‘See BrainSTORMing Process’ to view the brainstorming process of different LLM roles. In the ‘Discover’ section, users can refer to articles and chat examples generated by other scholars, and personal articles and chat records can also be found in the sidebar ‘My Library.’”

More about STORM is available at https://storm.genie.stanford.edu/.
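To give a sense of the pipeline the quote describes, the sketch below mimics the perspective-guided Q&A loop in Python: each simulated "host" role raises the questions its field would ask, a generalist "expert" answers them, and a final pass folds the notes into a sectioned outline. This is not STORM's actual code; the real system grounds its answers in web search and runs many more refinement steps, and the topic, roles, model name and prompts here are invented for illustration.

```python
# Rough sketch of a perspective-guided Q&A loop feeding an outline, loosely
# modeled on the quoted description of STORM; not the OVAL implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

TOPIC = "community-scale geothermal heating"
ROLES = ["an energy economist", "a geologist", "an urban planner"]

def chat(system: str, user: str) -> str:
    """Single chat-completion call with a given system persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

notes = []
for role in ROLES:
    # "Host" turn: each perspective raises the questions its field cares about.
    questions = chat(
        f"You are {role} interviewing a subject-matter expert.",
        f"List three questions about {TOPIC} that your field would ask.",
    )
    # "Expert" turn: answer them (the real system grounds answers in web search).
    answers = chat("You are a well-read generalist expert. Answer concisely.", questions)
    notes.append(f"Perspective: {role}\n{questions}\n{answers}")

# Final pass: fold the multi-perspective notes into a structured outline.
outline = chat(
    "You write structured, sectioned article outlines.",
    f"Draft an outline on '{TOPIC}' from these notes:\n\n" + "\n\n".join(notes),
)
print(outline)
```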

One of the concerns skeptics raise at this point in the development of these research tools is the security of prompts and results. Few are aware of the options for air-gapped or closed systems, or even of ChatGPT's temporary chats. In the case of OpenAI, you can start a temporary chat by tapping the version of ChatGPT you're using at the top of the ChatGPT app and selecting Temporary Chat. I commonly do this when using Ray's eduAI Advisor. OpenAI says that in temporary chat mode results “won't appear in history, use or create memories, or be used to train our models. For safety purposes, we may keep a copy for up to 30 days.” We can anticipate that other providers will offer these kinds of protections, which may provide adequate security for many applications.

Further security can be provided by installing a stand-alone instance of a language model and its software on an air-gapped computer that is kept completely disconnected from the internet or any other network, ensuring an unparalleled level of protection. Small and medium-size language models are providing impressive results, approaching and in some cases exceeding frontier-model performance while keeping all data local and offline. For example, last year Microsoft introduced a line of small and medium-size models:

“Microsoft’s experience shipping copilots and enabling customers to transform their businesses with generative AI using Azure AI has highlighted the growing need for different-size models across the quality-cost curve for different tasks. Small language models, like Phi-3, are especially great for:

  • Resource constrained environments including on-device and offline inference scenarios
  • Latency bound scenarios where fast response times are critical.
  • Cost constrained use cases, particularly those with simpler tasks.”
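For readers who want to try this, here is a minimal sketch of fully offline inference with a small model such as Phi-3, assuming the Hugging Face transformers library and a copy of the microsoft/Phi-3-mini-4k-instruct weights have already been transferred to the air-gapped machine; the local path, prompt and generation settings are placeholders.

```python
# Minimal sketch of offline, local inference with a small language model.
# Assumes the microsoft/Phi-3-mini-4k-instruct weights were downloaded once
# (e.g., with huggingface-cli) and copied to the air-gapped machine; the path
# below is a placeholder. Older transformers releases may also need
# trust_remote_code=True, and device_map="auto" requires the accelerate package.
import os

os.environ["HF_HUB_OFFLINE"] = "1"  # refuse any calls to the Hugging Face Hub

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/models/Phi-3-mini-4k-instruct"  # local copy; no network needed

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, device_map="auto")

messages = [{
    "role": "user",
    "content": "Summarize cross-disciplinary approaches to urban heat mitigation.",
}]

# Phi-3 is an instruct-tuned model, so apply its chat template before generating.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=300, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```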

In the near term we will see turnkey private search applications that offer even more impressive results. Work continues on rapidly improving the multidisciplinary responses these tools provide across an ever-growing number of pressing research topics.

The ever-evolving AI research tools are now providing us with responses from multiple disciplines. These results will lead us to engage in more multidisciplinary studies that will become a catalyst for change across academia. Will you begin to consider cross-discipline research studies and engage your colleagues from other fields to join you in research projects?
