
BizAI 2025 - Fri, Mar 28 to Sun, Mar 30

Testimonials for BizAI 2024

Panel: AI in Business Research

1. Prof. Ahmed Abbasi (SE, ISR)
2. Prof. Hemant Bhargava (DE, Management Science)
3. Prof. K. Sudhir (ex-EIC, Marketing Science)
4. Prof. Maytal Saar-Tsechansky (SE, MISQ)
5. Prof. Olivier Toubia (EIC, Marketing Science)
6. Prof. Suprateek Sarker (EIC, ISR)

We would like to express our gratitude to Dean Hasan Pirkul, Vice-Dean Varghese Jacob, and ISS Chair Ming Fan for their generous support in sponsoring the BizAI Conference 2024. Their commitment to advancing the field of AI in business research has made this event possible.

At BizAI 2024, we had the honor of hosting a panel discussion with distinguished experts from leading business research journals. The panelists shared their perspectives on the opportunities, challenges, and best practices associated with using AI in business research. They discussed topics ranging from domain adaptation and theoretical framing to the role of AI in writing and refining research papers and review reports. The panel provided valuable insights and guidance for researchers seeking to harness AI's power while navigating challenges and ensuring the integrity and relevance of their findings.

In the following sections, we present a detailed summary of the panel discussion, organized by the main themes and topics addressed by the panelists.

Use of AI models in research papers

Domain Adaptation and Context Focus

In AI research and applications, there has been a growing emphasis on developing and deploying general-purpose AI models [6]. Context-driven insights, however, have revealed the limitations of general-purpose AI for decision-making and other business objectives, highlighting the need for AI methods that address the challenges of specific, impactful business or societal domains [4]. Many AI research submissions nonetheless prioritize domain application over domain adaptation: they apply AI techniques to a particular domain without adapting the models to work effectively in that context [1]. When the domain adaptation is unclear or unconvincing, a submission faces increased scrutiny, especially given the capabilities of general-purpose AI [1]. Without proper adaptation, general-purpose models may not produce accurate or reliable results in specific contexts [4], so submissions that do not clearly demonstrate a compelling domain adaptation strategy may struggle to gain acceptance and recognition within the AI research community [1].

To advance AI and improve decision-making, researchers must focus on domain adaptation by gaining a deeper understanding of each domain's unique challenges and leveraging context-specific insights to create more effective and reliable AI solutions [1, 4].

Theoretical Framing

Theoretical framing helps bridge the gap between the technical aspects of AI models and the broader business and organizational contexts in which they are applied [3]. It may involve adapting existing theories from related fields or developing entirely new frameworks that capture the distinctive features and dynamics of AI-driven businesses and industries, thereby generating new knowledge about the complex relationships between AI technologies, human behavior, and organizational outcomes.

Further, it is important to recognize that a one-size-fits-all approach is not suitable for all research [6]. A spectrum of abstractions, ranging from interesting insights to formal theory, may be necessary to fully capture the value of research [1, 6]. In this context, research that finds clever and elegant ways to adapt, fine-tune, or instruction-tune AI models for a specific domain can be highly valuable. Such work can enable models to follow complex domain-specific natural language understanding (NLU) instructions, greatly enhancing the applicability and usefulness of AI models in various business contexts. However, simply reporting accuracy improvements may be insufficient if a general-purpose AI model is applied without capturing the underlying context [1, 3]. By developing effective methods for domain adaptation, researchers can bridge the gap between general-purpose AI models and the specific requirements of different business domains, ultimately contributing to the advancement of both theory and practice [1, 4, 6].
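To make the instruction-tuning idea concrete, the sketch below shows one common first step: formatting domain-specific examples into prompt/completion records. The example data, field names, and prompt template are illustrative assumptions, not a fixed standard, and real pipelines vary in the exact format they accept.

```python
# Sketch: preparing domain-specific instruction-tuning data.
# The examples, field names, and template are illustrative assumptions.

import json

# Hypothetical business-domain examples (instruction, input, desired output).
domain_examples = [
    {
        "instruction": "Classify the sentiment of this earnings-call excerpt.",
        "input": "Margins compressed, but forward guidance was raised.",
        "output": "mixed",
    },
    {
        "instruction": "Summarize the customer complaint in one sentence.",
        "input": "The invoice listed charges we never agreed to in the contract.",
        "output": "The customer disputes unauthorized invoice charges.",
    },
]

def to_training_record(example: dict) -> dict:
    """Flatten one example into a single prompt/completion pair."""
    prompt = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example['input']}\n\n"
        f"### Response:\n"
    )
    return {"prompt": prompt, "completion": example["output"]}

records = [to_training_record(ex) for ex in domain_examples]

# Emit JSONL, a format many fine-tuning pipelines accept (one record per line).
jsonl = "\n".join(json.dumps(rec) for rec in records)
print(jsonl)
```

The point of the sketch is that domain adaptation begins with encoding context (domain-specific tasks, vocabulary, and desired behavior) into the training signal, rather than relying on a general-purpose model alone.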

Use of AI for Refining Research Papers

Gen AI: A “COLLABORATOR” that is resourceful and creative but unreliable [6].

Gen AI can be a valuable tool for researchers in writing and refining research papers. It is particularly useful for improving the clarity and coherence of writing, helping researchers express their ideas more effectively [3, 6]. It can help generate ideas by identifying gaps in the literature, proposing research questions, and suggesting relevant theories; it can also assist in designing and redesigning studies, incorporating various considerations and methodologies. Moreover, generative AI can support comprehensive literature reviews and provide suggestions for supporting hypotheses. Nevertheless, researchers remain responsible for treating Gen AI's output as suggestions rather than relying on it completely [5, 6].

Use of AI for Refining Review Reports

Gen AI: A second pair of eyes [6].

Generative AI can assist reviewers in evaluating research papers by assessing clarity, coherence, and formatting, summarizing key points, identifying gaps in the literature, evaluating theory applicability, and critiquing the development of hypotheses and discussions [5, 6]. This can help reviewers provide more comprehensive and constructive feedback, streamlining the review process and contributing to the overall quality of published research. However, it is crucial to recognize that generative AI can make mistakes, and reviewers are ultimately responsible for ascertaining the correctness and validity of the AI's suggestions. Reviewers should use generative AI as a supportive tool while exercising their own judgment and expertise to ensure the integrity of the peer-review process [5, 6].
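The review-assist workflow described above can be sketched as a prompt-construction step. The checklist wording and the function name below are hypothetical; the resulting prompt would be sent to whichever LLM service the reviewer uses, with the reviewer retaining final judgment.

```python
# Sketch: assembling a structured review-assist prompt for a Gen AI model.
# The checklist items and wording are illustrative assumptions.

REVIEW_CHECKLIST = [
    "Assess clarity, coherence, and formatting.",
    "Summarize the key points and contributions.",
    "Identify gaps in the literature review.",
    "Evaluate whether the chosen theory fits the research questions.",
    "Critique the development of hypotheses and the discussion.",
]

def build_review_prompt(manuscript_text: str) -> str:
    """Combine the review checklist and manuscript into one prompt."""
    tasks = "\n".join(
        f"{i}. {item}" for i, item in enumerate(REVIEW_CHECKLIST, 1)
    )
    return (
        "You are assisting a peer reviewer. The reviewer, not you, makes the "
        "final judgment; flag uncertainty rather than guessing.\n\n"
        f"Tasks:\n{tasks}\n\n"
        f"Manuscript:\n{manuscript_text}"
    )

prompt = build_review_prompt("We propose a framework for ...")
print(prompt)
```

Structuring the request as an explicit checklist keeps the AI's role bounded to the evaluation criteria the panel described, which makes its output easier for the reviewer to verify.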

Authors:
Prof. Rohit Aggarwal
Prof. Harpreet Singh

Opinion: Teaching AI & Critical Thinking

The opinions expressed herein are derived from our research and our own experiences in:

  • developing a few AI Agents,
  • observing student engagement across different variations of our AI classes,
  • engaging in discussions within AI committees and with attendees of BizAI 2024.

The Growing Importance of Critical Thinking in the AI Era

In the new era of artificial intelligence (AI) and large language models (LLMs), critical thinking skills have become more important than ever before. A 2022 survey by the World Economic Forum revealed that 78% of executives believed critical thinking would be a top-three skill for employees in the next five years, up from 56% in 2020 [1]. As AI systems become more advanced and capable of performing a wide range of tasks, it is crucial for humans to develop and maintain strong critical thinking abilities to effectively leverage these tools and make informed decisions.

The Necessity of Human Insight for Effective AI Utilization