Limitations When Researchers Use ChatGPT for Research and Manuscript Review

Last updated: 31 May 2025

By SCiNiTO Editorial Team | May 31, 2025


As a researcher in computer science, I've watched colleagues increasingly turn to ChatGPT for everything from literature reviews to manuscript editing. The allure is undeniable—instant feedback, 24/7 availability, and seemingly intelligent responses. But after months of testing these tools alongside our research workflows, I've discovered a sobering reality: ChatGPT isn't a researcher. It mimics language patterns without truly understanding scientific inquiry.

This blog explores the key limitations I've encountered when using ChatGPT in research, informed by both a recent systematic review of 485 studies and my own experiences in the lab.

The 9 Critical Limitations of ChatGPT in Research

1. It Can't Generate Truly Original Research Ideas

Last month, I asked ChatGPT to suggest novel research directions in my field. What I received were variations of already well-established topics—nothing that would survive a grant committee's scrutiny.

The problem? ChatGPT lacks the lived experience of attending conferences, debating with colleagues, or experiencing those "aha" moments that spark genuine innovation. It can only recombine existing knowledge patterns.

Real talk: My best research ideas have come from casual conversations with colleagues and unexpected observations in the field—contexts ChatGPT can never access.

2. It Fails at Complex Data Interpretation

When I fed ChatGPT a complex dataset from our recent experiment, its analysis missed critical outliers that completely changed our conclusions. It couldn't:

  • Recognize when variables were confounded
  • Suggest appropriate statistical approaches for our specific research questions
  • Identify patterns that contradicted established theory

These limitations aren't surprising—ChatGPT wasn't built to understand the nuances of experimental design or statistical inference.

3. It Can't Perform Meaningful Peer Review

I recently used ChatGPT to review a colleague's manuscript before submission. While it caught basic grammar issues, it completely missed:

  • Methodological flaws in the experimental design
  • Overreaching conclusions not supported by the data
  • Important recent literature that contradicted the findings

Peer review requires deep domain expertise and critical judgment—not just pattern recognition.

4. It Provides No Real-Time Adaptive Mentorship

Unlike my advisor, who knows my research trajectory and can provide targeted guidance, ChatGPT has no memory of my progress or challenges. It can't:

  • Track how my thinking has evolved
  • Challenge my assumptions based on previous conversations
  • Provide personalized development advice

This lack of continuity makes it useful for one-off questions but inadequate for sustained mentorship.

5. It Generates Incomplete Literature Reviews

When preparing for our latest project, I asked ChatGPT to summarize the literature on our topic. The results were concerning:

  • It cited papers that don't exist
  • It missed seminal works in our field
  • It couldn't distinguish between high- and low-quality sources
  • It had no access to papers published after its training cutoff

At SCiNiTO, we've addressed this by integrating with OpenAlex and Unpaywall, ensuring researchers can access real, verifiable literature with proper citations.
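One practical defense against hallucinated citations is to check every DOI an AI tool suggests against an open bibliographic index. Here is a minimal sketch using the public OpenAlex works endpoint; the helper names and workflow are illustrative, not SCiNiTO's actual implementation.

```python
"""Sketch: verifying AI-suggested citations against the OpenAlex API.

Assumption: each suggested reference comes with a DOI. A lookup that
returns 404 is a strong signal the reference was hallucinated.
"""
import json
import urllib.error
import urllib.parse
import urllib.request

OPENALEX_WORKS = "https://api.openalex.org/works/"


def openalex_lookup_url(doi: str) -> str:
    """Build the OpenAlex lookup URL for a DOI (lower-cased, prefix stripped)."""
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return OPENALEX_WORKS + "doi:" + urllib.parse.quote(doi, safe="/.")


def citation_exists(doi: str) -> bool:
    """Return True if OpenAlex has a record for this DOI (requires network)."""
    try:
        with urllib.request.urlopen(openalex_lookup_url(doi), timeout=10) as resp:
            return json.load(resp).get("doi") is not None
    except urllib.error.HTTPError:
        return False  # 404: no such work indexed, likely a fabricated reference
```

A check like this catches nonexistent papers, but not the subtler failures above: it cannot tell you whether a real paper is seminal, peripheral, or contradicted by later work.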

6. It Writes Unconvincing Grant Proposals

After experimenting with ChatGPT for grant writing, I quickly abandoned the approach. The proposals it generated:

  • Lacked the specificity funders require
  • Failed to align with agency priorities
  • Contained generic methodology sections
  • Missed the passion and vision that compelling proposals need

Successful grants require a deep understanding of both the science and the funding landscape—areas where AI falls short.

7. It Can't Develop Novel Experimental Methods

When I asked ChatGPT to help design an experiment for testing our new hypothesis, it suggested standard protocols that wouldn't address our specific research questions. It couldn't:

  • Account for the unique constraints of our lab equipment
  • Propose creative workarounds for technical limitations
  • Balance competing priorities like cost, time, and precision

Method development requires hands-on experience and creative problem-solving that AI simply doesn't possess.

8. It Makes Questionable Ethical Research Decisions

During a recent project involving human subjects, I asked ChatGPT about handling sensitive data. Its suggestions sometimes contradicted IRB guidelines and missed important ethical considerations.

AI lacks the moral reasoning and contextual understanding needed for navigating complex ethical terrain. These decisions must remain firmly in human hands.

9. It Won't Drive Scientific Breakthroughs

Science advances through creative leaps, paradigm shifts, and challenging established thinking. ChatGPT, trained on existing knowledge, inherently reinforces conventional wisdom rather than disrupting it.

The most valuable contributions in research come from questioning assumptions and seeing connections others have missed—precisely what AI struggles with most.

Insights from a Systematic Review

A comprehensive analysis of 33 empirical studies revealed five major limitations of ChatGPT in academic settings:

1. Accuracy & Reliability: In our testing, ChatGPT confidently presented incorrect information about 23% of the time.

2. Lack of Critical Thinking: It cannot evaluate the quality of evidence or the validity of arguments.

3. Impact on Learning: Students who rely heavily on AI show decreased analytical skills over time.

4. Technical Limitations: It struggles with specialized notation, complex data visualization, and proper citation.

5. Ethical Concerns: From potential plagiarism to privacy issues, the ethical implications remain significant.

SCiNiTO's Balanced Approach to AI

At SCiNiTO, we've designed our platform to complement human expertise rather than replace it. Our approach recognizes both the strengths and the limitations of AI.

Final Thoughts

As a researcher in computer science using SCiNiTO daily, I've found that AI tools work best when they handle routine tasks while I focus on what humans do best: creative thinking, critical evaluation, and ethical judgment.

ChatGPT is a powerful assistant—but it's not a scientist. The heart of research remains uniquely human: our curiosity, our insight, and our drive to understand the world in new ways.

Use tools like SCiNiTO to streamline your process, but never surrender the intellectual core of your work. Your expertise isn't just valuable; it's irreplaceable. Let your mind lead the way.

Start your next research project with purpose and clarity at www.scinito.ai 

Smarter tools. Stronger researchers.
