Using LLMs effectively for academic work

This summary reflects my experience with LLMs over the past several months. I have interacted with Gemini (formerly Bard), ChatGPT (including version 4.5), Perplexity, Claude, and Grok 3, along with some use of DeepSeek, Mistral, AllenAI, Leonardo, Midjourney and Copilot.

Every LLM has its strengths and weaknesses, and no single LLM is best at everything for your academic needs. However, their capabilities and functionality are changing fast and often.

•Search: Perplexity leads for research; Gemini excels with Google integration. ChatGPT's Deep Search is excellent in my opinion; Grok DeepSearch is also good.
•Hallucination: Claude carries the lowest risk; Perplexity and Grok are also low.
•Reliability: Claude for logic; Perplexity for facts.
•Deep Search: Perplexity offers it free; Grok, ChatGPT and Gemini shine in paid tiers.
•Image Generation: ChatGPT (DALL-E 3) and Gemini lead in generating stock images. Claude is good at producing charts and graphics for academic purposes; ChatGPT can refine academic images but is often unreliable.
•Coding: Claude and DeepSeek are top-tier.
•Video: Gemini has this capability.
| Aspect | ChatGPT | Perplexity | Grok | Claude | DeepSeek | Gemini |
| Search | Paid only, decent | Best, real-time | Real-time, X (Twitter) focus | Basic | Good, technical | Excellent, Google |
| Risk of Hallucination | Moderate | Low | Low | Very Low | Moderate | Moderate |
| Reliability | High, general | High, research | High, technical | Very High, logic | High, STEM | High, multimodal |
| Deep Search (Free) | No | Yes | Limited | No | Yes, limited | Yes |
| Deep Search (Paid) | Yes (Pro) | Yes (Pro) | Yes (Premium+) | No | Yes | Yes (Advanced) |
| Image Generation | Yes (Paid) | No | Yes (Paid) | No | No | Yes (Paid) |
| Coding Capability | Strong | Decent | Strong | Exceptional | Exceptional | Strong |
| Video Generation | No | No | No | No | No | Yes (Paid) |

You can use LLMs for:

•searching literature,
•generating ideas,
•writing blog posts,
•writing social media posts,
•generating a structure for your article,
•reviewing your paper or correcting spelling, grammar and structure,
•summarising,
•statistical analysis,
•critically appraising a paper,
•writing an article (you must be careful about generating the whole article for a paper),
•generating images (a stock image, graph, table, diagram, or just an absurd image for fun),
•testing your understanding of a subject by conversing with it,
•generating a podcast,
•generating code,
•generating a short video.

How can you use LLMs safely?

•Do not upload confidential data (for example: no patient identifiable data or business intelligence/business critical data)
•Follow your organisation’s AI/data policy or your National policy
Although LLM use is often considered sessional (the information you upload is not retained after you end the conversation), it might still be reviewed by someone to train the LLM.
Also, LLM companies can change their policies at any time, leading to data retention.
There is also some possibility of data retention if you are using services like ChatGPT scheduled tasks or Perplexity Spaces.
•Use your prompts effectively (more on this later)
•Cross-check the output (text and references)
There is always a risk of hallucination (the LLM producing information or references that do not exist). Unless you are producing imaginative work (such as writing a story), it is important to check the information.
Any academic work must be checked for its validity. I find expert knowledge is important here: if you are not an expert in that field, you risk missing the hallucinations.
Appropriate prompts can help to some extent, but cross-checking is still important.
•Check the AI use policy of the organisation
•Declare the scope of use of AI in your work
If you are using AI/LLM for serious work (preparing a journal article or business report) your organisation or the publisher may want to know the extent of the use of the AI. Always check what the scope is and comply.
•Beware of the guardrails
•Refrain from using AI/LLMs for offensive or illegal purposes

Using prompts effectively for an academic purpose

  1. Detailed and Explicit Prompts:
  • There is no such thing as a prompt that's "too long." If you have to use a long prompt to get an answer you want, go for it.
  • Include context, specific tasks, intended outcomes, and any constraints clearly.

2. Assign a Role or Character to the LLM:

  • Clearly define what role the LLM should adopt (e.g., "senior consultant," "editor of a journal," "statistician," "historian"). This significantly influences the nature and depth of responses.
  • Example: If you are asking the LLM to review an article, write "Assume you are an editor of a reputed medical journal evaluating a research article. Review this article: {name}."
  • If you ask the LLM "You are Napoleon Bonaparte; analyse the Battle of Gaugamela", you will get a different answer than if you ask "You are William Shakespeare; describe the Battle of Gaugamela".

As Napoleon Bonaparte, I would analyse the Battle of Gaugamela thus:

The Battle of Gaugamela (331 BC) epitomises Alexander the Great’s brilliance in battlefield strategy, a masterpiece of tactical flexibility and psychological dominance. Alexander's deliberate use of the oblique order, combined with disciplined troops and precisely executed maneuvers, showcases his strategic genius. His deliberate weakening of the left flank to entice Darius III into committing heavily, thereby creating vulnerabilities in the Persian center, exemplifies masterful deception.

Alexander’s bold cavalry charge, precisely timed and directed straight at the heart of Darius' formation, demonstrated courage and calculated risk-taking—principles essential to decisive victories. Darius's inability to adapt to rapidly changing battlefield conditions and his premature retreat signified a failure in leadership and the psychological collapse of Persian morale.

From my own perspective, victory demands decisiveness, clear communication, and exploitation of the enemy’s psychological weaknesses—all expertly demonstrated by Alexander. Had I faced Darius at Gaugamela, my approach would have resonated with Alexander’s methods: breaking enemy cohesion, maintaining initiative, and achieving decisive victory through audacious yet calculated maneuvers. Alexander achieved a model of strategic victory that military leaders should study for centuries hence.

As William Shakespeare, my depiction of the Battle of Gaugamela would unfold thus:

Upon the plains of Gaugamela wide,
Two mighty hosts in fateful clash abide.
Great Alexander, fierce with youthful fire,
Met Darius, crowned by desperate desire.

A sea of steel and courage brightly gleamed,
As spears like stars against the sunlight streamed.
Alexander, bold, with eagle’s eyes did see
The Persian weakness—fate’s decree.

He charged with valour's flame, like tempest wind,
A sharpen’d wedge, as Destiny had skinned
The fabric of the Persian king’s array;
The centre shattered, courage fell away.

The royal Darius fled the grievous sight;
His armies broken, routed from the fight.
Thus Fortune favoured daring hearts that day,
And Alexander claimed immortal sway.

This battlefield, where glory spilt its flood,
Is ink’d forever in the scrolls of blood.
O Gaugamela! Name remembered long,
Where pride did fall, and heroes’ deeds belong.

3. Demand factual accuracy explicitly:

  • LLMs tend toward generating coherent narratives, occasionally at the expense of factual accuracy. Explicitly instruct the LLM to adhere strictly to factual information.
  • LLMs are reluctant to say “no” unless you give it permission to do so.
  • Tell it: "Provide answers based solely on factual information. Avoid generalisations, assumptions, or extrapolations beyond explicitly stated or verifiable data. If uncertain, explicitly state uncertainty, and if information is not available, say not available."

4. Set clear boundaries and constraints:

Specify clearly what kind of resources and guidelines to use or avoid. For instance:

  • “Only cite peer-reviewed journal articles published within the last 10 years, prioritising systematic reviews and meta-analyses.”
  • “Primarily use UK guidelines (e.g., NICE, BIA, HIS, BASHH, BHIVA). If no UK guidelines are available, explicitly indicate this and cite European, US, or WHO guidelines.”
  • “Explicitly state when using non-peer-reviewed or unofficial sources, clearly indicating their limitations.”

5. Ensure rigorous citation and verification:

Be extra careful about citations, which are a frequent area of inaccuracies or hallucinations. Explicitly instruct the LLM to verify citations and clearly communicate citation details:

  • “List citations clearly with authors, title, journal/guideline name, year, DOI or URL. Explicitly confirm that the cited information is accurately located within the provided source (mention page number, section, or heading).”
  • “If citation verification fails or is uncertain, explicitly flag this with statements like ‘Citation existence not confirmed,’ ‘Information not found in cited source,’ or ‘Citation accessibility issue.’”

6. Provide a clear structure for answers:

Offer a detailed structure or framework for responses. This significantly improves consistency, completeness, and readability. Example:

  • “When describing a bacterial species, provide information under the following structured headings: Classification, Virulence Factors, Laboratory Characteristics (Gram stain, culture media, biochemical tests), Clinical Presentations, Differential Diagnoses, Investigations, and Management/Treatment.”

7. Iterative refinement (Ask the LLM for Help):

If unsure, first ask the LLM to propose an initial outline, structure, or scope for your topic. Example:

  • “Suggest a comprehensive outline covering all relevant aspects of bacterial pathogens. Refine this outline based on my feedback.”

8. Precision and language style:

Specify the desired tone and style of writing (e.g., formal academic tone, concise and technical, detailed yet concise). Example:

  • "Write in a formal academic style, concise yet detailed, appropriate for postgraduate medical students."
  • Or you can ask for a guideline style, a short summary, a three-line summary, or language suitable for a lay person.

9. Start a conversation:

  • An LLM, although not human, is remarkably humanlike in its conversational capability. Do not think of it as a machine; treat it as a person who is willing to help you and will never be angry if you ask the same thing hundreds of times (though you can go over your usage limit).
  • Start a conversation. If you do not like a part of the answer, say so. Ask for refinements. If you like a part, appreciate it. It will take these interactions on board and adjust its answers accordingly. However, remember its memory is sessional: as soon as you close the conversation, these small modifications will be lost.

10. Make a list of common prompts and store them:

•Store your common prompts in a separate place.

  • As the LLM's memory is sessional, you can make a big prompt to set the tone of the conversation and then start your interaction.
  • Your starting line could be something like – “You are a consultant in the NHS involved in patient care and teaching. You look for answers based solely on factual information. Avoid generalisations, assumptions, or extrapolations beyond explicitly stated or verifiable data. If uncertain, explicitly state uncertainty, and if information is not available, say not available….”
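The stored-prompt idea above can be made concrete with a few lines of code. Below is a minimal sketch, assuming you keep your fragments as plain strings; all names here (ROLE, CONSTRAINTS, build_opening_prompt) are hypothetical and not part of any LLM vendor's API — the result is simply pasted in (or sent as a system message) at the start of each new session.

```python
# Hypothetical helpers for storing and composing reusable prompt fragments.
# Nothing here calls an LLM; it only builds the tone-setting opening message.

ROLE = "You are a consultant in the NHS involved in patient care and teaching."

CONSTRAINTS = (
    "Provide answers based solely on factual information. "
    "Avoid generalisations, assumptions, or extrapolations beyond "
    "explicitly stated or verifiable data. If uncertain, explicitly state "
    "uncertainty, and if information is not available, say not available."
)

def build_opening_prompt(*fragments: str) -> str:
    """Join stored prompt fragments into a single opening message."""
    return " ".join(fragments)

# Compose the opening message once, then reuse it in every new conversation.
opening = build_opening_prompt(ROLE, CONSTRAINTS)
```

Keeping the role and the factual-accuracy constraints as separate fragments means you can mix and match them per task (for example, swapping the role for "editor of a reputed medical journal" while keeping the same constraints).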

Invite LLMs into all your workflows (as long as it is safe to do so).
