This page documents the changes made to reduce friction for real site conditions: mixed devices, variable connectivity, and multilingual readability.
Design goal: Fast to read, easy to complete
Device reality: Mobile-first, desktop clean
Language handling: Comfort-first, no switching pain
Help approach: Short answers + expandable detail
The survey form has undergone multiple iterations to make the experience as good as possible for everyone who responds. The original LimeSurvey build was roughly what Google Forms or Microsoft Forms would have produced; commercial survey platforms would no doubt have offered a more polished experience out of the box.
I strongly recommend that any university or academic survey project explore the open-source options; for medical research, Vanderbilt University's REDCap is worth a look. There are excellent tools and platforms waiting to be adopted and adapted.
This article shares some of the customisation achieved.
1) Custom CSS for a consistent experience
The landing page and supporting pages use a single scoped style system to keep fonts, spacing, and colour behaviour consistent across devices and across languages.
Note: CSS, or Cascading Style Sheets, is a set of styling rules that controls how websites and web apps look and behave in the browser: fonts, spacing, alignment, and the like. Any developer or organisation can use CSS to keep documents consistent across devices.
Cards: Clear separation of sections without heavy borders.
Responsive: Three columns on desktop, natural stacking on mobiles.
Dark mode: Automatic, based on system preference.
Indic scripts: The font stack supports Devanagari, Tamil, Telugu, and Kannada.
Note: CSS is scoped (e.g., .ehub-survey) to prevent unwanted styling across the site. This keeps other pages independent while maintaining consistency where needed.
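To make this concrete, here is a minimal sketch of the approach. Only the .ehub-survey scope class comes from the note above; every other class name, colour, and breakpoint is illustrative rather than the production stylesheet.

```html
<!-- Minimal sketch of the scoped style system. Only .ehub-survey is
     from the article; other class names, colours, and the breakpoint
     are illustrative. -->
<style>
  /* Font stack with Indic coverage: Devanagari, Tamil, Telugu, Kannada */
  .ehub-survey {
    font-family: system-ui, "Noto Sans", "Noto Sans Devanagari",
      "Noto Sans Tamil", "Noto Sans Telugu", "Noto Sans Kannada", sans-serif;
    line-height: 1.6;
    color: #222;
  }

  /* Cards: separation through spacing and soft backgrounds, no heavy borders */
  .ehub-survey .card {
    background: #f6f6f6;
    border-radius: 10px;
    padding: 1rem;
  }

  /* Responsive: stack on phones, three columns on wide screens */
  .ehub-survey .cards {
    display: grid;
    gap: 1rem;
    grid-template-columns: 1fr;
  }
  @media (min-width: 900px) {
    .ehub-survey .cards { grid-template-columns: repeat(3, 1fr); }
  }

  /* Dark mode: follow the system preference automatically */
  @media (prefers-color-scheme: dark) {
    .ehub-survey { color: #e8e8e8; }
    .ehub-survey .card { background: #1e1e1e; }
  }
</style>

<div class="ehub-survey">
  <div class="cards">
    <div class="card">Design goal: fast to read, easy to complete</div>
    <div class="card">Device reality: mobile-first, desktop clean</div>
    <div class="card">Language handling: comfort-first</div>
  </div>
</div>
```

Because everything hangs off the one wrapper class, the survey styling can evolve freely without touching the rest of the site.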
2) Updated help system with minimal scrolling
The help approach is deliberately “short-first”: the user sees the minimum needed to proceed, with optional detail on demand. This reduces screen fatigue and is friendlier on phones.
One-line guidance shown immediately.
Examples only when needed (not everywhere).
Expandable details for definitions and edge cases (sketched below).
Consistent placement so users learn where to look.
Field conditions drive this: gloves, dust, glare, and movement make long scrolling error-prone.
Many respondents use budget Android phones with small screens.
Time pressure is real; the UI must respect it.
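As a minimal sketch of the short-first pattern, the browser-native details element gives expandable detail with no JavaScript; the question wording, hint text, and class names here are illustrative, not the production markup.

```html
<!-- Short-first help: one line of guidance is always visible, longer
     detail opens on demand via the native <details> element.
     Question wording and class names are illustrative. -->
<div class="ehub-survey">
  <label for="workers">How many workers were on site today?</label>
  <p class="hint">Count everyone who did site work, including subcontractors.</p>
  <details>
    <summary>More detail</summary>
    <p>Include labour supplied through agencies. Exclude visitors,
       inspectors, and delivery staff who did no site work.</p>
  </details>
  <input id="workers" type="number" inputmode="numeric" min="0">
</div>
```

The one-line hint is always visible; the details block collapses by default, so a phone screen shows only what is needed to proceed.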
3) Multilingual survey designed for reading comfort
The survey is multilingual, but the aim is not just translation. It’s reading comfort: clear typography for Indic scripts, and predictable placement of language controls so users don’t hunt for them.
Users pick their comfortable language once and continue without interruption.
Key instructions are phrased simply, avoiding academic wording.
When needed, bilingual hints can appear inline without forcing a page switch.
Example (inline language comfort):
Instruction: Please answer based on your recent real work experience.
निर्देश: कृपया अपने हालिया वास्तविक कार्य अनुभव के आधार पर उत्तर दें।
வழிமுறை: உங்கள் சமீபத்திய உண்மை பணியனுபவத்தை அடிப்படையாகக் கொண்டு பதிலளிக்கவும்.
సూచన: దయచేసి మీ ఇటీవలి నిజమైన పని అనుభవం ఆధారంగా సమాధానం ఇవ్వండి.
ಸೂಚನೆ: ದಯವಿಟ್ಟು ನಿಮ್ಮ ಇತ್ತೀಚಿನ ನೈಜ ಕೆಲಸದ ಅನುಭವವನ್ನು ಆಧರಿಸಿ ಉತ್ತರಿಸಿ.
This pattern can be used selectively for difficult terms, without forcing users to switch the whole site language.
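A sketch of how such an inline hint might be marked up, assuming the same scoped styles; the class names are illustrative, and the lang attributes help the browser choose appropriate fonts for each script.

```html
<!-- Inline bilingual hint: the chosen language stays primary, a second
     language appears beneath it without a page switch. Class names are
     illustrative; the Hindi line mirrors the example above. -->
<p class="instruction" lang="en">
  Please answer based on your recent real work experience.
</p>
<p class="instruction-hint" lang="hi">
  कृपया अपने हालिया वास्तविक कार्य अनुभव के आधार पर उत्तर दें।
</p>
```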
4) Informed by field observation, not guesswork
Many design choices come from on-site observation: how people actually fill forms, where they pause, and what makes them abandon. The UI is designed to fit that reality.
People prefer short prompts and familiar wording.
Long “instruction paragraphs” get skipped.
Ambiguous questions cause random clicking just to move forward.
When one person fills on behalf of others, clarity matters even more.
Practical focus: if a question is hard to answer on a real site, the question needs improvement—blaming respondents is pointless.
5) Thanks to everyone who shaped the experience
This interface exists because many people gave time, patience, and honest feedback. Field teams, practitioners, and early respondents helped identify what works and what doesn’t.
Site professionals who shared day-to-day constraints and workflows.
Supervisors and coordinators who facilitated access during visits.
Colleagues who reviewed questions for clarity and bias.
Everyone who flagged confusing terms, awkward translations, or missing options.