Transparent Peer Review By Scholar9

VIDEO TO VIDEO TRANSLATION USING MBART MODEL

Abstract

Many languages are spoken in India owing to its regional and cultural diversity, which makes it difficult for many people to understand global languages such as English, Spanish, French, or German. This paper therefore aims to translate one of these global languages, English, into a regional language such as Tamil. The system takes a YouTube URL as input, where the video is in English, saves the video, and applies machine-learning libraries and models such as gTTS, the Whisper model, and mBART-50. Through these it performs audio extraction, speech-to-text conversion, text translation, and text-to-speech synthesis. By integrating language translation with audio synthesis, the system helps break down linguistic barriers.
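
The pipeline can be sketched roughly as follows. This is a minimal illustration rather than the exact implementation: it assumes the yt-dlp package for downloading, the openai-whisper "base" checkpoint, the facebook/mbart-large-50-many-to-many-mmt checkpoint from Hugging Face transformers, and gTTS, with hypothetical file names.

    # Minimal sketch of the described pipeline (assumed libraries: yt-dlp,
    # openai-whisper, transformers, gTTS); not the authors' implementation.
    import yt_dlp
    import whisper
    from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
    from gtts import gTTS

    def translate_video(url: str) -> None:
        # 1. Audio extraction: download the YouTube audio track to a local file.
        opts = {"format": "bestaudio/best", "outtmpl": "input_audio.%(ext)s"}
        with yt_dlp.YoutubeDL(opts) as ydl:
            info = ydl.extract_info(url, download=True)
            audio_path = ydl.prepare_filename(info)

        # 2. Speech-to-text: transcribe the English audio with Whisper.
        asr_model = whisper.load_model("base")
        english_text = asr_model.transcribe(audio_path)["text"]

        # 3. Text translation: English -> Tamil with mBART-50.
        #    (A long transcript would need to be split into chunks; it is
        #    simply truncated here for brevity.)
        name = "facebook/mbart-large-50-many-to-many-mmt"
        tokenizer = MBart50TokenizerFast.from_pretrained(name)
        mt_model = MBartForConditionalGeneration.from_pretrained(name)
        tokenizer.src_lang = "en_XX"
        inputs = tokenizer(english_text, return_tensors="pt", truncation=True)
        generated = mt_model.generate(
            **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ta_IN"]
        )
        tamil_text = tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

        # 4. Text-to-speech: synthesise Tamil audio with gTTS.
        gTTS(tamil_text, lang="ta").save("output_tamil.mp3")

    # translate_video("https://www.youtube.com/watch?v=<video-id>")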

Balaji Govindarajan (Reviewer)

Review Request Accepted

15 Oct 2024 05:14 PM

Approved

Relevance and Originality

This research addresses a critical need in India—a country characterized by linguistic diversity—by focusing on translating global languages like English into regional languages such as Tamil. Given the challenges that non-English speakers face in accessing content in English, this paper's focus on leveraging technology to bridge these linguistic gaps is highly relevant. The originality lies in the integration of various Machine Learning (ML) libraries, such as gTTS, Whisper, and M-BART50, to develop a comprehensive solution for language translation and audio synthesis. By utilizing platforms like YouTube as a source of content, the research presents a unique approach to real-time translation, enhancing its applicability in everyday scenarios.


Methodology

The methodology outlined in the paper is well-structured, detailing the step-by-step process involved in transforming English video content into Tamil audio. Starting from audio extraction to speech-to-text conversion, translation, and text-to-speech synthesis, each stage of the workflow is clearly defined. The use of specific ML libraries adds credibility to the methodology. However, the paper could benefit from a more detailed explanation of the algorithms and models used, including their selection criteria, performance metrics, and any preprocessing steps involved in handling audio and text data. A discussion on the challenges encountered during implementation and how they were addressed would also enhance the methodological rigor.
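
For instance, the paper does not say how a full-length transcript is fed to mBART-50, whose encoder accepts at most 1024 tokens; a common approach (assumed here, not confirmed by the paper) is to split the transcript into sentences and translate them in batches, along the lines of the following sketch.

    # Hypothetical sentence-level chunking before mBART-50 translation; the
    # paper does not state how long transcripts are handled, so this is an
    # assumption rather than the authors' method.
    import re
    from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

    name = "facebook/mbart-large-50-many-to-many-mmt"
    tokenizer = MBart50TokenizerFast.from_pretrained(name)
    model = MBartForConditionalGeneration.from_pretrained(name)
    tokenizer.src_lang = "en_XX"

    def translate_transcript(transcript: str, batch_size: int = 8) -> str:
        # Naive sentence splitting on end-of-sentence punctuation.
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", transcript) if s.strip()]
        tamil_sentences = []
        for i in range(0, len(sentences), batch_size):
            batch = sentences[i : i + batch_size]
            inputs = tokenizer(batch, return_tensors="pt", padding=True, truncation=True)
            outputs = model.generate(
                **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ta_IN"]
            )
            tamil_sentences.extend(
                tokenizer.batch_decode(outputs, skip_special_tokens=True)
            )
        return " ".join(tamil_sentences)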


Validity & Reliability

The validity of the research is reinforced by the choice of well-established ML libraries and models, which are recognized for their efficacy in natural language processing and speech synthesis. However, to bolster the reliability of the findings, the paper should include empirical results showcasing the translation accuracy and audio quality achieved through the implemented models. A comparative analysis with existing translation tools could further validate the effectiveness of the proposed approach. Additionally, considerations of language nuances and regional dialects in Tamil should be addressed to ensure that the translations are contextually appropriate.
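
As a concrete example of such empirical validation, translation quality is typically reported as corpus-level BLEU and chrF against human reference translations, which the sacrebleu package computes directly; the sketch below uses placeholder strings rather than data from the paper.

    # Illustrative quality check with sacrebleu; the reference translations
    # would have to be produced by Tamil speakers, and the strings here are
    # placeholders only.
    import sacrebleu

    system_outputs = ["<model Tamil output 1>", "<model Tamil output 2>"]
    references = [["<human Tamil reference 1>", "<human Tamil reference 2>"]]

    bleu = sacrebleu.corpus_bleu(system_outputs, references)
    chrf = sacrebleu.corpus_chrf(system_outputs, references)
    print(f"BLEU: {bleu.score:.1f}  chrF: {chrf.score:.1f}")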


Clarity and Structure

The paper is generally well-structured, with a logical flow from problem identification to proposed solutions. Each section is clearly labeled, making it easy for readers to follow the progression of the research. However, certain technical terms related to machine learning and natural language processing may require further explanation to ensure accessibility for a broader audience. Including diagrams or flowcharts to visually represent the workflow of the translation process could enhance clarity and comprehension, making the complex steps involved in the methodology more digestible.


Results and Analysis

The result analysis section should provide detailed insights into the outcomes of the translation and synthesis processes. While the paper mentions the integration of various technologies, it lacks concrete data demonstrating the effectiveness of the translations and audio outputs. Presenting metrics such as translation accuracy, audio clarity, and user satisfaction ratings would provide a more comprehensive understanding of the project's impact. Furthermore, discussing potential limitations of the models used—such as challenges in recognizing accents, idiomatic expressions, or context—would present a balanced view. Finally, recommendations for future research, such as incorporating additional regional languages or improving model accuracy, would be beneficial in framing the ongoing exploration of language translation technology in diverse linguistic contexts.
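
As one concrete metric the authors could report, the accuracy of the Whisper transcription stage is usually measured as word error rate (WER) against a human transcript; the jiwer package computes it directly, as in the placeholder sketch below.

    # Word error rate for the speech-to-text stage, computed with jiwer; the
    # reference transcript would come from a human annotator (placeholder here).
    from jiwer import wer

    reference = "the quick brown fox jumps over the lazy dog"
    hypothesis = "the quick brown fox jumped over the lazy dog"

    error_rate = wer(reference, hypothesis)
    print(f"WER: {error_rate:.2%}")  # fraction of words substituted, inserted, or deleted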

IJ Publication (Publisher): ok sir

Publisher: IJ Publication

Reviewer: Balaji Govindarajan

Paper Category: Computer Engineering

Journal Name: JETIR - Journal of Emerging Technologies and Innovative Research

p-ISSN:

e-ISSN: 2349-5162
