Sarath0x8f committed
Commit d13449f · verified · 1 Parent(s): 261f4cb

Update markdown.py

Files changed (1):
  1. markdown.py +3 -3
markdown.py CHANGED
@@ -2,7 +2,7 @@ description = '''
  <link href="https://fonts.googleapis.com/css2?family=Roboto+Mono&display=swap" rel="stylesheet">
 
  <div style="font-family: 'Roboto Mono'; 'monospace'; font-weight: 400; line-height: 1.6; font-size: 18px; text-align: justify; text-justify: inter-word;">
- The <strong>Linguistic Lens Application</strong> is an intelligent, user-friendly platform designed to bridge language barriers by combining Optical Character Recognition (OCR), translation, and text-to-speech (TTS) functionalities. Built with a combination of Gradio for the interface, <code>PaddleOCR</code> for text extraction, <code>GoogleTranslator</code> for translation, and <code>Google Text-to-Speech (gTTS)</code> for audio conversion, the application provides a seamless experience for users needing real-time text translation from images with audio playback support. Users start by uploading an image, which the system processes to extract text using PaddleOCR. This extracted text is then translated into a selected language via GoogleTranslator using the deep_translator library, supporting a wide array of global languages. The translated text is subsequently converted into audio using gTTS, allowing users to listen to the translation in the chosen language. This multi-component design enables a comprehensive service flow where user inputs are transformed into text, translated, and delivered as both written and spoken output, making the application a robust tool for users needing on-the-go linguistic assistance. By modularizing the OCR, translation, and TTS functions, the application ensures scalability, maintainability, and ease of integration with other services, making it an ideal solution for enhancing accessibility and communication in diverse, multilingual environments.
+ The <strong>Linguistic Lens Application</strong> is an intelligent, user-friendly platform designed to bridge language barriers by combining Optical Character Recognition (OCR), translation, and text-to-speech (TTS) functionalities. Built with a combination of Gradio for the interface, <code1>PaddleOCR</code1> for text extraction, <code1>GoogleTranslator</code1> for translation, and <code1>Google Text-to-Speech (gTTS)</code1> for audio conversion, the application provides a seamless experience for users needing real-time text translation from images with audio playback support. Users start by uploading an image, which the system processes to extract text using PaddleOCR. This extracted text is then translated into a selected language via GoogleTranslator using the deep_translator library, supporting a wide array of global languages. The translated text is subsequently converted into audio using gTTS, allowing users to listen to the translation in the chosen language. This multi-component design enables a comprehensive service flow where user inputs are transformed into text, translated, and delivered as both written and spoken output, making the application a robust tool for users needing on-the-go linguistic assistance. By modularizing the OCR, translation, and TTS functions, the application ensures scalability, maintainability, and ease of integration with other services, making it an ideal solution for enhancing accessibility and communication in diverse, multilingual environments.
  </div>
  '''
 
@@ -11,7 +11,7 @@ usecase_diagram = """
  <p>To konw more about the project study the below UML diagrams:</p>
  <h3 id="the-use-case-diagram-">The Use Case Diagram:</h3>
  <p><img src="data:image/png;base64,{}" alt="usecase" width="500" height="500" style="display: block; margin-right: 40px;"></p>
- <h4 id="1-actors-">1. <em>*Actors</em></h4>
+ <h4 id="1-actors-">1. <strong>Actors</strong></h4>
  <ul>
  <li><strong>User</strong>: Represents the end-user who interacts with the application by uploading images, initiating OCR, requesting translations, listening to audio, and clearing inputs.</li>
  </ul>
@@ -443,7 +443,7 @@ footer = """
  <p style="margin: 0;">&copy; 2024 </p>
  </div>
  <div style="text-align: center; flex-grow: 1;">
- <p style="margin: 0;">This website is made with ❤ by SARATH CHANDRA</p>
+ <p style="margin: 0;"> This website is made with ❤ by SARATH CHANDRA</p>
  </div>
  <div class="social-links" style="display: flex; gap: 20px; justify-content: flex-end; align-items: center;">
  <a href="https://github.com/21bq1a4210" target="_blank" style="text-align: center;">
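The description above walks through a modular OCR → translation → TTS service flow. A minimal sketch of that modular design is below; the stage bodies are stand-in stubs and the names (`LinguisticLensPipeline`, `run`) are illustrative, not the app's actual API — in the real app the stages would wrap PaddleOCR, deep_translator's GoogleTranslator, and gTTS.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LinguisticLensPipeline:
    # Each stage is a plain callable, so concrete backends (PaddleOCR,
    # GoogleTranslator, gTTS) can be swapped in without touching the flow.
    ocr: Callable[[bytes], str]            # image bytes -> extracted text
    translate: Callable[[str, str], str]   # (text, target_lang) -> translated text
    tts: Callable[[str, str], bytes]       # (text, lang) -> audio bytes

    def run(self, image: bytes, target_lang: str) -> tuple[str, bytes]:
        # Image -> text -> translated text -> spoken audio, returning
        # both the written and the spoken output, as the description states.
        text = self.ocr(image)
        translated = self.translate(text, target_lang)
        audio = self.tts(translated, target_lang)
        return translated, audio

# Stub stages for illustration only; real wiring would look roughly like
# PaddleOCR(...).ocr(img), GoogleTranslator(target=lang).translate(text),
# and gTTS(text=translated, lang=lang).
pipeline = LinguisticLensPipeline(
    ocr=lambda img: "hello world",
    translate=lambda text, lang: f"[{lang}] {text}",
    tts=lambda text, lang: text.encode("utf-8"),
)

translated, audio = pipeline.run(b"<png bytes>", "de")
print(translated)  # [de] hello world
```

Because the stages are injected rather than hard-coded, each one can be tested or replaced independently — the maintainability point the description makes about modularizing the OCR, translation, and TTS functions.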