Ethiopia Institute of Technology - Mekelle
Permanent URI for this community: https://repository.mu.edu.et/handle/123456789/58
Search Results (3 results)
Item: Word Sequence Prediction Model for the Tigrigna Language Using a Deep Learning Approach (Mekelle University, 2026-01-22), Yibralem Hagos Mekonnen
This research explores the development of a word sequence prediction model for the Tigrigna language using deep learning techniques, specifically Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. Tigrigna, primarily spoken in Eritrea and Ethiopia, faces significant challenges in natural language processing (NLP) due to the scarcity of comprehensive computational resources and annotated corpora. This study addresses the urgent need for effective NLP tools tailored to Tigrigna, focusing on the fundamental task of word sequence prediction, which underpins applications such as machine translation and text generation. Despite the limited dataset of 10,000 sentences compiled from diverse sources, the models were evaluated on their ability to predict and generate coherent word sequences. Results indicate that while the LSTM and GRU models demonstrated potential in capturing Tigrigna's unique linguistic characteristics, they suffered from overfitting and underfitting, influenced in particular by the choice of embedding (Word2Vec versus a trainable Keras Embedding layer). The findings highlight the need for improved regularization techniques and for data augmentation to enhance model generalization. This research contributes to the nascent field of Tigrigna NLP by demonstrating the applicability of deep learning models to resource-scarce languages. The outcomes suggest pathways for future advances in Tigrigna language technology, emphasizing the potential for improved predictive-text applications and deeper insights into Tigrigna's grammatical structures.
Ultimately, this work lays a foundation for further development in Tigrigna NLP, advocating increased investment in linguistic resources and innovative modeling techniques to support the digital representation of the Tigrigna language.

Item: A Deep Learning Approach for the Detection and Prediction of Tuberculosis Using Chest X-Ray Imaging (Mekelle University, 2025-12-22), Birey Girmay
Tuberculosis (TB) is a major global health concern, particularly in resource-limited settings where early diagnosis is crucial but constrained by shortages of radiologists and diagnostic centers. This study develops an artificial intelligence-based model for the early diagnosis and prediction of TB from chest radiographs, using a Convolutional Neural Network (CNN) and a hybrid CNN-Long Short-Term Memory (CNN-LSTM) model for binary classification (TB-positive or TB-negative). A dataset of 10,000 chest X-ray images, comprising 4,000 images from Ayder Comprehensive Specialized Hospital, Ethiopia, and 6,000 images from Kaggle, was preprocessed, augmented, and split into 80% for training and 20% for testing. Expert annotations provided a reliable ground truth. The CNN model achieved 86% accuracy, with precision, recall, and F1-score all at 0.86, while the CNN-LSTM achieved 85%; both ran on modest hardware. The CNN performed slightly better than the hybrid model, indicating stronger discriminative capacity. The approach offers an inexpensive, scalable way to improve early TB diagnosis and prediction in high-burden, low-resource environments, reducing diagnostic delay and supporting medical staff in countries like Ethiopia.

Item: Develop a Bi-Directional English-Nuer Machine Translation Using a Deep Learning Approach (Mekelle University, 2024-12-28), Lemlem Gebremedhin
The advancement of deep learning has revolutionized natural language processing, with machine translation playing a pivotal role in bridging linguistic barriers.
This research focuses on developing a bi-directional English-Nuer machine translation system using deep learning techniques. The primary challenge is the lack of linguistic resources for the Nuer language, which hinders its technological representation and global accessibility. To address this, the study constructed a parallel corpus of 46,134 English-Nuer sentence pairs and trained GRU, Bi-GRU, LSTM, LSTM-with-attention, and Transformer models. The findings revealed that the Transformer achieved superior BLEU scores compared to the other architectures: 0.2567 for Nuer-to-English and 0.2431 for English-to-Nuer translation. The results highlight the potential of deep learning-based machine translation for low-resource languages. As future work, the researcher proposes integrating speech-to-text and text-to-speech capabilities to enhance usability.
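The first abstract above concerns next-word prediction with LSTM/GRU models. As an illustration of the data preparation such models typically assume (this sketch is hypothetical, not taken from the thesis), a fixed-length context window can be slid over a tokenized sentence to produce (context, next-word) training pairs:

```python
def make_training_pairs(sentence, context_len=3):
    """Slide a fixed-length window over a token list to yield
    (context, next-word) pairs for next-word prediction."""
    tokens = sentence.split()
    return [
        (tokens[i - context_len:i], tokens[i])
        for i in range(context_len, len(tokens))
    ]

# Each pair maps a context of `context_len` tokens to the word that follows it.
pairs = make_training_pairs("a b c d e")
# → [(['a', 'b', 'c'], 'd'), (['b', 'c', 'd'], 'e')]
```

In a full pipeline, each context would be mapped to integer indices and fed through an embedding layer (Word2Vec-initialized or a trainable Keras Embedding, as the abstract contrasts), with the next word as the classification target.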
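The TB study above reports accuracy, precision, recall, and F1-score for binary classification. For reference, these metrics follow directly from the confusion-matrix counts; a minimal self-contained sketch (label convention assumed here: 1 = TB-positive, 0 = TB-negative):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

The abstract's figures (accuracy 0.86 with precision, recall, and F1 all at 0.86) are consistent with a fairly balanced test split, since precision and recall only coincide with accuracy when errors are spread evenly across both classes.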
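The translation study above reports BLEU scores for each direction. As a rough illustration of what BLEU measures (this is a simplified sentence-level variant with uniform n-gram weights and a brevity penalty, not necessarily the exact corpus-level implementation used in the thesis):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    1..max_n-gram precisions, scaled by a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # any empty n-gram overlap zeroes the geometric mean
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = 1.0 if len(candidate) >= len(reference) \
        else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_avg)
```

A perfect match scores 1.0, so the reported scores of 0.2567 and 0.2431 sit in the range typical of low-resource translation systems, where partial n-gram overlap with the reference is the norm.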
