Text written by: Prof. Dr. Yousef Abu Al-Adous, Jerash University. Text read by: Dr. Ahmad Abu Dalu, Yarmouk University. Editing and directing: Dr. Mohammed Abu Shquier, Hamzah Al-Natour, Ali Mayyas. Filming: Mr. Ahmad Al-Smadi. General supervision: Prof. Dr. Yousef Abu Al-Adous.
A video on the occasion of Al-Isra' wal-Mi'raj, the celebration of the Faculty of Sharia at Jerash University, 2019 (1440 AH).
A video on the occasion of the Prophet's birthday (Mawlid), edited and directed by Dr. Mohammed Abu Shquier, Faculty of Information Technology.
Excellence in education, scientific research, and community service, and rising to the ranks of distinguished universities locally, regionally, and internationally.
Contributing to building and developing the knowledge society by creating a university environment and community partnership that stimulate creativity and freedom of thought and expression, keeping pace with technological developments in education, and thereby supplying society with qualified human resources suited to the needs of the labor market.
The university is committed to upholding the following core values: social and ethical responsibility, belonging, justice and equality, creativity, quality and excellence, transparency and accountability, and disciplined, forward-looking freedom.
Ph.D. in Computer Information Systems / Artificial Intelligence
Ph.D. in Computer Information Systems, University of Banking and Financial Sciences, Jordan, 2011
Master's Degree in Information Systems, Arab Academy for Banking and Financial Sciences, Jordan, 2000
Higher Diploma in Information Systems, Arab Academy for Banking and Financial Sciences, Jordan, 1999
2010-2011 Manager, Computer Department, Amaken Plaza Hotel, Amman, Jordan.
2008-2012 Head of Applied Arts and Information Technology Dept., Al-Andalus College.
2000-2011 Lecturer, The Arab College, Amman, Jordan.
2008 Part-time Lecturer, New Horizons, Amman, Jordan.
2000-2003 Part-time Lecturer, The Arab Academy for Banking and Financial Sciences (AABFS), Amman, Jordan.
2002-2003 Part-time Lecturer, Alisra Private University, Amman, Jordan.
2002-2007 Part-time Lecturer, Princess Alia University College, Amman, Jordan.
2001-2002 Part-time Lecturer, Al-Fashir University, Amman, Jordan.
2001-2004 Part-time Lecturer, The Kadecy College, Amman, Jordan.
1999-2000 MIS Officer, Multi Base Systems Company, Amman, Jordan.
Face recognition from non-identical face photos is a prominent research area in pattern recognition and computer vision. Existing face recognition systems struggle with diverse changes such as lighting conditions, expressions, and facial occlusions. This paper proposes a new Face Recognition (FR) approach that combines Elastic Bunch Graph Matching (EBGM) with a greedy algorithm to identify face landmarks automatically. Rather than using one corresponding face image, or at most two and averaging between them, the proposed approach selects each optimal landmark of the face image independently from different corresponding face images, choosing for each landmark the corresponding face image that achieves the best similarity. The locations of corresponding landmarks can be displaced to achieve maximum similarity with the optimal landmarks. The proposed approach demonstrates improved recognition performance compared to contemporary face recognition methods; it effectively handles changing ratios of face parts and can recognize faces even as occlusion size increases.
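The per-landmark greedy selection described above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the landmark descriptors are stand-in random vectors (EBGM would use Gabor jets), and cosine similarity is an assumed similarity measure.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two descriptor vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def greedy_landmark_match(probe, gallery):
    """For each landmark, independently pick the gallery image whose
    corresponding descriptor is most similar to the probe's descriptor.

    probe:   (L, D) array, one descriptor per landmark.
    gallery: (G, L, D) array, G gallery images with the same L landmarks.
    Returns a list of (best_image_index, best_similarity) per landmark.
    """
    choices = []
    for l in range(probe.shape[0]):
        sims = [cosine_sim(probe[l], g[l]) for g in gallery]
        best = int(np.argmax(sims))
        choices.append((best, sims[best]))
    return choices

rng = np.random.default_rng(0)
gallery = rng.normal(size=(3, 5, 8))                  # 3 images, 5 landmarks, 8-D descriptors
probe = gallery[1] + 0.05 * rng.normal(size=(5, 8))   # probe is a perturbation of image 1
choices = greedy_landmark_match(probe, gallery)
print(choices)
```

Because each landmark is matched independently, different landmarks could in principle be taken from different gallery images, which is what lets the method cope with occlusions that corrupt only some landmarks.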
Alzheimer's disease (AD), a progressive neurological condition, is the sixth most common cause of mortality worldwide. Large-scale healthcare data has been a topic of interest for the past decade, as digitization has led to a rise in the data captured in the medical industry. Lately, various classification problems have been addressed with deep learning and machine learning techniques, which have demonstrated notable gains in effectiveness. On the other hand, their limited generalization abilities and inadequate model variance pose a problem, as reported in the literature. By using a mixed model that combines machine learning and deep learning methods, this project aims to increase the accuracy of Alzheimer's image classification. Particularly in computer vision, deep learning has outperformed traditional machine learning in its capacity to identify complex structures in complex, high-dimensional data. To improve the ability to classify images of Alzheimer's disease, this work presents a hybrid model that uses a Convolutional Neural Network (CNN), a deep learning technique, together with the XGBoost machine learning algorithm. Based on structural MRI images, our findings imply that AD can be accurately classified using CNN-XGBoost algorithms. This may lead to tools for AD diagnosis and help enhance early identification and management of the illness. We assessed our model's performance using a hold-out test set of structural MRI images. At 98.7%, our model's accuracy is much greater than that of other techniques used to classify AD from MRI scans.
As the Internet of Things (IoT) continues to expand, incorporating a vast array of devices into a digital ecosystem also increases the risk of cyber threats, necessitating robust defense mechanisms. This paper presents an innovative hybrid deep learning architecture that excels at detecting IoT threats in real-world settings. Our proposed model combines Convolutional Neural Networks (CNN), Bidirectional Long Short-Term Memory (BLSTM), Gated Recurrent Units (GRU), and attention mechanisms into a cohesive framework. This integrated structure aims to enhance the detection and classification of complex cyber threats while accommodating the operational constraints of diverse IoT systems. We evaluated our model using the RT-IoT2022 dataset, which includes various devices, standard operations, and simulated attacks. Our research's significance lies in its comprehensive evaluation metrics, including Cohen's Kappa and the Matthews Correlation Coefficient (MCC), which underscore the model's reliability and predictive quality. Our model surpassed traditional machine learning algorithms and the state of the art, achieving over 99.6% precision, recall, F1-score, and accuracy, together with strong False Positive Rate (FPR) and detection-time results, and effectively identifying specific threats such as Message Queuing Telemetry Transport (MQTT) Publish, Denial of Service SYN attacks generated with the Hping packet-crafting tool (DOS SYN Hping), and Network Mapper Operating System Detection (NMAP OS DETECTION).
Through our experimental analysis, we have demonstrated a remarkable enhancement over existing detection systems, which significantly strengthens IoT security standards. Our model addresses the need for advanced, dependable, and adaptable security solutions, illustrating the power of deep learning in strengthening IoT ecosystems amid a constantly evolving cyber-threat landscape. This achievement marks a significant stride towards protecting the integrity of IoT infrastructure, ensuring operational resilience, and preserving privacy in this groundbreaking technology.
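One building block of the hybrid architecture, the attention mechanism, can be shown in isolation. This is a generic scaled dot-product attention sketch in NumPy applied to hypothetical recurrent hidden states; the paper's exact attention variant and dimensions are not specified here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (t_q, t_k) pairwise similarity scores
    weights = softmax(scores, axis=-1)  # each query's weights sum to 1
    return weights @ V, weights

rng = np.random.default_rng(1)
T, d = 6, 8                         # 6 time steps of 8-D features
H = rng.normal(size=(T, d))         # e.g. BLSTM/GRU hidden states over a flow window
context, w = scaled_dot_product_attention(H, H, H)
print(context.shape, w.shape)
```

In the detection setting, such a layer lets the classifier weight the most informative time steps of a traffic window instead of treating all packets equally.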
Phishing, an Internet fraud in which individuals are deceived into revealing critical personal and account information, poses a significant risk to both consumers and web-based institutions. Data indicate a persistent rise in phishing attacks, and these fraudulent schemes are progressively becoming more intricate, rendering them more challenging to identify. Hence, it is imperative to apply sophisticated algorithms to this problem. Machine learning (ML) is a highly effective approach for identifying and uncovering such harmful behaviors, as ML techniques can identify characteristics common to most phishing attacks. In this paper, we propose an ensemble approach and compare it with six machine learning techniques to determine whether a website is normal or phishing, based on two phishing datasets. We apply a normalization technique to transform all features into the same range. On the first dataset, the results for all algorithms, reported as accuracy, precision, recall, and F1-score respectively, are: Decision Tree (DT) (0.964, 0.961, 0.976, 0.968), Random Forest (RF) (0.970, 0.964, 0.984, 0.974), Gradient Boosting (GB) (0.960, 0.959, 0.971, 0.965), XGBoost (XGB) (0.973, 0.976, 0.976, 0.976), AdaBoost (0.934, 0.934, 0.950, 0.942), Multi-Layer Perceptron (MLP) (0.970, 0.971, 0.976, 0.974), and Voting (0.978, 0.975, 0.987, 0.981); the Voting classifier therefore gave the best results. On the second dataset, all the algorithms gave the same results on the four evaluation metrics, which indicates that each of them can effectively accomplish the prediction task. This approach also outperformed previous work in detecting phishing websites, with higher accuracy, a lower false negative rate, a shorter prediction time, and a lower false positive rate.
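The normalize-then-vote pipeline above maps directly onto scikit-learn. This sketch uses a synthetic stand-in for a phishing dataset and a subset of the base learners (DT, RF, MLP) under soft voting; the actual datasets, features, and full set of six learners are as described in the abstract, not here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for a phishing dataset: 30 URL/page features, binary label.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=7)
X = MinMaxScaler().fit_transform(X)   # normalization: all features in the same range
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

vote = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=7)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=7)),
                ("mlp", MLPClassifier(max_iter=500, random_state=7))],
    voting="soft")                    # soft voting averages predicted probabilities
vote.fit(X_tr, y_tr)
pred = vote.predict(X_te)
metrics = (accuracy_score(y_te, pred), precision_score(y_te, pred),
           recall_score(y_te, pred), f1_score(y_te, pred))
print("acc/prec/rec/f1:", [round(m, 3) for m in metrics])
```

Soft voting is worth noting as a design choice: averaging probabilities lets a confident minority learner outvote two uncertain ones, which hard majority voting cannot do.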
The global impact of the COVID-19 pandemic has reached virtually every part of the world, significantly affecting people's health and daily routines. It has disrupted physical activities and makes early identification of infected individuals essential for proper care. Identifying the disease from radiography and radiology images is one of the quickest approaches: previous research indicates that COVID-19 patients often exhibit distinct abnormalities in chest radiographs, and radiologists can detect the presence of COVID-19 by analyzing these images. This study employs a deep learning model that uses CT scan images to identify COVID-19 in patients. First, a dataset of 746 CT scan images was compiled from openly accessible sources; augmentation was then applied to this dataset, producing a total of 2,984 images. Transfer learning is employed to train Convolutional Neural Networks (CNN) using VGG19, enabling the recognition of COVID-19 in the examined CT scan images. Additionally, the study integrates an IoT-based application and validates the framework. The original images were divided into 521 for training, 112 for validation, and the remaining 113 for testing; after augmentation, the 2,984 images were split into 2,088 for training, 449 for testing, and the remaining 447 for validation. The model's efficiency is evaluated using precision, recall, F-score, and the confusion matrix. The implementation and validation of the study were successful, and the results demonstrate significant improvement over previous efforts, as detailed in the results section. While the model's performance is highly promising, additional analysis on a larger dataset of COVID-19 images is needed to obtain more reliable accuracy estimates.
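The counts in the abstract (746 source images becoming 2,984) correspond to a four-fold augmentation. As an illustration only, this sketch produces four variants per image with simple flips and a rotation; the study's actual augmentation operations are not specified here, and random arrays stand in for the CT scans.

```python
import numpy as np

def augment(image):
    # Four variants per source image: original, horizontal flip,
    # vertical flip, and a 90-degree rotation (an assumed minimal set).
    return [image, np.fliplr(image), np.flipud(image), np.rot90(image)]

rng = np.random.default_rng(3)
dataset = [rng.random((64, 64)) for _ in range(746)]   # stand-ins for the 746 CT scans
augmented = [variant for img in dataset for variant in augment(img)]
print(len(augmented))   # 746 * 4 = 2984, matching the abstract's total
```

The same multiplier explains why the augmented splits (2,088 / 449 / 447) are roughly four times the original splits (521 / 112 / 113).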
Conventional pharmacological screening protocols employ natural substances that are thoroughly diluted, without separating their active components. Over the last two decades, strongly active isomeric compounds have been identified and isolated. The notion of multi-target treatment was novel in the mid-2000s, but by 2021 it had become one of the most significant directions in drug development. Instead of relying on organically generated mixtures, researchers are pursuing target-based drug development built on precisely specified fragments for effective organic anticancer medicines. This study emphasizes the decomposition of structures using computer aids or fragments, as well as a process for applying natural anticancer drugs. The use of computer-assisted drug development (CADD) is becoming more frequent, and the major focus of this study is the development of computer-aided pharmaceuticals and anticancer agents. The discovery of effective all-natural cancer treatments will thereby be accelerated. Multi-target drug development methodologies have enabled cancer medicines with fewer negative side effects. Cutting-edge analytical and bioinformatics approaches, particularly machine learning, will be employed to uncover natural anticancer therapies.
The idea of integrating traditional terrestrial networks with emerging space, aerial, and underwater networks suggests a move towards a truly comprehensive and ubiquitous network infrastructure. This integration aims to provide seamless connectivity across diverse environments, addressing the limitations of current networks. The ultimate goal of this integration is to achieve ubiquitous coverage, ensuring that connectivity is available consistently across different geographical locations and technology platforms. Incorporating pervasive AI is highlighted as a crucial aspect. This likely involves embedding AI algorithms and capabilities throughout the network infrastructure to enhance efficiency, responsiveness, and adaptability. AI can play a key role in network management, resource optimization, and enabling advanced features for diverse applications. The mention of an enhanced network protocol stack suggests that 6G will likely introduce new or improved communication protocols to handle the complexities of integrated networks and meet the requirements of future applications. The statement emphasizes that the capabilities and requirements of future applications are driving the development of 6G. This could include applications in areas such as augmented reality, virtual reality, the Internet of Things (IoT), and other emerging technologies. The commendable focus on sustainable and socially seamless networks reflects an awareness of the importance of minimizing environmental impact and ensuring that technology benefits society as a whole. Exploring technologies such as terahertz and visible light communication can potentially contribute to achieving these goals by leveraging new communication paradigms. The integration of blockchain technology can enhance security, privacy, and trust in the network, which are essential for the success of future wireless systems. 
The symbiotic radio involves intelligent cooperation among different wireless systems, which optimizes resource allocation and improves overall network efficiency. The paper draws upon a comprehensive and up-to-date account of the architectural adjustments and potential technologies in the field of green 6G, along with a novel method for assessing efficacy that will foster innovation and propel wireless networking forward toward a more sustainable and efficient future.
In an era characterized by the relentless evolution of Internet of Things (IoT) technologies, marked by the pervasive adoption of smart devices and the ever-expanding realm of Internet connectivity, the IoT has seamlessly integrated itself into our daily lives. This integration has ushered in a new era for manufacturing companies, enabling them to monitor their machinery in real time, supervise product quality, and closely track environmental variables within their facilities. Beyond the immediate benefits of risk mitigation and loss prevention, this multifaceted approach gives decision-makers a comprehensive perspective for making informed decisions. People are now more dependent than ever on IoT devices and services. However, despite the IoT's immense potential, anomalies within IoT networks are a critical concern: they can pose significant security and safety risks if they go undetected, so identifying them and alerting users in a timely manner is crucial for preventing damage and loss. In response, our research harnesses Machine Learning and Deep Learning techniques to detect anomalies in IoT networks. We undertake exhaustive experiments with the IoT-23 dataset to validate our methodology empirically, comparing numerous models and assessing their performance and time efficiency to determine the optimal algorithm for achieving high detection accuracy under strict time constraints. This research represents an important step towards enhancing the security of Industrial IoT environments, thereby protecting vital infrastructure and ensuring the integrity of industrial operations in our increasingly interconnected world.
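The accuracy-versus-time comparison described above can be sketched as a small benchmarking loop. Synthetic data stands in for IoT-23 flow records, and the three scikit-learn models are illustrative choices, not the paper's model set.

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for IoT-23 network-flow records (20 numeric features,
# binary benign/malicious label).
X, y = make_classification(n_samples=4000, n_features=20, n_informative=8,
                           random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=5)

results = {}
for name, model in [("LogReg", LogisticRegression(max_iter=1000)),
                    ("NaiveBayes", GaussianNB()),
                    ("RandomForest", RandomForestClassifier(n_estimators=100,
                                                            random_state=5))]:
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    results[name] = (acc, time.perf_counter() - t0)  # accuracy, train+predict time

for name, (acc, secs) in results.items():
    print(f"{name:12s} acc={acc:.3f} time={secs:.2f}s")
```

Recording wall-clock time alongside accuracy is the point: under the strict time constraints of anomaly alerting, a slightly less accurate but much faster model can be the right operational choice.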
The detection of cephalometric landmarks in radiographic imagery is pivotal to an extensive array of medical applications, notably within orthodontics and maxillofacial surgery. Manual annotation of these landmarks, however, is not only labour-intensive but also subject to potential inaccuracies. To address these challenges, we propose a robust, fully automated method for detecting soft-tissue landmarks. This innovative method effectively integrates two disparate types of descriptors: Haar-like features, which are primarily employed to capture local edges and lines, and spatial features, designed to encapsulate the spatial information of landmarks. The integration of these descriptors facilitates the construction of a potent classifier using the AdaBoost technique. To validate the efficacy of the proposed method, a novel dataset for the task of soft-tissue landmark detection is introduced, accompanied by two distinct evaluation protocols to determine the detection rate. The first protocol quantifies the detection rate using the Mean Radial Error (MRE), while the second measures the detection rate within a predefined confidence region R. The conducted experiments demonstrated the proposed method's superiority over existing state-of-the-art techniques, yielding average detection rates of 76.7% within a 2 mm radial distance and 94% within the confidence region R, respectively. This study's findings underscore the potential of this innovative approach in enhancing the accuracy and efficiency of cephalometric landmark detection.
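The Haar-plus-AdaBoost combination can be illustrated end to end on a toy problem. The integral-image trick and two-rectangle features below are standard Haar-like machinery; the synthetic "bright left half" detection task, the 16x16 patch size, and the feature set are illustrative stand-ins for the paper's soft-tissue landmark patches.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def integral_image(img):
    # Summed-area table: any rectangle sum becomes O(1).
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] (exclusive ends) from an integral image.
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def haar_features(img):
    # Two-rectangle features: left-vs-right and top-vs-bottom halves,
    # responding to vertical and horizontal edges respectively.
    ii = integral_image(img)
    h, w = img.shape
    left = rect_sum(ii, 0, 0, h, w // 2); right = rect_sum(ii, 0, w // 2, h, w)
    top = rect_sum(ii, 0, 0, h // 2, w); bottom = rect_sum(ii, h // 2, 0, h, w)
    return [left - right, top - bottom]

# Toy task: is there a bright structure in the left half of the patch?
rng = np.random.default_rng(2)
X, y = [], []
for _ in range(400):
    img = rng.random((16, 16))
    label = int(rng.integers(0, 2))
    if label:
        img[:, :8] += 1.0          # "structure present": brighter left half
    X.append(haar_features(img)); y.append(label)
X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=2)
clf = AdaBoostClassifier(n_estimators=30, random_state=2).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"accuracy: {acc:.2f}")
```

The real method scans many such rectangle configurations per patch and lets AdaBoost pick the discriminative ones; the sketch keeps only two features so the mechanism stays visible.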
The Internet of Things (IoT) has recently emerged as a potential global communication medium that efficiently facilitates human-to-human, human-to-machine, and machine-to-machine communications. Most importantly, unlike the traditional Internet, it supports machine-to-machine communication without human intervention. However, the billions of devices connected to the IoT environment are mostly wireless, small, hand-held, resource-constrained devices with limited storage capacities, and such devices are highly prone to external attacks. Cybercriminals today often attempt to launch attacks on these devices, which makes securing communications across the IoT environment a major challenge. In this paper, the issue of cyber-attacks in the IoT environment is addressed, and an end-to-end encryption scheme is proposed to protect IoT devices from cyber-attacks.
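The abstract does not specify the scheme's primitives, so as a generic illustration of the end-to-end pattern (only the two endpoints hold the key; intermediaries see ciphertext), here is a toy encrypt-then-MAC construction built from the Python standard library. The SHA-256 counter-mode keystream is for demonstration only; a real deployment would use a vetted AEAD cipher such as AES-GCM.

```python
import hashlib, hmac, secrets

def keystream(key, nonce, length):
    # Toy counter-mode keystream derived from SHA-256 (illustration only).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)                  # fresh per message
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # encrypt-then-MAC
    return nonce, ct, tag

def decrypt(key, nonce, ct, tag):
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):       # constant-time comparison
        raise ValueError("authentication failed: message tampered or wrong key")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)          # shared by the two endpoints only
msg = b"sensor-42:temp=21.5"
nonce, ct, tag = encrypt(key, msg)
assert decrypt(key, nonce, ct, tag) == msg
print("round trip ok; ciphertext bytes:", len(ct))
```

The MAC check runs before decryption, so a gateway or attacker who flips even one ciphertext bit causes rejection rather than silently corrupted sensor data.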
Courses Taught

Course | Day | Time | Building & Room
Data Structures and File Organization | Sun/Mon | 8:00 | Engineering 604
Object-Oriented Programming | | 9:30 |
Advanced Programming | | 11:00 | Engineering 610
 | | 12:30 | Engineering 603
Advanced Object-Oriented Programming | | 14:00 | Engineering 301
Office Hours

Day | From | To
Sunday | 10:30 | :11
Monday | 8:30 |
All Rights Reserved © 2025 - Developed by: Prof. Mohammed M. Abu Shquier. Editor: Ali Zreqat