Kamel Benachenhou1, Abdelmalik Taleb-Ahmed2 and Mhamed Hamadouche3, 1Aeronautics and Space Studies Institute, University of Blida 1, Algeria, 2IEMN DOAE UMR CNRS 8520, Polytechnic University of Hauts-de-France, Valenciennes, France
This paper deals with the implementation of an adaptive acquisition stage in a global navigation satellite system (GNSS) receiver with a pilot and a data channel for the GNSS L5 signal. Adaptive acquisition decides the presence or absence of a GNSS signal by comparing a cell under test with an adaptive threshold, and provides code delay and Doppler frequency estimates. First, we introduce adaptive acquisition with a cell-averaging constant false alarm rate (CA-CFAR) detector for the pilot channel; we then propose a data-pilot fusion. Finally, the proposed schemes are implemented on an FPGA using System Generator and Xilinx tools.
GPS L5, Acquisition, CFAR, FPGA, implementation
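As a rough illustration of the cell-averaging CFAR test described in the abstract above, the sketch below thresholds each cell under test against the scaled mean of its reference cells. It is a generic 1-D CA-CFAR toy (guard/reference window sizes and the false-alarm rate are invented), not the authors' FPGA design.

```python
import numpy as np

def ca_cfar_detect(power, guard=2, ref=8, pfa=1e-3):
    """Cell-averaging CFAR: flag cells whose power exceeds a threshold
    scaled from the mean of the surrounding reference cells."""
    n = 2 * ref  # total reference cells (both sides of the cell under test)
    alpha = n * (pfa ** (-1.0 / n) - 1.0)  # classical CA-CFAR scale factor
    detections = []
    for i in range(guard + ref, len(power) - guard - ref):
        lead = power[i - guard - ref : i - guard]   # reference cells before
        lag = power[i + guard + 1 : i + guard + ref + 1]  # and after
        noise = (lead.sum() + lag.sum()) / n
        if power[i] > alpha * noise:
            detections.append(i)
    return detections
```

Because the threshold adapts to the local noise estimate, the false-alarm rate stays roughly constant as the noise floor varies, which is the property the adaptive acquisition stage relies on.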
Omid Jafari, Khandker Mushfiqul Islam and Parth Nagarkar, Computer Science Department, New Mexico State University, Las Cruces, USA
Nearest-neighbor query processing is a fundamental operation for many image retrieval applications. Images are often stored and represented by high-dimensional vectors generated by feature-extraction algorithms. Since tree-based index structures are ineffective for high-dimensional data due to the well-known “Curse of Dimensionality”, approximate nearest neighbor techniques are used for faster query processing. Locality Sensitive Hashing (LSH) is a popular and efficient approximate nearest neighbor technique known for its sublinear query processing complexity and theoretical guarantees. Many diverse application domains now require the capacity to store and process data in real time, but existing LSH techniques are not suitable for handling real-time data and queries. In this paper, we discuss the challenges and drawbacks of existing LSH techniques for processing real-time high-dimensional image data. Additionally, through extensive analysis, we propose improvements to existing LSH techniques for efficient processing of high-dimensional image data.
Image Retrieval, Similarity Search Query Processing, Locality Sensitive Hashing
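For readers unfamiliar with LSH, the following sketch shows the classic random-hyperplane family for cosine similarity: points on the same side of a set of random hyperplanes receive the same bit signature and thus land in the same hash bucket. This is a generic textbook construction, not the specific techniques analysed in the paper.

```python
import numpy as np

class HyperplaneLSH:
    """Random-hyperplane LSH for cosine similarity (illustrative only)."""

    def __init__(self, dim, n_bits, seed=0):
        rng = np.random.default_rng(seed)
        # Each row is the normal vector of one random hyperplane.
        self.planes = rng.standard_normal((n_bits, dim))

    def hash(self, v):
        # Each bit records which side of a hyperplane the vector falls on;
        # nearby vectors (small angle) agree on most bits, so they tend to
        # share a bucket when the signature is used as a hash key.
        return tuple((self.planes @ v > 0).astype(int))
```

A query then only compares against the candidates stored under its own signature, which is where the sublinear query cost comes from.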
Christiana Panayiotou, Cyprus University of Technology, Cyprus
The purpose of the current paper is to present an ontological analysis for identifying a particular type of prepositional natural language phrase, namely figures of speech, via the identification of inconsistencies in ontological concepts. Prepositional noun phrases are used widely in a multiplicity of domains to describe real-world events and activities. However, one aspect that makes a prepositional noun phrase poetical is that it suggests a semantic relationship between concepts that does not exist in the real world. The current paper discusses how a set of rules based on WordNet classes, together with an ontology representing human behavior and properties, can be used to identify figures of speech. It also addresses the problem of inconsistency resulting from asserting figures of speech at various levels, identifying the problems involved in their representation. Finally, it discusses how a contextualized approach might help to resolve this problem.
ontologies, NLP, linguistic creativity
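A toy version of the idea above (rule-based detection of figurative prepositional phrases through ontological inconsistency) can be sketched as follows; the classes, rules and vocabulary are invented for illustration and are not the paper's WordNet-based rule set.

```python
# Flag a prepositional noun phrase "X of Y" as figurative when the ontology
# says the head noun's literal selectional class is incompatible with the
# modifier's class. All classes below are made up for the example.
ONTOLOGY = {
    "tears": "physical_entity",
    "grief": "abstraction",
    "water": "physical_entity",
}
# A literal "river of Y" requires a physical substance as its modifier.
LITERAL_CLASS = {"river": "physical_entity", "ocean": "physical_entity"}

def is_figurative(head, modifier):
    required = LITERAL_CLASS.get(head)
    actual = ONTOLOGY.get(modifier)
    if required is None or actual is None:
        return None  # unknown vocabulary: cannot decide
    # An ontological mismatch signals a relationship that cannot hold in
    # the real world, i.e. a candidate figure of speech.
    return required != actual
```

Here "river of grief" is flagged as figurative while "river of tears" passes as literal, mirroring the inconsistency-based test the abstract describes.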
Dr. Mohammad Alodat, Department of Information Systems & Technology, Sur University College, Sur, Oman
The purpose of this paper is to find a smart and effective tool for evaluating students and overcoming human shortcomings, such as an instructor's lack of expertise, psychological bias, and over-trust in students. We provide an Instructor Program for Student Assessment (PISA) because it has a positive impact on academic performance and self-regulation and improves final exam scores. To test which model gives the closest prediction of a student's final exam score in a course after the first exam, we use four algorithms: multiple linear regression (MLR), k-means clustering, a modular feed-forward neural network, and a radial basis function (RBF) network. After comparing the four models, the results show that RBF has the highest average classification rate, followed by the neural network and k-means clustering, while multiple linear regression performs worst.
Euclidean Dissimilarity, Radial Basis Function, Neural Network, K Mean Cluster, Deep Learning
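As an illustration of the RBF model family compared above, the sketch below fits a radial-basis-function regressor with fixed centers and least-squares output weights; it is a generic construction, not the authors' PISA implementation.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    # Gaussian basis on the squared Euclidean distance to each center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_rbf(X, y, centers, gamma=1.0):
    # With centers fixed (e.g. chosen by k-means), the output layer is
    # linear, so the weights come from an ordinary least-squares solve.
    Phi = rbf_features(X, centers, gamma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, w, gamma=1.0):
    return rbf_features(X, centers, gamma) @ w
```

The same local-basis structure is what lets an RBF network fit the nonlinear relationship between first-exam and final-exam scores that a single linear regression misses.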
Rory Lewis, Department of Computer Science, University of Colorado Colorado Springs, Colorado, 80919, USA
This paper addresses what the role of artificial intelligence will be in space and, specifically, what China's research in artificial intelligence for space warfare has been, is, and strives to become. The author first presents testimony from scholars and space research scientists from many countries who all categorically state that all future space warfare will rely heavily on artificial intelligence. This includes China's strengths in space artificial intelligence, and its weaknesses. The second portion of this research drills down into the specific mathematical and theoretical research areas of artificial intelligence for space warfare in various countries, including China. The author concludes with research strategies to counter China's dominance of space warfare.
Artificial Intelligence, Machine Learning, Deep Neural Networks, GPUs, Space War, Chinese Artificial Intelligence
Mbarek Zaoui, Driss Gretete and Khalid El Aroui, Ibn Tofail University, Morocco
In this paper, we attempt to characterize some implicative filters in some fuzzy algebras. Towards this end, the notion of De Morgan triples and the properties of implicative filters in a De Morgan implicative structure are first stated. Finally, some interesting results, with details for the most classical cases of implicative filters, are given.
Fuzzy Logic, Implicative Filters, Logic Algebra, Triangular Norms
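For readers unfamiliar with triangular norms, the following sketch lists three classical t-norms on [0, 1] and the residuated implication induced by the Gödel t-norm; these are standard textbook definitions, not the De Morgan structures studied in the paper.

```python
# Three classical t-norms on [0, 1]:
def t_min(a, b):
    return min(a, b)            # Gödel (minimum) t-norm

def t_prod(a, b):
    return a * b                # product t-norm

def t_luk(a, b):
    return max(0.0, a + b - 1)  # Łukasiewicz t-norm

# Residuum induced by a left-continuous t-norm T:
#   x -> y  =  sup { z : T(x, z) <= y }.
# For the Gödel t-norm this evaluates to:
def impl_godel(x, y):
    return 1.0 if x <= y else y
```

The residuated implication is the ingredient from which implicative filters are defined: a filter is closed under the algebra's implication in the appropriate sense.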
Pavel Smrz, Brno University of Technology, Faculty of Information Technology Bozetechova 2, 61266 Brno, Czech Republic
This paper discusses a new approach to creating semantic resources consisting of complex associations among words that can be used for evaluating the content of word embeddings as well as in various language-learning scenarios. We briefly introduce Codenames – an existing party board game – and the way of recording word associations suggested by human players. Advanced word embedding models are then compared on the collected data, and it is demonstrated that they often fail in cases of complex word associations that go beyond simple contextual interchangeability. We conclude with an initial evaluation of the computer player in the electronic version of the Codenames game and a discussion of further extensions of the system towards association explanation in the language-learning context.
Natural Language Processing, Word Embedding, Distributional Semantics, Implicit Crowdsourcing, Games with Purpose, Semantic Representation
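A minimal example of the kind of embedding-based association the paper evaluates: score candidate clue words by cosine similarity to the centroid of the target words. The vectors below are hand-made stand-ins, not a real embedding model, and real Codenames associations are exactly the cases where this naive scoring breaks down.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def best_clue(targets, candidates, emb):
    """Pick the candidate closest (by cosine) to the targets' centroid."""
    centroid = np.mean([emb[w] for w in targets], axis=0)
    return max(candidates, key=lambda w: cosine(emb[w], centroid))
```
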
Jasur Atadjanov, Tashkent University of Information Technology named after Al Khorazmiy, Uzbekistan
The search for articles that are similar in content is important not only for detecting plagiarism, but also for finding similar research, partners, scientific advisers, etc. In such searches, language barriers between texts on similar topics also create a problem. Many effective information systems for finding similar texts and plagiarism have been developed, but most of them search a monolingual database. The available multilingual anti-plagiarism systems have a number of disadvantages that reduce their effectiveness and practicality. In this article, the author describes the morphological and lexical analysis of text, and the stages and algorithms of determining its multilingual identity.
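One crude, language-tolerant similarity measure in the spirit of the text-comparison task above is character n-gram Jaccard overlap; this toy is illustrative only and is not the author's morphological and lexical algorithm.

```python
def ngrams(text, n=3):
    """Character n-grams of a text with case and whitespace removed."""
    t = "".join(text.lower().split())
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard overlap of the two texts' character n-gram sets."""
    A, B = ngrams(a, n), ngrams(b, n)
    return len(A & B) / len(A | B) if A | B else 0.0
```

Character n-grams sidestep word segmentation and partially absorb inflectional morphology, which is why such measures are a common baseline before deeper morphological analysis.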
Rezvan Azimi Khojasteh1, Reza Rafeh2, Naji Alobaidi3, 1Department of Computer Engineering, Malayer Branch, Islamic Azad University, Hamedan, Iran, 2Centre for Information Technology, Waikato Institute of Technology, Hamilton, New Zealand and 3Department of Computer Engineering, Unitec Institute of Technology, Auckland, New Zealand
Emotion recognition has been a research topic in the field of Human-Computer Interaction (HCI) in recent years. Computers have become an inseparable part of human life, and users need human-like interaction to communicate better with computers. Many researchers have become interested in emotion recognition and classification using different sources, and a hybrid approach of audio and text has recently been introduced. All such approaches aim to raise the accuracy and appropriateness of emotion classification. In this study, a hybrid approach of audio and video is applied for emotion recognition. The innovation of this approach is selecting the characteristics of audio and video and their features as a unique specification for classification. The SVM method is used for classifying the data in the SAVEE database. The experimental results show a maximum classification accuracy of 91.63% for audio data alone, while the hybrid approach achieves 99.26%.
Emotion Classification, Emotions Analysis, Emotion Detection, SVM, Speech Emotion Recognition
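As a stand-in for the SVM classifier used in the study, the sketch below trains a linear SVM by sub-gradient descent on the regularized hinge loss; it is a minimal toy on synthetic 2-D data, not the SAVEE experiment (which would use rich audio/video feature vectors and, typically, a kernel SVM).

```python
import numpy as np

def train_svm(X, y, lr=0.1, lam=0.01, epochs=200):
    """Linear SVM via sub-gradient descent on the hinge loss; y in {-1,+1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:       # inside the margin: push out
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # outside: only regularize
                w -= lr * lam * w
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)
```
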
Sajib Sen1, Kishor Datta Gupta1, Subash Poudyal1 and Md Manjurul Ahsan2, 1Department of Computer Engineering, University of Memphis, 2Department of Industrial Engineering, Lamar University, MediDeniz Software, Old Street, New York, USA
Distribution network reconfiguration techniques are widely used to optimize power distribution systems. Since renewable energy generation is highly stochastic in nature, network reconfiguration alone does not provide the optimal solution. To address this problem, a three-objective genetic algorithm approach is taken in this project to find an optimal energy schedule throughout the day while simultaneously applying the concept of network reconfiguration. In this paper, we apply a genetic algorithm to optimize power dispatch by reconfiguring the network and scheduling the power sources. Our proposed method shows that it is possible to achieve 1 MW less line loss compared to the base condition.
Microgrid, Genetic algorithm, Power distribution, Network reconfiguration.
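A minimal genetic-algorithm loop in the spirit of the approach above (selection, one-point crossover, bit-flip mutation over a binary configuration such as network switch states) might look like this; the encoding and cost function are invented for illustration and are not the paper's three-objective model.

```python
import random

def ga_minimize(cost, n_bits, pop_size=30, gens=100, p_mut=0.05, seed=1):
    """Evolve a bit-string minimizing `cost` (elitist GA, illustrative)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                 # best (lowest cost) first
        elite = pop[: pop_size // 2]       # survivors carried over unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)    # parent selection from the elite
            cut = rng.randrange(1, n_bits) # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation
            children.append([1 - g if rng.random() < p_mut else g
                             for g in child])
        pop = elite + children
    return min(pop, key=cost)
```

In a reconfiguration setting the cost would combine line losses, constraint penalties, and scheduling objectives evaluated by a power-flow model rather than the toy function used here.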
Xiaoli Sun and Farong Zhong, Department of Computer Science, Zhejiang Normal University, Jinhua, China
In order to adapt to the needs of practical pursuit-evasion problems and the diversity of areas in which they arise, we investigate fast searching on cage graphs. First, we study the properties of cage graphs to obtain lower bounds on the fast search number, which is the minimum number of searchers needed to capture the intruder on a cage graph. We then apply these lower bounds to determine the fast search number of cage graphs, and we also provide a fast searching algorithm for cage graphs.
Graph searching, Fast searching, Cage graph.
Neenu Ignatious and Shahid Ali, Department of Information Technology, AGI Institute, Auckland, New Zealand
This research study focuses on identifying a regression test prioritization technique and suggesting a tool for automating the testing activities for the Trade Me website in New Zealand. Recognizing the importance of regression testing for a frequently changing application, this project proposes an approach that can be reused in similar projects in the future. Regression testing is the most costly and time-consuming part of testing software, and the suggested method can be used to identify a cost- and time-efficient technique.
Regression Test Prioritization, Ant Colony Optimization, Selenium WebDriver.
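As a simple baseline for the prioritization problem above (not the ant-colony-optimization technique the study examines), the additional-greedy heuristic repeatedly picks the test that covers the most yet-uncovered requirements:

```python
def prioritize(coverage):
    """Order tests by additional coverage; `coverage` maps test -> set of
    requirements it covers. A common baseline for regression prioritization."""
    remaining = dict(coverage)
    uncovered = set().union(*coverage.values())
    order = []
    while remaining:
        # Pick the test adding the most not-yet-covered requirements.
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        order.append(best)
        uncovered -= remaining.pop(best)
    return order
```

ACO-based prioritization pursues the same goal (surface faults as early as possible) but explores orderings with pheromone-guided search instead of this one-shot greedy choice.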
Zainab Alkindi, Mohamed Sarrab and Nasser Alzidi
Mobile applications can collect a large amount of private user data, including bank details, contact numbers, photos, saved locations, etc. This poses privacy concerns for many users of mobile applications. In Android 6.0 and above, users can control app permissions: the system allows them to grant and block dangerous permissions at any time. However, there are additional permissions used by apps (normal permissions) that cannot be controlled by users, which may lead to many privacy violations. In this paper, we present a new approach that gives users the ability to control applications' access to Android system resources and private data based on user-defined policies. This approach allows users to reduce the level of privacy violation by providing options that are not available in the Android permission system during the installation and run time of Android apps. The proposed approach enables users to control app behavior, including network connections, the permissions list, and app-to-app communication. It consists of four main components that check app behavior during installation and at run time, provide users with resource and data filtration, and allow users to take appropriate action to control data leakage by an application.
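The deny-by-default, user-defined policy check at the heart of such an approach can be caricatured as follows; the policy format, app identifier and resource names are invented for illustration and do not reflect the paper's actual components.

```python
# Each app's access to a resource is allowed only if the user's policy
# explicitly grants it; anything unlisted is denied by default.
POLICY = {
    "com.example.app": {"CAMERA": "allow", "CONTACTS": "deny"},
}

def check_request(app, resource):
    """Return True only when the user's policy explicitly allows access."""
    return POLICY.get(app, {}).get(resource, "deny") == "allow"
```
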
Omotosho O. I., Akinwale Y. O. and Idris A. O., Department of Computer Science and Engineering, Ladoke Akintola University of Technology, Ogbomoso, Oyo State 210214, Nigeria
Vitamins play a major role in safeguarding the wellbeing of every individual. The high incidence of morbidity and mortality characterized by vitamin deficiency has been traced, among other causes, to a great level of ignorance. Information requires a medium, and a major source in this age of technological advancement is the web. Currently, the web is largely syntactic, and keyword-based search results are characterized by ambiguous content readable only by humans. This challenge is evident in searches for vitamin-related information, which confront numerous human-readable web pages and demand much time spent wading through diffuse web content for the most relevant information. As a result, informed vitamin consumption has been restricted to a few, while deficiencies lead to one ailment after another. This work takes advantage of the ongoing evolution of the web towards adding semantics to web resources, and of the available tools. It is implemented using an ontology (VIDEMO), a knowledge representation formalism that allows machine-readable descriptions to be added to vitamin-domain concepts and data. This enables semantic search with precise and accurate results through the developed web application.
Deficiency monitoring, Vitamin domain, Semantic search, Ontology
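A toy triple-store query in the spirit of the ontology-backed search described above; the facts and property names below are invented for illustration and are not the VIDEMO ontology.

```python
# Knowledge as subject-predicate-object triples, the basic shape of RDF
# data an ontology-backed search would query (e.g. via SPARQL).
TRIPLES = [
    ("VitaminC", "deficiencyCauses", "Scurvy"),
    ("VitaminD", "deficiencyCauses", "Rickets"),
    ("VitaminC", "foundIn", "Citrus"),
]

def query(subject=None, predicate=None, obj=None):
    """Match triples against an optional pattern; None acts as a wildcard."""
    return [t for t in TRIPLES
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]
```

Because the relations are explicit, a query like "what does vitamin C deficiency cause" returns a precise answer instead of a page of keyword matches.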
Menna Maged Kamel1, Alberto Gil-Solla2 and Manuel Ramos-Cabrer3, 1Department of Computer Science, Arab Academy for Science, Technology & Maritime Transport, Cairo, Egypt, 2,3Department of Telematics Engineering, University of Vigo, Vigo, Spain
Crowdsourcing allows building online platforms that use the power of human intelligence to complete tasks that are difficult for current algorithms to tackle. Current approaches to crowdsourcing publish tasks on specialized web platforms to a group of networked workers, who can freely pick their preferred tasks on a first-come, first-served basis. Although this approach has several advantages, it does not consider workers' differences and capabilities. With the vast number of tasks posted by requesters every day, it is challenging to satisfy both workers and requesters. In this paper, a crowdsourcing recommendation approach based on a push methodology is proposed and evaluated. This method aims to help workers instantly find the best-matching tasks according to their interests and qualifications, as well as to help requesters pick the best workers from the crowd for their tasks.
Crowdsourcing, Task Recommendation, Recommendation Systems, Classification.
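Push-style matching of this kind can be caricatured as scoring each task by the overlap between its required skills and a worker's profile; the schema below is invented for illustration and is not the paper's recommendation model.

```python
def score(task_skills, worker_skills):
    """Fraction of the task's required skills the worker possesses."""
    need = set(task_skills)
    return len(need & set(worker_skills)) / len(need) if need else 0.0

def recommend(tasks, worker_skills, k=3):
    """Push the k tasks that best match the worker's skill profile."""
    ranked = sorted(tasks, key=lambda t: score(t["skills"], worker_skills),
                    reverse=True)
    return ranked[:k]
```

A real system would fold in interests, qualification tests and past performance rather than raw skill overlap, but the push direction (tasks routed to workers, not picked by them) is the same.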
Hind Baaqeel and Rachid Zagrouba, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Saudi Arabia
The technology of the Internet of Things (IoT) has become widespread due to its ability to provide machine-to-machine communication without human intervention, a feature that promises users a better way of living when implemented in their environments. However, employing IoT requires sufficient user authentication to prevent intruders from accessing the IoT network. User biometrics have been incorporated in many recent solutions since they can deliver a highly secure authentication process. This paper presents an overview of the latest proposed biometric authentication schemes for IoT environments. It highlights the advantages and disadvantages of different solutions, which constitutes a fundamental first step for researchers and leads to the next research goal: defining the requirements for a sufficient biometric authentication scheme.
IoT, Biometrics, Fog Computing, User Authentication, Security
Anjali Rawat and Shahid Ali, Department of Information Technology, AGI Institute, Auckland, New Zealand
Regression testing is very important for dynamic verification: it re-runs a suite of test cases periodically and after major changes in the design or its environment, to check that no new bugs have been introduced. A survey of 115 software professionals provides evidence for the benefits of automation testing: it saves time and cost because test scripts can be re-run again and again much faster than manual testing, it provides more confidence in the quality of the product, it increases the ability to meet schedules, and it significantly reduces the effort required from testers. In addition, an automated regression suite can exercise the whole software every day without much manual effort, and bug identification is easier after incorrect changes have been made. Genius is under continuous development and requires repeated testing to check whether new feature implementations have affected existing functionality. Erudite also faces an issue in validating Genius installations at client sites, since this requires testers to be available to check the critical functionality of the software manually. Erudite therefore wants an automated regression suite for Genius that can be executed at a client site to check the functionality of the software, and that will also help the testing team validate whether newly added features affect the existing system. Visual Studio, Selenium WebDriver, VisualSVN and Trello are the tools used to create the automated regression suite. This research will provide guidelines to future researchers on how to create an automated regression suite for any web application using open-source tools.
Automation testing, Regression testing, Visual Studio, C#, Selenium Webdriver, Agile- Scrum
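A skeletal automated regression suite using Python's built-in unittest runner illustrates the structure described above; the function under test is a placeholder, and a real suite for a web application like Genius would drive the UI through Selenium WebDriver instead of calling local functions.

```python
import unittest

# Placeholder for the behavior under regression test.
def login(user, password):
    return user == "admin" and password == "secret"

class RegressionSuite(unittest.TestCase):
    # Each test pins down one piece of critical functionality so that any
    # change breaking it is caught on the next automated run.
    def test_valid_login(self):
        self.assertTrue(login("admin", "secret"))

    def test_invalid_login(self):
        self.assertFalse(login("admin", "wrong"))

# Run the whole suite programmatically, as a client-site check would.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite))
```

Because the suite is a single command, it can be re-run at every client installation or after every feature merge with no tester on site.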
Marty Kelley, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
The manufacturing industry is changing rapidly, due in part to the widespread adoption of information, communication and operational technologies. This new landscape, described as the fourth industrial revolution, will be characterized by highly complex and interdependent systems. One particular aspect of this industrial paradigm shift is horizontal integration, the tight coupling of firms within a value chain. Highly interconnected and interdependent manufacturing systems will encounter new challenges associated with coordination and collaboration, specifically with regard to trust. Data was collected from manufacturing professionals to explore the nature of trust and the potential use of a blockchain as a collaboration mechanism. Concepts from game theory, systems theory and organizational economics are used to augment the research data and inform a collaborative manufacturing blockchain model and architecture.
Manufacturing, Collaboration, Industry 4.0, Horizontal integration, Trust, Blockchain, Smart Contracts, Provision Point, Payback Mechanism, User Boundaries, Resource Boundaries, Provision of Public Goods, Governance
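The provision-point (assurance-contract) mechanism named in the keywords can be sketched as an escrow that releases funds only when the goal is met and refunds all pledges otherwise; this toy omits deadlines, payback bonuses and the blockchain itself, and its class and method names are invented.

```python
class ProvisionPoint:
    """Assurance contract: fund only if total pledges reach the goal."""

    def __init__(self, goal):
        self.goal = goal
        self.pledges = {}

    def pledge(self, who, amount):
        # Contributions are held in escrow until settlement.
        self.pledges[who] = self.pledges.get(who, 0) + amount

    def settle(self):
        total = sum(self.pledges.values())
        if total >= self.goal:
            return ("funded", total)                 # escrow released
        return ("refunded", dict(self.pledges))      # everyone repaid
```

The refund guarantee removes the risk of contributing to a project that never reaches viability, which is what makes the mechanism attractive as a trust-building collaboration device among value-chain partners; encoded as a smart contract, the escrow logic executes without a trusted intermediary.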
Albert F. H. M. Lechner and Steve R. Gunn, School of Electronics and Computer Science, University of Southampton, UK
Sales forecasts are essential to every business's strategic plans and can both save the business money and increase its competitive advantage. However, many businesses underestimate the opportunities accurate forecasts provide and rely on judgemental forecasts from experts within the business. Machine learning and statistical forecasting methods are used by both academics and practitioners to increase forecast accuracy, and can be further improved by the newly developed dynamic cluster-based Markov model presented in this work. The approach gathers global sales pipeline data to build a short-term sales forecast, and the prediction of future sales for the next three months improves over a regular Markov transition model. The new model can support short-term planning, enabling regional and product-specific forecasting to steer business activities to their targets and remain profitable.
Demand forecast, Time series data, Clustering, Markov model
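A plain Markov transition model of a sales pipeline (the kind of baseline the paper improves on) can be sketched as follows; the stages and transition probabilities are invented for illustration.

```python
import numpy as np

# Deals move monthly between pipeline stages with fixed probabilities;
# "won" and "lost" are absorbing. All numbers are illustrative.
STAGES = ["lead", "qualified", "won", "lost"]
P = np.array([
    [0.5, 0.3, 0.0, 0.2],   # lead
    [0.0, 0.5, 0.3, 0.2],   # qualified
    [0.0, 0.0, 1.0, 0.0],   # won (absorbing)
    [0.0, 0.0, 0.0, 1.0],   # lost (absorbing)
])

def forecast(counts, months):
    """Expected number of deals in each stage after `months` steps."""
    return np.asarray(counts) @ np.linalg.matrix_power(P, months)
```

A dynamic cluster-based variant would replace the single fixed matrix with transition matrices estimated per cluster of similar deals, which is the direction of the improvement described above.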