Anthony A. Adole1, Dr. Chris Bearchell2 and Prof. Eran Edirisinghe1, 1Department of Computer Science, EPSRC Centre for Doctoral Training in Embedded Intelligence, Loughborough University, Leicestershire, UK, 2Surface Intelligence, Oxford, UK
In recent years, the detection and recognition of offline handwritten characters has been a major task in the computer vision field. Researchers are looking to develop deep learning models that avoid the traditional approaches, which involve the tedious use of conventional methods for feature extraction and localization. However, state-of-the-art object detection models rely upon region proposal algorithms to hypothesize object locations; while advances have reduced the running time of the detection networks themselves, this exposes region proposal computation as a bottleneck. Faster R-CNN is a popular model used in many recognition tasks. The goal of this paper is to serve as a guide for multi-class classification of offline handwritten documents using a pre-trained Faster R-CNN with an Inception ResNet v2 feature extractor. The results obtained from the experiments show that improved pre-trained models can be used to solve the research question concerning handwriting detection and recognition.
Offline Handwriting Recognition and Detection, Faster R-CNN, Inception ResNet v2, Kanji Handwriting, Japanese Offline Documents
Yueqi Han1,2, Bo Yang1,2, Yun Zhang1, Bojiang Yang1 and Yapeng Fu1,2, 1College of Meteorology and Oceanography, National University of Defense Technology, Nanjing, China, 2National Key Laboratory on Electromagnetic Environmental Effects and Electro-optical Engineering, PLA Army Engineering University, Nanjing, China
Data assimilation (DA) for non-differentiable parameterized moist physical processes is a complicated and difficult problem, which may result in discontinuity of the cost function (CF) and the emergence of multiple extreme values. To solve this problem, this paper proposes an inner/outer-loop ensemble-variational algorithm (I/OLEnVar) for DA. It uses several continuous sequences of local linear quadratic functions with single extreme values to approximate the actual nonlinear CF, so that the extreme-point sequences of these functions converge to the global minimum of the nonlinear CF. This algorithm requires no adjoint model and no modification of the original nonlinear numerical model, so it is convenient and easy to apply when assimilating observational data during non-differentiable processes. Numerical experimental results of DA for the non-differentiable problem in moist physical processes indicate that the I/OLEnVar algorithm is feasible and effective: it increases assimilation accuracy and thus obtains satisfactory results. This algorithm lays the foundation for applying the I/OLEnVar method to the assimilation of precipitation observational data in numerical weather prediction (NWP) models.
Ensemble-variational Data Assimilation, Non-differentiable, Inner/Outer Loop
Ting-Yu Lin, Chia-Min Lai and Chi-Wei Chen, Institute for Information Industry, Taipei, R.O.C
With the advent of the Internet of Things era, the number of related wireless devices is increasing, and communication between devices forms abundant and complex information networks. Security and trust between devices have therefore become a huge challenge. Traditional identification methods use identifiers such as hash-based message authentication codes and keys to mark a message so that the receiving end can verify it. However, such identifiers are easy to tamper with. Recently, researchers have therefore proposed using the RF (radio frequency) fingerprint for identification. Our paper demonstrates a method that extracts these properties and identifies each device. We achieved a high identification rate of 99.9% accuracy in experiments where the devices communicate using the Wi-Fi protocol. The proposed method can be used as a stand-alone identification feature or as part of two-factor authentication.
Internet-of-Things (IoT), Authentication, RF fingerprint, Machine Learning (ML), Device Identification
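The identification pipeline described above can be sketched as supervised classification over features extracted from radio signals. The sketch below is illustrative only: the synthetic per-device feature offsets stand in for real hardware impairments (such as carrier frequency offset or IQ imbalance), and the feature set and random-forest classifier are assumptions, not the paper's actual extraction method or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_devices, samples_per_device, n_features = 5, 200, 8

# Each device gets a fixed "fingerprint" offset (hypothetical stand-in for
# hardware impairments); observed features are noisy readings around it.
device_offsets = rng.normal(0.0, 1.0, (n_devices, n_features))
X = np.vstack([
    rng.normal(device_offsets[d], 0.2, (samples_per_device, n_features))
    for d in range(n_devices)
])
y = np.repeat(np.arange(n_devices), samples_per_device)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"device identification accuracy: {acc:.3f}")
```

With well-separated device offsets the classifier identifies devices almost perfectly, mirroring the high identification rates reported for real RF fingerprints.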
Hasara Maithree, Dilan Dinushka and Adeesha Wijayasiri, Department of Computer Science and Engineering, University of Moratuwa, Moratuwa, Sri Lanka
Much research has been carried out on change detection using temporal SAR images. In this paper, an algorithm for change detection using SAR videos is proposed. SAR videos pose various challenges, such as a high level of speckle noise, rotation of the SAR image frames of the video around a particular axis due to the circular movement of the airborne vehicle, and non-uniform backscattering of SAR pulses. Hence, conventional change detection algorithms used for optical videos and temporal SAR images cannot be directly applied to SAR videos. We propose an algorithm that combines optical flow calculation using the Lucas-Kanade (LK) method with blob detection. The developed method follows a four-step approach: image filtering and enhancement, applying the LK method, blob analysis, and combining the LK method with blob analysis. The performance of the developed approach was tested on SAR videos available on the Sandia National Laboratories website and on SAR videos generated by a SAR simulator.
Remote Sensing, SAR videos, Change Detection
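The LK-plus-blob pipeline can be sketched in a minimal form. The code below is a hedged illustration, not the authors' implementation: it estimates a single translational flow over a synthetic frame pair via the Lucas-Kanade least-squares solution, then labels changed regions by thresholding the frame difference (the 0.05 threshold is an arbitrary choice for this toy data).

```python
import numpy as np
from scipy import ndimage

def lucas_kanade_flow(frame1, frame2):
    """Least-squares Lucas-Kanade estimate of a single (vx, vy) translation
    over the whole window: solve Ix*vx + Iy*vy = -It in the LS sense."""
    Ix = np.gradient(frame1, axis=1)
    Iy = np.gradient(frame1, axis=0)
    It = frame2 - frame1
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v  # (vx, vy)

# Synthetic frame pair: a smooth blob shifted one pixel to the right.
x, y = np.meshgrid(np.arange(64), np.arange(64))
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
frame1, frame2 = blob(30, 32), blob(31, 32)

vx, vy = lucas_kanade_flow(frame1, frame2)

# Blob analysis on the changed pixels: connected-component labelling.
changed = np.abs(frame2 - frame1) > 0.05
_, n_blobs = ndimage.label(changed)
print(f"flow ({vx:.2f}, {vy:.2f}); changed regions: {n_blobs}")
```

The recovered flow is close to the true one-pixel rightward shift; on real SAR frames, speckle filtering would precede both steps, as the abstract's first stage indicates.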
MinhTri Tran, Anna Kuwana and Haruo Kobayashi, Division of Electronics and Informatics, Gunma University, Kiryu 376-8515, Japan
A derivation and measurement of the self-loop function for a low-pass Tow-Thomas biquadratic filter are introduced. The self-loop function of this filter is derived and analyzed based on the widened superposition principle, and an alternating-current conservation technique is proposed to measure it. The research results show that the selected passive components (resistors, capacitors) of the frequency compensation of the Miller capacitors in the operational amplifier and the Tow-Thomas filter can cause damped oscillation noise when the stability conditions for the transfer functions of these networks are not satisfied.
Superposition, Self-loop Function, Stability Test, Tow-Thomas Biquadratic Filter, Voltage Injection
MinhTri Tran, Anna Kuwana, and Haruo Kobayashi, Division of Electronics and Informatics, Gunma University, Kiryu 376-8515, Japan
A stability test for RLC low-pass filters is presented. The self-loop functions of these filters are derived and analyzed based on the widened superposition principle, and an alternating-current conservation technique is proposed to measure the self-loop function. An active inductor is replaced with a general impedance converter. Our research results show that the values of the selected passive components (resistors, capacitors, and inductors) in these filters can cause damped oscillation noise when the stability conditions for the transfer functions of these networks are not satisfied.
Widened Superposition, RLC Low-Pass Filter, Stability Test, Self-loop Function, Voltage Injection
Ye-Shun Shen, Fang-Biau Ueng* and Hung-Sheng Wang, Department of Electrical Engineering, National Chung-Hsing University, Taichung, Taiwan
Single-carrier frequency-division multiple access (SC-FDMA) has been adopted as the uplink transmission standard in fourth-generation cellular networks to enable power-efficient transmission in mobile stations. Since multiuser multiple-input multiple-output (MU-MIMO) is a promising technology for fully exploiting the channel capacity of mobile radio networks, this paper investigates the uplink transmission of a MU-MIMO SC-FDMA system with orthogonal space-frequency block codes (SFBC). It is preferable to minimize the length of the cyclic prefix (CP) to improve transmission energy and spectrum efficiency, and several techniques for block transmission without a CP have been investigated: CP removal at the transmitter is compensated by CP reconstruction at the receiver, where only the past interference components are considered. In this paper, chained turbo equalization with chained turbo estimation is employed in the designed receiver. The chained turbo estimation employs a short training sequence (TR), which improves spectrum efficiency without sacrificing estimation accuracy. We propose a novel, spectrally efficient iterative joint channel estimation, multiuser detection and turbo equalization scheme for a MU-MIMO SC-FDMA system without a CP and with a short TR. Simulation examples for the uplink scenario are given to demonstrate the effectiveness of the proposed scheme.
MU-MIMO SC-FDMA, chained turbo equalization, chained turbo estimation
Abdulhamit Subasi, Saeed Mian Qaisar, Effat University, College of Engineering, Jeddah, 21478, Saudi Arabia
In cerebral surgery, the localization of epileptic foci is an elementary step. It is carried out by detecting seizures in electroencephalographic (EEG) recordings. In this framework, EEG signals are composed of two classes, focal and non-focal. Focal signals are captured from brain areas in which the initial changes of the ictal EEG are sensed, while non-focal signals are recorded from brain areas that are not involved at seizure onset. A new focus-area localization method is introduced, based on various ensemble machine learning strategies and a signal processing approach. The efficiency of the proposed method is assessed using classification accuracy, the area under the receiver operating characteristic (ROC) curve (AUC), and the F-measure. The system attains 98.8% accuracy, which confirms the potential of using the proposed solution in modern EEG analysis systems.
Electroencephalogram (EEG), Auto regressive (AR) method, Ensemble Machine Learning Methods.
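The combination of autoregressive (AR) features with an ensemble classifier can be sketched as follows. This is an illustrative toy, not the paper's pipeline: the two classes are simulated AR(1) processes with different coefficients standing in for focal and non-focal EEG, the Yule-Walker estimate is a standard AR fit, and the random forest is just one possible choice among the ensemble strategies the abstract mentions.

```python
import numpy as np
from scipy.linalg import toeplitz
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def ar_features(sig, order=4):
    """AR coefficients estimated from the Yule-Walker equations."""
    s = sig - sig.mean()
    r = np.array([s[:len(s) - k] @ s[k:] for k in range(order + 1)]) / len(s)
    return np.linalg.solve(toeplitz(r[:order]), r[1:])

def simulate_ar1(phi, n, rng):
    """Surrogate signal: x[t] = phi * x[t-1] + white noise."""
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

rng = np.random.default_rng(1)
# Hypothetical stand-ins: phi = 0.5 for one class, phi = 0.9 for the other.
X = np.array([ar_features(simulate_ar1(phi, 512, rng))
              for phi in [0.5] * 100 + [0.9] * 100])
y = np.array([0] * 100 + [1] * 100)

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```

Because the AR coefficients of the two surrogate classes are well separated, the ensemble separates them almost perfectly; on real focal/non-focal EEG the feature and model choices would of course matter far more.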
Amulya Sri Pulijala and Suryakanth V Gangashetty, International Institute of Information Technology, Hyderabad, India
The concepts of raga and tala are integral parts of Indian classical music: raga is the melodic component, while tala is the rhythmic component. Hence, tala classification and identification is a paramount problem in the area of Music Information Retrieval (MIR) systems. Although there are seven basic talas in Carnatic music, their further subdivision gives a total of 175 talas. Statistical and machine learning approaches have been proposed in the literature to classify talas; however, they use the complete musical recording for training and testing. In this paper, a novel approach is proposed, for the first time in Carnatic music, to classify talas using repetitive structures called thumbnails.
Tala Classification, Carnatic Music, Audio Thumbnails, SVM, CNN.
Fouzia Adjailia, Diana Olejarova and Peter Sincak, Department of Cybernetics and Artificial Intelligence, Technical University of Kosice, Kosice, Slovak Republic
Facial expressions are an important communication channel among human beings. The classification of facial expressions is a research area that has been explored in several fields in recent years; it provides insight into how humans express their emotions, which can be used to identify a person's emotional state. In this paper, we provide the basic outlines of both two-dimensional and three-dimensional facial expression classification, covering a number of concepts in detail and the extent of their influence on the classification process. We also compare the accuracy of the proposed two-dimensional (2D) and three-dimensional (3D) models using algorithms based on convolutional neural networks, trained on the commonly used Bosphorus dataset. Using the same experimental setup, we discuss the results obtained in terms of accuracy and set a new challenge in the classification of facial expressions.
Convolutional Neural Network, Facial Expression Classification, Bosphorus, Voxel Classification.
Tubonimi Jenewari and David Mulvaney, Mechanical, Electrical and Manufacturing Engineering, Loughborough University, Loughborough, UK
The aim of this research is to provide a framework for prototyping an executable model of a distributed embedded system, covering both hardware and software, so that the modelling and implementation process allows seamless execution of the distributed system at the functional, hardware-simulation and hardware-realization levels. An example setup is an ABS (anti-lock braking system) comprising system components (sensors, actuators and ECUs) interconnected through the vehicle communication bus. A software vehicle model provides the vehicle dynamics, and a model of the ABS mimics its functionality. Multiple processors (ECUs) are interconnected in a distributed format to reduce execution time.
All of this is captured in the Unified Modelling Language (UML), a standard that allows the design idea to be captured and expressed clearly. The UML notation is converted to Extensible Markup Language (XML), and a parser is written to extract the necessary class information (the classes into which the simulation code will be inserted) from the many lines of XML, to be transferred to the ECU for execution of the simulation. The simulation is carried out using GDB.
Prototyping, distributed embedded system, executable model, UML
Sudeep Kumar, Deepak Kumar Vasthimal, and Musen Wen, eBay Inc., 2025 Hamilton Ave, San Jose, CA 95125, USA
Today, a plethora of distributed applications are managed on internally hosted cloud platforms. Such managed platforms are often multi-tenant by nature and not tied to a single use case. The smaller infrastructure footprint of a managed cloud platform brings its own set of challenges, especially when applications are required to be infrastructure-aware for quicker deployments and response times, and it is often challenging to quickly spawn ready-to-use instances or hosts on such infrastructure. In this paper, we outline mechanisms to quickly spawn ready-to-use, infrastructure-aware application instances. In addition, the paper proposes an architecture that provides high availability to the deployed distributed applications.
Cloud Computing, Virtual Machine, Elastic, Elasticsearch, Consul, Cache, Java, Kibana, MongoDB, High Performance Computing, Architecture.
Liqun Ding, School of Transportation, Wuhan University of Technology, Wuhan, Hubei, 430063 & Logistics College, Wuhan Technical College of Communications, Wuhan, Hubei, 430065, China
Aiming at the problem of multi-modal transportation organization optimization, and making full use of the advantages of cloud-platform artificial intelligence technology and big data, the general idea, process and principles of a multi-modal transportation dispatch strategy are designed. The primary consideration in the top-level design of a multimodal transport system is the degree of matching and coordination between port facilities and the transport information platform. In container transportation, there exists a mixed time window consisting of a hard time window and a soft time window, and the scale effect of transportation is the result of a combination of internal and external factors. This paper studies the coordination of transport organization, considering the scale effect under the constraint of a mixed time window. It elaborates the functions and operational status of the relatively independent information systems of railways and ports, and attempts to establish an electronic platform suitable for the information interconnection and interoperability of multimodal transport stations, in combination with the traditional information exchange mode.
Logistics, Cloud Platform, information system, collaborative, operation.
Mridula Korde, Department of Electronics and Communication Engineering, Shri Ramdeobaba College of Engineering and Management, Nagpur, India
Increasing Internet data traffic has driven the capacity demands on currently deployed 3G and 4G wireless technologies, and intensive research toward 5th-generation (5G) wireless communication networks is now progressing on many fronts, with 5G technologies expected to be in use around 2020. Moving toward 5G, network synchronization is expected to play a key role in the successful deployment of the new mobile communication networks. Synchronization is an essential prerequisite for all mobile networks to operate: it is fundamental to data integrity, and without it data will suffer errors and networks can suffer outages. Loss-of-synchronization problems can be difficult to diagnose and resolve quickly, adding to operational costs. Poor synchronization affects customer satisfaction and therefore affects revenue as well. This paper presents the synchronization requirements and related aspects of upcoming 5G technologies.
5G, MIMO, Synchronization
Rosli Uzairi and Ir. Badrul Zaman Adnan, Department of Research and Development, MIMOS Berhad, Malaysia
The implementation of communication technology and infrastructure has its challenges, especially in rural areas, where issues such as basic infrastructure, coverage area and the right applications to suit local needs must be explored in order to bridge the digital divide and mitigate the gap between urban and rural areas in terms of internet literacy. The project in Kiulu, Sabah presents our involvement in the planning, wireless infrastructure design, site survey, interaction with local authorities and communities, site preparation and implementation, and the operation and management of a community-based communications solution. There was no internet access in this area prior to the project, so the effort received overwhelming support from the public. The contributions of this exercise include the sharing of deployment experience, together with the successful execution of a locally developed radio in mesh and point-to-point topologies.
Community Wi-Fi, Wi-Fi Mesh, Rural Areas, Rural Wi-Fi Solutions
Mason Chen, OHS, Stanford University, Palo Alto, USA
This paper addresses altitude sickness risk when hiking on high mountains. Hiking can be very risky if people are not aware of altitude sickness symptoms such as fatigue, headache, dizziness, insomnia, shortness of breath during exertion, nausea and decreased appetite, and the consequences of altitude sickness can be dangerous on remote high mountains. A pulse oximeter was used to monitor oxygen saturation and heart rate at different altitude levels: near sea level in San Jose, Denver (5,000 feet), Estes Park (8,000 feet) and the Rocky Mountains Alpine Center (12,000 feet). A 2.5-minute jump-rope exercise was conducted to analyze the fatigue behavior associated with altitude sickness. Statistical analysis was conducted to test several hypotheses for predicting altitude sickness risk as well as exercise fatigue behavior. This paper demonstrates how hikers can assess their body strength and readiness before undertaking a strenuous hike on high mountains.
JMP, Statistics, Altitude Sickness, Data Mining, AI
Emeka Ogbuju, Federal University Lokoja, Nigeria
Big data has been defined in terms of the V-dimensions, namely volume, variety and velocity, to mention a few. It is within the context of this definition of big data that some database models have been faulted and departure from their usage contemplated by the database community. The drive towards a model that fits all the dimensions of data, as proposed by several researchers, may end up a mirage, given that the application area determines the priority each dimension gets in a software development project. A transaction-laden application may demand more of the volume dimension of big data, and a guarantee of the ACID properties of its transactions, than a variety of data types. Given that not every application requires all the dimensions, this paper takes the view that it may yield more results if database models are rated and used on the basis of their inherent strengths, augmented by the extent to which they can be made adaptive to some or all of the V-dimensions of data. Based on this submission, a volume-adaptive big data model of the relational database model is proposed. The model partitions a relation such that the sum of all partitions makes up the original relation. The query times of equivalent queries on the original relation and on any of the partitions show that the partition queries are well optimised relative to the query on the original relation. The partitions are scalable across several servers; in this way, the model adapts to the volume dimension of data while retaining the ACID properties of the relational database model.
Big Data, V-dimensions of data, adaptive model of relational DBMS, application prototypes, NoSQL, ACID properties
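The partitioning idea, a relation split horizontally so that the union of the partitions reproduces the original, can be sketched in SQL. The table name, columns and the even two-way split below are assumptions for illustration; the paper's actual partitioning scheme is not detailed in the abstract.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Original relation and two horizontal partitions of it.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
cur.execute("CREATE TABLE orders_p1 (id INTEGER PRIMARY KEY, amount REAL)")
cur.execute("CREATE TABLE orders_p2 (id INTEGER PRIMARY KEY, amount REAL)")

rows = [(i, float(i % 7)) for i in range(1, 1001)]
cur.executemany("INSERT INTO orders VALUES (?, ?)", rows)
cur.executemany("INSERT INTO orders_p1 VALUES (?, ?)", rows[:500])
cur.executemany("INSERT INTO orders_p2 VALUES (?, ?)", rows[500:])

# The union of the partitions is query-equivalent to the original relation;
# each partition can also be queried (or hosted) independently.
total_orig = cur.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
total_part = cur.execute(
    "SELECT SUM(amount) FROM (SELECT amount FROM orders_p1 "
    "UNION ALL SELECT amount FROM orders_p2)").fetchone()[0]
print(total_orig, total_part)
con.close()
```

In a scaled-out deployment, each partition would live on its own server and a coordinating layer would issue the per-partition queries and combine the results, while each server's DBMS still enforces ACID locally.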
Wei-hong WANG, Hong-yan LV, Yu-hui CAO, Lei SUN, Qian FENG, School of Information Technology, Hebei University of Economics and Business, Shijiazhuang, China
Student behavior analysis plays an increasingly important role in educational data mining research, but it lacks systematic analysis and summary. Based on a reading of a large body of literature, this paper comprehensively combs and elaborates the overall framework, methods and applications of this research. First, statistical analysis and knowledge-map analysis of the literature on student behavior analysis in the CNKI database are carried out, from which research trends and hot spots are obtained. Then, from the different perspectives of the overall process and the technical support of student behavior analysis, the overall framework of the research is constructed, and student behavior evaluation indicators, student portraits, and the tools and methods used are highlighted. Finally, the principal applications of student behavior analysis are summarized and future research directions are pointed out.
Student Behavior, Knowledge Graph, Behavior Analysis, Student Portraits, Data Mining
Fred N. Kiwanuka, Louay Karadsheh, Ja’far alqatawna, and Anang Hudaya Muhamad Amin, Higher Colleges of Technology, Dubai Men’s College, Dubai
The problem of developing a high-quality employee schedule has been studied by many researchers, and automated scheduling is now widely used by many organizations. The challenge is how to develop a quality schedule that effectively caters for employee needs and satisfaction. During employee scheduling, many constraints have to be considered, which may require negotiating a large number of hard and soft constraints; these constraints make scheduling a complex task. The problem with current scheduling models is that they are rigid and always sacrifice the soft constraints. Current scheduling algorithms are mostly modelled as NP problems or constraint optimization problems, which come with massive computational complexity. In this research, we propose a machine learning approach that mines user-defined (soft) constraints and transforms them into a classification problem. We propose automatically extracting employees' personal schedules, such as calendars, in order to determine their availability. We then show how to use the extracted knowledge to formulate a machine learning problem that generates a schedule for faculty staff in a university that supports flexible working. We show that the results of this approach are comparable to those of a constraint satisfaction and optimization method commonly used in the literature. Although our approach satisfied 86.4% of all constraints, compared to 92.7% for a common SAT-solver approach, it was simpler and faster to implement.
Scheduling, Constraint Programming, Data Mining, Machine Learning, Deep Learning
Mohammed I. Alghamdi, Department of Computer Science, Al-Baha University, Al-Baha City, Kingdom of Saudi Arabia
Rapid technological advancement has led the entire world to shift towards the digital domain. However, this transition has also resulted in the emergence of cybercrimes and security breach incidents that threaten the privacy and security of users. Therefore, this paper examines the use of digital forensics in countering cybercrime, which has been a critical breakthrough in cybersecurity. The paper analyzes the most recent trends in digital forensics, including cloud forensics, social media forensics, and IoT forensics. These technologies help cybersecurity professionals use the digital traces left by data storage and processing to keep data safe while identifying cybercriminals. However, the research also observes specific threats to digital forensics, including technical, operational and personnel-related challenges: the high complexity of these systems, the large volume of data, the chain of custody, the integrity of personnel, and the validity and accuracy of digital forensics are major threats to its large-scale use. Nevertheless, the paper also identifies USB forensics, intrusion detection and artificial intelligence as major opportunities that can make digital forensic processes easier, more efficient, and safer.
Digital forensics, data security, cybercrime, data theft, security attack.
Gustavo A. Lado and Enrique C. Segura, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires
This paper presents a new technique for efficient coding of high-dimensional vectors, overcoming the typical drawbacks of classical approaches, both those based on local representations and those based on distributed codifications. The main advantages and disadvantages of those classical approaches are reviewed, and a novel, fully parameterized strategy is introduced to obtain representations with intermediate levels of locality and sparsity, according to the necessities of the particular problem at hand. The proposed method, called COLOSSUS (COding with LOgistic Softmax Sparse UnitS), is based on an algorithm that permits a smooth transition between both extreme behaviors, local and distributed, via a parameter that regulates the sparsity of the representation. The activation function is of the logistic type. We propose an appropriate cost function and derive a learning rule that turns out to be similar to Oja's Hebbian learning rule. Experiments are reported that show the efficiency of the proposed technique.
Neural Networks, Sparse Coding, Autoencoders
Haixin Wang1,2 and Jianxin Shen1, 1College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China and 2School of Mechanical & Aerospace Engineering, Queen's University Belfast, Belfast, UK
One of the challenges for autonomous aircraft is safe and reliable navigation in urban or indoor environments. The path planning of an aerial robot is a complicated task due to factors such as the decreasing accuracy of the global positioning system (GPS), narrow spaces, and the dynamic movement of obstacles. To navigate effectively in such an environment, one of the skills an agent needs is the ability to avoid collisions. In this study, we investigated a possible approximation, formulated as a partially observable Markov decision process, to improve the performance of autonomous UAVs in GPS-free environments by combining it with the recently developed A3C reinforcement learning approach. Developing and testing drone algorithms in the real world is expensive and time-consuming, and taking advantage of current advances in machine intelligence requires extensive training and testing to capture changes in conditions and environment. This article therefore leverages open-source tools: Microsoft's state-of-the-art drone simulator AirSim and TensorFlow, Google's machine learning framework. The main method of the paper is the Asynchronous Advantage Actor-Critic (A3C) network.
Unmanned aerial vehicle, Asynchronous advantage actor-critic algorithm, Simulation, Path planning
Biraja Ghoshal and Allan Tucker, Brunel University London, Uxbridge, UB8 3PH, United Kingdom
Deep learning has achieved state-of-the-art performance in medical imaging. However, these methods for disease detection focus exclusively on improving the accuracy of classification or prediction without quantifying the uncertainty in a decision. Knowing how much confidence there is in a computer-based medical diagnosis is essential for gaining clinicians' trust in the technology and thereby improving treatment. Today, the 2019 coronavirus (COVID-19) infections are a major healthcare challenge around the world, and detecting COVID-19 in X-ray images is crucial for diagnosis, assessment and treatment. However, diagnostic uncertainty in a report is a challenging yet inevitable task for radiologists. In this paper, we investigate how Dropweights-based Bayesian Convolutional Neural Networks (BCNN) can estimate uncertainty in deep learning solutions to improve the diagnostic performance of the human-machine combination, using a publicly available COVID-19 chest X-ray dataset, and show that the uncertainty in prediction is strongly correlated with the accuracy of the prediction. We believe that the availability of uncertainty-aware deep learning will enable a wider adoption of artificial intelligence (AI) in clinical settings.
Bayesian Deep Learning, Predictive Entropy, Uncertainty Estimation, Dropweights, COVID-19
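The core idea, that averaging stochastic forward passes with randomly dropped weights yields a predictive distribution whose entropy measures uncertainty, can be sketched without any deep learning framework. The toy "network" below is a single dropweighted linear layer, an assumption made only to keep the sketch self-contained; it is not the paper's BCNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predictive_entropy(p):
    return float(-np.sum(p * np.log(p + 1e-12)))

# Toy "network": one linear layer whose individual weights are dropped
# at test time (Bernoulli dropweights), rescaled as in standard dropout.
W = np.array([[2.0, -1.0], [-1.0, 2.0], [0.5, 0.5]])  # 3 classes x 2 features

def stochastic_logits(x, p_drop=0.3):
    mask = rng.random(W.shape) > p_drop
    return (W * mask) @ x / (1.0 - p_drop)

def mc_predict(x, T=200):
    """Average the softmax outputs over T stochastic forward passes."""
    return np.mean([softmax(stochastic_logits(x)) for _ in range(T)], axis=0)

confident_x = np.array([3.0, -3.0])  # strongly favours class 0
ambiguous_x = np.array([0.1, 0.1])   # nearly uninformative input
H_conf = predictive_entropy(mc_predict(confident_x))
H_amb = predictive_entropy(mc_predict(ambiguous_x))
print(f"entropy: confident={H_conf:.3f}, ambiguous={H_amb:.3f}")
```

The ambiguous input yields an entropy near the maximum (ln 3 for three classes) while the confident input yields a much lower value, which is the signal the paper correlates with prediction accuracy.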
Owen Xuereb, Frankie Inguanez and Thomas Gatt, Institute of Information & Communication Technology, Malta College of Arts, Science & Technology, Corradino Hill, Paola PLA 9032, Malta
This research proposes the use of autonomous law enforcement systems to aid in detecting highway code violations. Traditional law enforcement requires an enforcer to be present on site when a contravention occurs, which allows multiple violations to go unnoticed and hence reduces the efficacy of highway code regulations. Automated law enforcement should fill this gap by reducing unnoticed contraventions, thus contributing to safer driving and possibly reducing accidents caused by highway code violations. This research proposes the use of a Convolutional Neural Network (CNN) for object detection using custom-made vehicle and STOP sign datasets. The combination is used to identify a STOP sign and detect any vehicles passing through it. The proposed solution detected 100% of STOP sign violations and non-violations in a controlled environment based on real-life scenarios, with individual model accuracies of 87.8% for cars and 91.4% for STOP signs.
Artificial Intelligence, Convolutional Neural Network, Violation Detection, Vehicle Detection, Highway Code Violations
Jiaming Lu, Sichuan University, China
In order to overcome the non-stationary, correlated and non-linear characteristics of financial time series, the empirical mode decomposition (EMD) algorithm from the engineering field is introduced into an independently recurrent deep learning neural network to build a prediction model (CEEMD-IndRNN). In this model, the non-stationary time series is first decomposed into 11 stationary IMF components using the complementary ensemble empirical mode decomposition (CEEMD) algorithm; each component is trained, validated and tested by an independently recurrent neural network (IndRNN), and the prediction results are then synthesized. Taking the closing price of the Shanghai Composite Index as an example, this paper explores the ability of the CEEMD-IndRNN model to predict actual financial series data, and compares its prediction performance with the ARIMA, IndRNN, LSTM and CEEMD-LSTM models. The empirical results show that the CEEMD algorithm can address the non-stationarity of stock time series and the IndRNN neural network can identify the non-linear characteristics of the stock index time series. The CEEMD-IndRNN hybrid prediction model integrates the advantages of both, thereby improving prediction performance on the Shanghai Composite Index.
Shanghai Securities Composite Index, Complementary ensemble empirical mode decomposition, Independently recurrent neural network, Forecast.
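The decompose–forecast–recombine pattern behind CEEMD-IndRNN can be sketched as follows. A real implementation would use a CEEMD routine to produce the 11 IMF components and an IndRNN per component; the moving-average split and persistence forecaster below are deliberate stand-ins to show only the pipeline shape:

```python
def toy_decompose(series, window=5):
    """Stand-in for CEEMD: split the series into a smooth trend
    (centered moving average) and a residual, so the components
    sum back to the original series."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        chunk = series[max(0, i - half):i + half + 1]
        trend.append(sum(chunk) / len(chunk))
    residual = [s - t for s, t in zip(series, trend)]
    return [trend, residual]

def forecast_component(component):
    """Stand-in for a per-component IndRNN forecaster:
    naive persistence (predict the last observed value)."""
    return component[-1]

def hybrid_forecast(series):
    """Decompose, forecast each component separately, then sum
    the component forecasts (the CEEMD-IndRNN pattern)."""
    components = toy_decompose(series)
    return sum(forecast_component(c) for c in components)
```

Because the residual is defined as series minus trend, the decomposition is exactly additive, mirroring the property that the IMF components reconstruct the original series.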
Hung Lay2, Peiqi Gu1 and Yu Sun2, 1University High School, 2California State Polytechnic University, Pomona, CA 91768
Web Application, ReactJS
Malek I. Hudaib, Computer Science, Babes-Bolyai University, Cluj-Napoca, 400084, Romania
Recently, it has become increasingly apparent that huge effort is directed toward improving the results of software quality models, as software products are involved in almost all life domains. Given the great amount of potential and resources available, compared to the low cost and degree of customization, there is major concern with fulfilling ultimate consumers' needs and meeting their expectations. Based on different software quality models that have been examined across a variety of studies, new models have been created to enrich the quality measurement process. The main aim of this research is to compare these models and identify their variances. The results show a semi-consensus on the Functionality, Efficiency, Maintainability, Usability, Reliability and Portability factors, and no significant difference in the definitions of the factors related to the quality models under study.
Software Quality, Quality Model, Quality Factors/ Characteristic / Attributes
F. Barbato and M. Giacalone, University of Naples Federico II, Italy
This paper aims to show how Big Data analysis can direct scientific investigations in the field of astronomy, in which a considerable amount of data is available. The goal is to use a Big Data and statistical approach to make decisions about the events to be investigated for a specific problem, in particular the analysis of celestial bodies that could meet the conditions of planetary habitability. Starting from an initial set of 120,000 celestial bodies, the statistical approach makes it possible to consider only 2,367 of them, reducing the scientific analyses, and therefore time and cost, by 98%.
Astronomy, Stars, Big Data, planetary habitability
Yusuf Yazici, Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
Credit card fraud is an ongoing problem for almost all industries in the world, and it costs the global economy millions of dollars each year. Therefore, a number of research efforts, completed or ongoing, aim to detect these kinds of fraud in the industry. These studies generally use rule-based or novel artificial intelligence approaches to find eligible solutions. The ultimate goal of this paper is to summarize state-of-the-art approaches to fraud detection using artificial intelligence and machine learning techniques. While summarizing, we categorize the common problems that almost all research works encounter, such as imbalanced datasets, real-time working scenarios, and feature engineering challenges, and identify general approaches to solving them. The imbalanced dataset problem occurs because the number of legitimate transactions is much higher than the number of fraudulent ones, while applying the right feature engineering is substantial because the features obtained from industry are limited, making feature engineering and dataset reformation crucial. Adapting the detection system to real-time scenarios is also a challenge, since the number of credit card transactions in a limited time period is very high. In addition, we discuss how evaluation metrics and machine learning methods differ among studies.
Credit Card, Fraud Detection, Machine Learning, Survey, Artificial Intelligence
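One general approach to the imbalanced dataset problem the survey describes is random undersampling of the majority class. The sketch below is one common technique, not a method attributed to any surveyed paper; the label convention (1 = fraud, 0 = legitimate) is an assumption:

```python
import random

def undersample(transactions, labels, seed=0):
    """Balance a fraud dataset by randomly undersampling the majority
    (legitimate) class down to the size of the minority (fraud) class.
    Returns a shuffled list of (transaction, label) pairs."""
    rng = random.Random(seed)
    fraud = [x for x, y in zip(transactions, labels) if y == 1]
    legit = [x for x, y in zip(transactions, labels) if y == 0]
    legit_sample = rng.sample(legit, k=len(fraud))
    data = [(x, 1) for x in fraud] + [(x, 0) for x in legit_sample]
    rng.shuffle(data)
    return data
```

Oversampling the minority class (e.g., SMOTE) is the complementary strategy when discarding legitimate transactions is too wasteful.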
Neda Navidi, AI Redefined, AI-R Inc., 400 McGill St., Montreal, Canada
Reinforcement-Learning (RL) in various decision-making tasks of Machine-Learning (ML) provides effective results with an agent learning from a stand-alone reward function. However, it presents unique challenges with large amounts of environment states and action spaces, as well as in the determination of rewards. This complexity, coming from the high dimensionality and continuousness of the environments considered herein, calls for a large number of learning trials to learn about the environment through RL. Imitation-Learning (IL) offers a promising solution for those challenges using a teacher. In IL, the learning process can take advantage of human-sourced assistance and/or control over the agent and environment. A human teacher and an agent learner are considered in this study. The teacher takes part in the agent's training towards dealing with the environment, tackling a specific objective, and achieving a predefined goal. Within that paradigm, however, existing IL approaches have the drawback of requiring extensive demonstration information in long-horizon problems. This paper proposes a novel approach combining IL with different types of RL methods, namely State–action–reward–state–action (SARSA) and Asynchronous Advantage Actor-Critic Agents (A3C), to overcome the problems of both stand-alone systems. It addresses how to effectively leverage the teacher's feedback, be it direct binary or indirect detailed, for the agent learner to learn sequential decision-making policies. The results of this study on various OpenAI-Gym environments show that this algorithmic method can be incorporated in different combinations and significantly decreases both human effort and the tedious exploration process.
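The tabular SARSA update named above is standard and can be sketched in a few lines; the idea of folding teacher feedback into the reward as a shaping bonus is an illustrative assumption about how IL and RL might be combined, not the paper's exact mechanism:

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """One tabular SARSA step:
        Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a)).
    Q is a dict keyed by (state, action); unseen pairs default to 0.
    A teacher's binary feedback could be added to r as a shaping term."""
    q_next = Q.get((s_next, a_next), 0.0)
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma * q_next - q)
    return Q[(s, a)]
```

Because SARSA evaluates the action actually taken next (on-policy), teacher-influenced action choices directly shape the learned values, unlike off-policy Q-learning.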
Sheikh Mohamad Arsalan and Farooq Hussain, Department of Computer Engineering, University of Technology Sydney, Sydney, NSW, Australia
Blockchain technology is believed by many to be a game changer in numerous application areas, particularly financial applications. While the first generation of blockchain technology (i.e., Blockchain 1.0) was used exclusively for cryptocurrency purposes, the later generation (i.e., Blockchain 2.0), as represented by Ethereum, is an open and decentralised platform that enables a new paradigm: Decentralized Applications (DApps) running on top of blockchains. The rich applications and semantics of DApps inevitably introduce many security vulnerabilities, which have no counterparts in pure cryptocurrency systems like Bitcoin. Since Ethereum is a new yet complex system, it is critical to have a systematic and comprehensive understanding of its security from a holistic perspective, which is currently unavailable. To the best of our knowledge, the present survey, which can also be used as a tutorial, fills this void. Specifically, we systematise three aspects of Ethereum's security: vulnerabilities, attacks, and defenses. We provide insights into, among other things, vulnerability root causes, attack consequences, and defense capabilities, which shed light on future research.
Blockchain, Ethereum, Security, Smart Contract, Network
Kevin Wallis, Jan Stodt, Eugen Jastremskoj, and Christoph Reich, Furtwangen University of Applied Science, Germany
The digital transformation of companies is expected to increase the digital interconnection between different companies to develop optimized, customized, hybrid business models. These cross-company business models require secure, reliable and traceable logging and monitoring of contractually agreed information sharing between machine tools, operators and service providers. This paper discusses how the major requirements for building hybrid business models can be tackled by blockchain, building a chain of trust, and by smart contracts for digitized contracts. A machine maintenance use case is used to discuss the readiness of smart contracts for the automation of workflows defined in contracts.
Blockchain, Smart Contracts, Industry 4.0, Digitized Agreements, Maintenance
Raimundas Savukynas, Institute of Data Science and Digital Technologies, Faculty of Mathematics and Informatics, Vilnius University, Akademijos 4, LT-08412, Vilnius, Lithuania
The process of launching and linking heterogeneous objects to the Internet of Things (IoT) raises important problems of identification and authentication for ensuring security over wireless connections. The possibilities of connection to the IoT differ across a broad spectrum of equipment, object functionality, communication protocols, etc. This study concerns the implementation of safeguard algorithms in the first stages of object identification and authentication, before an object is permitted to launch into the working area of the IoT. Our application domain concerns the security requirements of the multi-layered infrastructure of objects linked to the whole IoT; such infrastructure has become more complex, with correspondingly greater risks of unsafe behaviour. The aim of this research is to evaluate safety means related to the identification and authentication stages of objects by integrating them with blockchain functionality. The objectives concern the development of safer working algorithms representing the stages of checking object identity. The results demonstrate the possibility of implementing blockchain functionality for establishing and managing operational rules for the pre-connection stages of objects joining the IoT. The paper presents new results on developing protection means for ensuring reliable communication, the transmission of outgoing confidential data, and the integrity of data transmitted from different smart devices. As a result, components of the necessary functional capabilities of IoT communication are developed with the intention of ensuring the safety and reliability of the wireless connection of objects.
Internet of Things (IoT), data transferring, smart environment, safety means, blockchain functionality.
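The core property blockchain functionality brings to object identification records, that tampering with an earlier record invalidates everything after it, can be sketched as a simple hash chain. This is a generic illustration of the idea, not the paper's algorithm; the record format is an assumption:

```python
import hashlib
import json

def add_block(chain, record):
    """Append a device identification record to a hash chain.
    Each block stores the previous block's hash, so altering an
    earlier registration breaks every later hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash to check the chain's integrity."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": prev},
                             sort_keys=True)
        if (block["prev"] != prev or
                hashlib.sha256(payload.encode()).hexdigest() != block["hash"]):
            return False
        prev = block["hash"]
    return True
```

A pre-connection check could then require a device's registration record to sit in a chain that verifies end to end before admission to the IoT working area.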
Jan Stodt and Christoph Reich, Institute for Data Science, Cloud Computing, and IT Security at the University of Applied Sciences Furtwangen, Furtwangen, Baden-Württemberg, Germany
Increased collaborative production and the dynamic selection of production partners in Industry 4.0 manufacturing lead to ever-increasing automatic data exchange between companies. Automatic and unsupervised data exchange creates new attack vectors that a malicious insider could use to leak secrets via an otherwise secure channel without anyone noticing. In this paper we reflect upon approaches to preventing the exposure of secret data via blockchain technology. We show that previous blockchain-based privacy protection approaches offer protection, but hand control of the data to (potentially untrustworthy) third parties, which can itself be considered a privacy violation. The approach taken in this paper does not use centralized data storage; it realizes data confidentiality for P2P communication and data processing in blockchain smart contracts.
blockchain, privacy protection, P2P communication, smart contracts, industry 4.0
Amr Adel, Whitecliffe College of Technology & Innovation, Auckland, New Zealand, and Cyber Forensics Research Centre, Auckland University of Technology, Auckland, New Zealand; Brian Cusack, Cyber Forensics Research Centre, Auckland University of Technology, Auckland, New Zealand
Enhancements in technologies and shifting trends in customer behaviour have resulted in an increase in the variety, volume, veracity and velocity of data available for digital forensic analysis. In order to conduct intelligent forensic investigation, open source information and entity identification must be collected. Organised crime now encompasses drug trafficking, murder, fraud, human trafficking, and high-tech crimes. Criminal intelligence using Open Source Intelligence (OSINT) forensics is established to perform data mining and link analysis to trace terrorist activities. In this paper, we investigate the activities of a suspect employee. Data mining and link analysis are performed to confirm all participating parties and contacted persons in the communications; all possible emails and IP addresses are then traced and the findings reported in a comprehensive report. The proposed solution was to identify the scope of the investigation to limit the results, ensure that expertise and the correct tools are ready for identifying and collecting potential evidence, filter results to reduce the large amount of data to the range needed for the investigation, and, as the final stage, analyse and examine the data to extract useful information and load entity information into charting software. The enhanced information and knowledge achieved are advantageous in research. This form of intelligence building can significantly support real-world investigations with efficient tools. A major advantage of analysing data links in digital forensics is that case-related information may be included within otherwise unrelated databases.
Heba Almorad, Sara Helal, and Abdulhamit Subasi, College of Engineering, Effat University, Jeddah, Saudi Arabia
A chatbot is an intelligent conversation simulator that interacts with users using natural language. This technology has the power to give the illusion of a true human-to-human interaction in a wide range of applications, e.g., health consultation. Due to the explosive demand for Coronavirus disease (COVID-19) awareness guidelines, this paper proposes an Arabic chatbot (Arabot) that answers questions about the disease to take the burden off consultation centres. Many websites and organizations have provided tips and advice to prevent spreading the virus to others; however, none have done so in an Arabic, interactive, and intelligent manner. Thus, Arabot is a retrieval-based, self-learning program trained on the World Health Organization (WHO) database to provide reliable assistance.
Artificial intelligence (AI), chatterbot, Arabic chatbot, Python, chatterbot corpus, neural network (NN), heuristics, coronavirus disease (COVID-19), World Health Organization (WHO).
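The core of a retrieval-based chatbot like the one described is matching a user's question against stored question–answer pairs and returning the best match's answer. The bag-of-words cosine similarity below is a minimal generic sketch (shown with English strings for readability), not Arabot's actual matching model:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def answer(question, faq):
    """Return the answer whose stored question is most similar to the
    user's question; faq is a list of (question, answer) pairs."""
    q_vec = Counter(question.lower().split())
    best = max(faq, key=lambda qa: cosine(q_vec, Counter(qa[0].lower().split())))
    return best[1]
```

A production system would add a similarity threshold with a fallback response, plus language-specific tokenization and normalization for Arabic text.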
John Kalung Leung, Igor Griva and William G. Kennedy, George Mason University, USA
We extend the concept of using an active user's emotion embeddings and movies' emotion embeddings to evaluate a Recommender's top-N recommendation list, as illustrated in a previous paper, to encompass the emotional features of a film as a component of building Emotion Aware Recommender Systems. Using textual movie metadata, we develop a comparative platform consisting of five recommenders based on content-based and collaborative filtering algorithms. We then apply the movie emotion embeddings, obtained by classifying the emotional features of movie overviews with a Tweets Emotion Classifier we have developed, to add an emotional dimension of embeddings to the Recommender. The Emotion Aware Recommender's top-N recommendation list shows intriguing results that are quite different from those of its peers. We reckon that this top-N list, which matches the active user's emotional profile, is useful for providing serendipitous recommendations and remedying the cold start problem commonly present in Recommenders.
context-aware, emotion text mining, affective computing, recommender systems, machine learning.
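Matching an active user's emotion profile against movie emotion embeddings to produce a top-N list can be sketched as a cosine-similarity ranking. The vector layout (one weight per emotion class) and the movie names are illustrative assumptions, not the paper's data:

```python
import math

def top_n(user_emotions, movie_emotions, n=2):
    """Rank movies by cosine similarity between the user's emotion
    embedding and each movie's emotion embedding.
    movie_emotions: list of (title, embedding) pairs."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
    ranked = sorted(movie_emotions,
                    key=lambda m: cos(user_emotions, m[1]), reverse=True)
    return [title for title, _ in ranked[:n]]
```

Since this ranking needs only a movie's emotion embedding and a user emotion profile, not an interaction history, it can serve new users or items, which is how emotion matching could help with the cold start problem.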
Copyright © SEA 2020