1) IRAJ is moving to its next issue from 16 April 2015.
2) IRAJ Management has issued a thank-you note to all the editors for their constant endeavours towards the enhancement of IRAJ.
3) The Chief Editor of IRAJ has achieved UGC NET qualification.
VOLUME 14, ISSUE 1: 1 October 2018 to 31 December 2020
VOLUME 13, ISSUE 1: 1 April 2017 to 30 June 2017
VOLUME 12, ISSUE 1: 1 January 2017 to 31 March 2017
VOLUME 11, ISSUE 1: 16 October 2016 to 31 December 2016
VOLUME 9, ISSUE 1: 15 April 2016 to 14 June 2016
VOLUME 8, ISSUE 1: 15 January 2016 to 14 April 2016
VOLUME 7, ISSUE 1: 15 October 2015 to 14 January 2016
VOLUME 6, ISSUE 1: 15 July 2015 to 14 October 2015
VOLUME 5, ISSUE 1: 16 April 2015 to 15 July 2015
VOLUME 4, ISSUE 1: 16 January 2015 to 15 April 2015
VOLUME 2, ISSUE 1: 16 August 2014 to 15 November 2014
VOLUME 1, ISSUE 1: 15 June 2014 to 15 August 2014
Hadoop is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. In general, a map task is divided into map and combine phases, while a reduce task is divided into copy, sort and reduce phases. Since Hadoop 0.20, reduce tasks can start when only some map tasks have completed, which allows reduce tasks to copy map outputs earlier, as they become available, and hence mitigates network congestion. However, no reduce task can step into the sort phase until all map tasks complete, because each reduce task must finish copying outputs from all the map tasks to prepare the input for the sort phase. This research proposes an innovative approach to address this performance issue in MapReduce-based Hadoop implementations, applying an additional sorting mechanism with an incremental clustering approach.
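The phase barrier described above can be illustrated with a minimal, self-contained sketch (not Hadoop itself): map outputs may be copied one at a time as each map task finishes, but the sort/group step has to wait until every map output has been collected. All function names here are illustrative.

```python
# Minimal word-count sketch of the map/combine and copy/sort/reduce phases.
from collections import defaultdict

def map_task(chunk):
    # map + combine: count words within one input split
    counts = defaultdict(int)
    for word in chunk.split():
        counts[word] += 1
    return dict(counts)

def run_job(chunks):
    copied = []                      # copy phase: outputs pulled as maps finish
    for chunk in chunks:             # each completed map output is copied early
        copied.append(map_task(chunk))
    # sort phase: only possible once ALL map outputs are present
    merged = defaultdict(list)
    for output in copied:
        for word, n in output.items():
            merged[word].append(n)
    # reduce phase: aggregate the grouped values per key
    return {word: sum(ns) for word, ns in sorted(merged.items())}

result = run_job(["a b a", "b c"])   # {'a': 2, 'b': 2, 'c': 1}
```

The loop building `merged` is exactly the step that cannot begin until `copied` holds the output of every map task, which is the bottleneck the abstract targets.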
This work proposes a multi-agent system whose agents cooperate to detect intrusions. Some agents implement IDS models, others evaluate the predictions made by the first group, and a third kind of agent weighs the evaluators' suggestions to establish the overall IDS verdict. The dynamic weights involved in this adaptive evaluation, which combines several different intrusion detection models into a final suggestion, show better performance than the classical approach based on the average of the received predictions. The improvement has been measured with a promising metric that takes response costs into account. The recent introduction of decision-making techniques into intrusion detection reveals the need for formal, robust metrics that consider all the parameters involved in the task. Testing was carried out with real data instead of synthetic data. This is not an easy task, given the difficulty of experimenting on real networks and the doubtful validity of results based on simulated traffic. The adaptive behaviour of agents can be very useful in the intrusion detection field, due to the highly changing environment that is faced and the need for automated responses.
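A minimal sketch of the dynamic-weight idea, under assumed update rules (the abstract does not give the exact formula): each detector's weight grows or shrinks with its recent accuracy, and the final verdict is a weighted vote rather than a plain average.

```python
# Hedged sketch: adaptive weighting of several IDS model predictions.
# The multiplicative update rule and the 0.5 vote threshold are assumptions.
def update_weights(weights, predictions, truth, lr=0.1):
    # reward detectors that agreed with the observed outcome, punish the rest
    new = [w * (1 + lr if p == truth else 1 - lr)
           for w, p in zip(weights, predictions)]
    total = sum(new)
    return [w / total for w in new]      # renormalise so weights sum to 1

def weighted_verdict(weights, predictions):
    # predictions: 1 = attack, 0 = normal; verdict by weighted vote
    score = sum(w * p for w, p in zip(weights, predictions))
    return 1 if score >= 0.5 else 0

weights = [1 / 3, 1 / 3, 1 / 3]
history = [([1, 0, 1], 1), ([0, 0, 0], 0), ([1, 0, 1], 1)]
for preds, truth in history:
    weights = update_weights(weights, preds, truth)
# detector 1 was wrong twice, so its weight falls below the others'
```

Unlike a plain average, a consistently wrong detector loses influence over the final verdict as the weights adapt.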
The Internet is growing very fast, and a large amount of information is stored on it every minute. This increasing amount of information provides users with more options, but also makes it difficult to find the "right" or "interesting" information within the huge volume available. When a user accesses the Web, his usage details are logged on the servers. A Web access log contains a lot of information about how users explore the Web. Web usage mining discovers user preferences from this log and makes recommendations based on the extracted knowledge. Clustering is a pivotal building block in many data mining applications and in machine learning. In this work, two types of processing have been considered: 1) off-line (batch) processing and 2) online or incremental clustering. Incremental clustering requires the initial clusters to be decided in advance, i.e. they must pre-exist before processing. If the initial clusters are to be fixed, there are several ways this can be achieved.
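The incremental side can be sketched as follows, assuming a simple leader-style scheme (the abstract does not specify the exact algorithm): initial clusters are fixed in advance, and each arriving point joins the nearest existing centroid, which is then updated as a running mean.

```python
# Illustrative sketch of incremental clustering with pre-existing clusters.
# Distance measure and running-mean update are assumptions for illustration.
def assign_incremental(centroids, counts, point):
    # pick the nearest centroid by squared Euclidean distance
    dists = [sum((c - p) ** 2 for c, p in zip(cen, point)) for cen in centroids]
    k = dists.index(min(dists))
    # running-mean update for the chosen cluster's centroid
    counts[k] += 1
    centroids[k] = tuple(c + (p - c) / counts[k]
                         for c, p in zip(centroids[k], point))
    return k

centroids = [(0.0, 0.0), (10.0, 10.0)]   # initial clusters fixed in advance
counts = [1, 1]
labels = [assign_incremental(centroids, counts, p)
          for p in [(1, 1), (9, 9), (0, 2)]]   # -> [0, 1, 0]
```

Each point is processed once as it arrives, which is what distinguishes this from the off-line (batch) mode, where the whole log would be clustered in one pass.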
Preserving color information in a color image is a challenging task. Many widely used algorithms can enhance the contrast of a given color image, but they fail to preserve the color information in the processed image. In this work we propose an algorithm for contrast enhancement of a color image without significantly affecting its color information. The proposed method works in the HSV (Hue, Saturation, Value) color space: it decomposes the V channel of the input image into high- and low-frequency parts using the Discrete Cosine Transform, then applies Singular Value Decomposition for contrast enhancement. The results show that the proposed method enhances the contrast of a given color image without significantly affecting its color information.
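A minimal numpy sketch of the SVD step only (the DCT frequency split is omitted here): the V channel's largest singular value carries most of the global brightness/contrast energy, so amplifying it stretches the channel's range while hue and saturation stay untouched. The scaling rule below is an assumption, not necessarily the paper's exact formula.

```python
# Hedged sketch: SVD-based contrast stretch of an HSV V channel.
import numpy as np

def svd_contrast(v, gain=1.2):
    # decompose the V channel; s[0] dominates global intensity
    U, s, Vt = np.linalg.svd(v.astype(float), full_matrices=False)
    s = s.copy()
    s[0] *= gain                      # illustrative scaling of the top mode
    out = U @ np.diag(s) @ Vt
    return np.clip(out, 0.0, 255.0)   # keep values in the valid 8-bit range

v = np.array([[100.0, 120.0],
              [110.0, 130.0]])        # tiny low-contrast V channel
enhanced = svd_contrast(v)
```

Because only V is modified and H and S are carried through unchanged, this is the mechanism by which the method leaves color information largely intact.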
The process of grouping a set of physical or abstract objects into classes of similar objects is called clustering. Several techniques and algorithms are used for extracting hidden patterns from large data sets and finding the relationships between them. The main novelty of the Hierarchical Data Divisive Soft Clustering (H2DSC) algorithm is that it is quality driven: it dynamically evaluates a multi-dimensional quality measure of each cluster to drive the generation of the soft hierarchy. Specifically, it generates a hierarchy in which each node is split into a variable number of subnodes. Clusters at the same hierarchical level share a minimum quality value, and clusters at lower levels of the hierarchy have higher quality; in this way more specific clusters (lower-level clusters) have higher quality than more general clusters (upper-level clusters). Further, since the algorithm generates a soft partition, a document can belong to several sub-clusters with distinct membership degrees. The proposed algorithm is divisive, and it is based on a combination of a modified bisecting K-Means algorithm with a flat soft clustering algorithm used to partition each node.
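The divisive backbone can be illustrated with one bisecting step on 1-D data: a node's points are split into two sub-nodes by a plain 2-means pass. This is only a sketch of the bisecting idea; H2DSC's soft memberships, quality measure, and variable fan-out are omitted, and the extreme-point seeding is an assumption.

```python
# Hedged sketch: one bisecting K-Means split (the core divisive step).
def bisect(points, iters=5):
    # seed the two centroids at the extremes of the node's points
    a, b = min(points), max(points)
    for _ in range(iters):
        left = [p for p in points if abs(p - a) <= abs(p - b)]
        right = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(left) / len(left)     # recompute centroids
        b = sum(right) / len(right)
    return left, right

lo, hi = bisect([1, 2, 3, 10, 11, 12])   # -> ([1, 2, 3], [10, 11, 12])
```

In the full algorithm, each resulting sub-node would be scored by the quality measure and recursively split again while the quality threshold for its level is met.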
Classification rules are among the most sought after by users, since they represent a highly comprehensible form of knowledge. The rules are evaluated based on objective and subjective metrics. The user should be able to specify the properties of the rules, and the rules discovered should have some of these properties to render them useful. These properties may be conflicting; hence the discovery of rules with specific properties is a multi-objective optimization problem. The Cultural Algorithm (CA), which derives from social structures, contains evolutionary systems and agents, and uses five knowledge sources (KSs) for the evolution process, is well suited to solving multi-objective optimization problems. In the current study, a cultural algorithm for classification rule mining is proposed for multi-objective optimization of rules. The social knowledge created by the individuals in the CA is to be converted into actionable social knowledge, or collective social intelligence, to be applied in various applications, such as an intrusion detection and prevention system, to solve the computer security problem.
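The multi-objective view can be made concrete with a Pareto-dominance check over rule metrics. The two objectives used here (support and confidence) are common examples, not necessarily the ones in this study, and the rule names are illustrative.

```python
# Hedged sketch: Pareto-optimal rule selection under conflicting objectives.
def dominates(a, b):
    # rule a dominates b if it is no worse on every objective
    # and strictly better on at least one
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(rules):
    # rules: {name: (support, confidence)}; keep only non-dominated rules
    return {name for name, score in rules.items()
            if not any(dominates(other, score)
                       for o, other in rules.items() if o != name)}

rules = {"r1": (0.6, 0.7),   # high support, moderate confidence
         "r2": (0.4, 0.9),   # lower support, higher confidence
         "r3": (0.3, 0.6)}   # dominated by both r1 and r2
front = pareto_front(rules)  # -> {"r1", "r2"}
```

Because r1 and r2 trade support against confidence, neither dominates the other, which is exactly why a single-score ranking is inadequate and a multi-objective search such as the CA is proposed.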
The increasing adoption of information systems in healthcare has led to a scenario where patient information security is increasingly regarded as a critical issue. Allowing patient information to be put in jeopardy may lead to irreparable physical, moral and social damage to the patient, potentially shaking the credibility of the healthcare institution. This demands the adoption of security mechanisms to assure information integrity and authenticity. Before digital medical images in computer-based patient record systems can be distributed online, it is necessary, for confidentiality reasons, to eliminate patient identification information that appears in the images. Structured descriptions attached to medical image series conforming to the DICOM standard make it possible to fit collections of existing digitized images into an educational and research framework. Progressive transmission of medical images over the Internet has emerged as a promising protocol for teleradiology applications. The major issue that arises in teleradiology is the difficulty of transmitting large volumes of medical data over relatively low bandwidth. With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios at user-specified distortion rates have become necessary. Neural network compression techniques based on Dynamic Associative Neural Networks (DANN) provide high compression ratios with user-specified distortion rates in an adaptive compression system well suited to parallel implementations. Improvements to DANN-based training, through the use of a variance classifier controlling a bank of neural networks, speed convergence and allow the use of higher compression ratios for "simple" patterns.
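The variance-classifier idea can be sketched in a few lines: image blocks are routed by their pixel variance, so "simple" (low-variance) blocks can be compressed more aggressively than detailed ones. The threshold and the two ratios below are assumptions for illustration, not values from the DANN work itself.

```python
# Hedged sketch: variance-based routing of image blocks to compression ratios.
def block_variance(block):
    # population variance of the block's pixel values
    n = len(block)
    mean = sum(block) / n
    return sum((x - mean) ** 2 for x in block) / n

def choose_ratio(block, threshold=50.0, simple_ratio=32, detail_ratio=8):
    # low variance -> "simple" pattern -> higher compression ratio is safe
    return simple_ratio if block_variance(block) < threshold else detail_ratio

flat = [100, 101, 100, 99]    # nearly uniform block: compress hard
edge = [0, 255, 0, 255]       # high-contrast block: compress gently
```

In the full system each class of block would be handled by its own network in the bank; here the classifier alone shows how "simple" patterns earn higher compression ratios.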